Proxmox Backup Server (PBS): Integration with Proxmox VE and Basic Operations

Hostman Team
Technical writer
Servers
21.11.2024
Reading time: 9 min

Proxmox Backup Server is a Debian-based solution that makes backup simple. With it, you can back up virtual machines, containers, and the contents of physical hosts. PBS is installed on bare metal. All the necessary tools are bundled in a single distribution.

Proxmox Backup Server is optimized for the Proxmox Virtual Environment platform. With this combination, you can:

  • Safely back up and replicate data.
  • Manage backups through both a graphical interface and command line.

Proxmox Backup Server is free software.

Key Features

Data loss or corruption due to deletion, ransomware, or other dangers can occur at any time. Therefore, regular backups of critical data are essential.

Proxmox Backup Server is built for exactly this. Its backups take up minimal space, allow for fast recovery, and save working time through simplified management.

User Role and Group Permissions

Proxmox Backup protects data from unauthorized access. A range of access control options ensures that users are limited to only the level of access they need. For example, marketers don’t need access to accounting reports, and accountants don’t need to see backups of the main product code.

For convenience, you can use several authentication domains: OpenID Connect, Linux PAM, or a separate authentication server. The administrator precisely defines what each user is allowed to do and what is prohibited.

Easy Management

PBS comes with a graphical interface through which the administrator manages the server. For advanced users who are familiar with the Unix shell, Proxmox provides a command-line interface for performing specialized or highly complex tasks.

Additionally, Proxmox Backup Server provides a RESTful API. The main data format is JSON, and the entire API is formally defined by a schema. This ensures fast and easy integration with third-party management tools.
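For instance, you can query the API directly over HTTPS. A minimal sketch, assuming a PBS host at <ip-or-dns-name>; the -k flag skips certificate verification and is acceptable only with the default self-signed certificate in a test setup:

curl -k https://<ip-or-dns-name>:8007/api2/json/version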

Reliable Encryption

It’s not just important to have access to backups; you also need confidence that the information has not been compromised. To provide that confidence, PBS can encrypt backups on the client side, before data ever leaves the machine being backed up. This keeps backups secure even on less-trusted hosts, such as rented servers: no one except the key owner can decrypt and read the stored information.
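A minimal sketch of creating an encryption key and using it for a backup with proxmox-backup-client; the repository string, user, datastore, and paths are illustrative assumptions:

proxmox-backup-client key create ./my-encryption-key.json
proxmox-backup-client backup etc.pxar:/etc --keyfile ./my-encryption-key.json --repository user1@pbs@<ip-or-dns-name>:store2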

Granular Recovery

Why restore all data when you can restore only what’s needed? To reduce overhead, Proxmox Backup Server comes with a snapshot catalog for navigation. You can quickly explore the contents of an archive and instantly recover individual objects.

System Requirements

  • CPU: A 64-bit processor based on AMD or Intel with at least 4 cores.
  • Memory: At least 4 GB for the system, file system cache, and daemons. It is recommended to add at least 1 GB of memory for each terabyte of disk space.
  • Storage: Requires at least 32 GB of free space. The documentation suggests using hardware RAID. It is recommended to use solid-state drives (SSDs) for backup storage.
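For example, under this guideline a server hosting an 8 TB backup datastore would need roughly 4 GB + 8 × 1 GB = 12 GB of RAM.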

Server Installation

To store backups, you need a server. You will install Proxmox Backup Server on it. You can manage this setup through either a graphical interface or the command line, depending on what suits you best.

The easiest way to install the backup system on the server is by using a disk image (ISO file). This distribution includes all the necessary components for full functionality:

  • Installation wizard
  • Operating system with all dependencies
  • Proxmox Linux kernel with ZFS support
  • Tools to manage backups and other resources
  • Management interface

Installation from the disk is very simple. If you have ever installed an operating system, you will have no trouble. The installation wizard will help partition the disk and configure basic settings like time zone, language, and network for internet access. During the installation process, you will add all the necessary packages that convert a regular Debian system into one for managing backups.

PBS takes over the entire server: during installation, all existing data on the target disk will be deleted. You end up with a server dedicated to a single task: managing backups.

Setting up a separate server is also a resilience consideration: you retain access to your backups even if other parts of the infrastructure stop working.

Installation on Debian

Suppose you already have a server with Debian installed. In this case, installing Proxmox can be done through a different scenario. There’s no need to reinstall the OS—just add the missing packages. They will integrate seamlessly on top of the standard setup.
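The PBS packages live in Proxmox’s own APT repository, which has to be configured first. A minimal sketch, assuming Debian 12 (bookworm); adjust the codename to your release:

wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list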

With the repository configured, run the following commands in the Debian command line:

apt-get update
apt-get install proxmox-backup-server

This will install the packages in a minimal configuration. If you want to get the same set as when using the installer, run:

apt-get update
apt-get install proxmox-backup

This will install the packages in the full configuration, including the ZFS-supporting kernel and a set of useful tools. Essentially, this is the same as using the disk image.

After installation, you can immediately connect to the PBS web interface in a browser via HTTPS on port 8007, for example at https://<ip-or-dns-name>:8007.

You can also install the Proxmox Backup Client separately. To do so, configure the client APT repository and run these commands:

apt-get update
apt-get install proxmox-backup-client
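Once installed, the client can back up directories from any Linux host to a PBS datastore. A minimal sketch with illustrative names (user1@pbs and store2 are the user and datastore created later in this guide):

proxmox-backup-client backup root.pxar:/ --repository user1@pbs@<ip-or-dns-name>:store2
proxmox-backup-client list --repository user1@pbs@<ip-or-dns-name>:store2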

These are the standard installation recommendations. If you need a custom configuration, such as obtaining network settings via DHCP, refer to the documentation for further guidance.

Adding a Server to Proxmox VE

Before backing up the server, you need to perform a preliminary configuration.

Create a User

In Proxmox, configuration is done through an easy-to-use interface. Let’s create the first user:

  1. Open the Configuration tab — Access Control.
  2. Click Add.
  3. Add a new user.

For example, you can add a user user1@pbs. The @pbs suffix identifies the PBS authentication realm and is mandatory; if it’s omitted, an error message about incorrect credentials will appear.
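The same user can be created from the shell (Administration — Shell); a short sketch using the name from above:

proxmox-backup-manager user create user1@pbs
proxmox-backup-manager user list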

Create a Storage

The next step is to create datastores. These allow you to distribute data according to your own criteria: for example, you can keep incremental PostgreSQL backups in one place and backups of Ubuntu machines in another. To add a disk:

  1. Go to Administration — Storage / Disks.
  2. Select a disk and initialize it by clicking Initialize Disk with GPT.
  3. Go to Directory — Create: Directory and create a directory for storing data.
  4. Specify the name of the data storage and the absolute path to the directory. If you check Add as Datastore, the new data storage will be immediately connected as a datastore object.
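The equivalent datastore creation from the command line might look like this (the directory path is illustrative):

proxmox-backup-manager datastore create store2 /mnt/datastore/store2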

The storage configuration is now complete; all that remains is to assign access rights to the datastore. To do this:

  1. Click on the name of the created data storage, go to Permissions, and click Add — User Permission.
  2. Select the desired user and their role, then click Add to confirm.
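From the shell, the equivalent permission grant looks like this; DatastoreBackup is one of the built-in roles, and the path and user are the ones created above:

proxmox-backup-manager acl update /datastore/store2 DatastoreBackup --auth-id user1@pbs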

At this point, the preliminary setup is complete.

Save the Fingerprint

By default, PBS uses a self-signed SSL certificate. You must save the fingerprint to establish trusted connections between the client and the server in the future. Without it, you won’t be able to connect — this is one of the security mechanisms.

Go to Administration — Shell and capture the server's "fingerprint" with the command:

proxmox-backup-manager cert info | grep Fingerprint

This will return a string containing the unique fingerprint. You can later use it to establish a connection with the backup server.

Add a Server

You can add storage directly from the Proxmox VE web interface (Datacenter — Storage — Add) or manually via the console. Let’s explore the second option, as it provides more flexibility in configuration.

You need to define the new storage with the pbs type on your Proxmox VE node. In the following example, store2 is used as the storage name, and the server address is localhost. You are connecting as user1@pbs.

  1. Add the storage:

pvesm add pbs store2 --server localhost --datastore store2
  2. Set the username and password for access:

pvesm set store2 --username user1@pbs --password <secret>

If you don’t want to enter the password as plain text, you can pass the --password parameter without any arguments. This will prompt the program to ask for the password when you enter the command.

If your backup server uses a self-signed certificate, you need to add the certificate's fingerprint to the configuration. You already obtained the fingerprint earlier with the following command:

proxmox-backup-manager cert info | grep Fingerprint

To establish a trusted relationship with the backup server, add the fingerprint to the configuration:

pvesm set store2 --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe

After --fingerprint, paste the fingerprint you obtained.

  3. Check the status of the storage with:

pvesm status --storage store2

In the web interface, the new storage will now appear among the storages available for virtual machine and container backups, along with usage statistics. It’s now time to create your first backup.

Backup and Recovery

Suppose you have an LXC container running Ubuntu inside. To back it up:

  1. Open the Backup section.
  2. Select the desired Storage.
  3. Click Backup now.
  4. Choose the type of backup.
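The same backup can also be started from the PVE shell with vzdump; a brief sketch, where 101 is a hypothetical container ID:

vzdump 101 --storage store2 --mode snapshot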

On the PBS server, you can then view the information about the completed backup task.

To verify the backup’s functionality, delete the Ubuntu container and then perform a recovery.

  1. In the PVE web interface, go to Storage.
  2. Open the Content tab.
  3. Select the backup file.
  4. For recovery, choose the location and a new identifier (by default, it will be the same as when we created the backup), and set the read data limit. This will help avoid overloading the virtualization server’s input channel. Click Restore and start the container.

Thanks to the fast backup creation and recovery process in Proxmox, you can also easily migrate a virtual machine.

Backing up a virtual machine is no different from backing up a container. The recovery process is the same. You specify the desired backup and the location for deployment and decide whether to start the machine immediately after the procedure is completed.
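Both restores can also be scripted from the PVE shell. A sketch with hypothetical volume IDs; run pvesm list store2 first to see the exact names (note that pct restore takes the container ID before the archive, while qmrestore takes the archive before the VM ID):

pvesm list store2
pct restore 101 store2:backup/ct/101/2024-11-21T10:00:00Z
qmrestore store2:backup/vm/100/2024-11-21T10:00:00Z 100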

If you need only individual files rather than the entire backup, you can recover them through the PBS web interface.

Conclusion

By setting up backups with Proxmox, you can be confident that virtual machines and containers won’t be lost in case of a storage failure; you can restore them with minimal effort. All that is required is to set up a new host, connect the data storage, and start the recovery process.
