Proxmox Virtual Environment (Proxmox VE) is a server virtualization platform with built-in clustering. It lets you run multiple virtual machines with different operating systems on a single physical server and manage an organization's entire IT infrastructure from one place. This approach saves hardware resources, makes it easier to protect critical services, and speeds up their recovery after failures. Clusters are typically built for one or more of the following purposes:
High availability: keeping services running when individual nodes fail.
Load balancing: spreading the workload evenly across nodes.
High performance: pooling compute power for demanding tasks.
Distributed computing: splitting a single large job across many machines.
A cluster is a group of servers connected by a high-speed communication channel. The system administrator sees them as a single entity that provides the functionalities listed above. Each scenario places its own requirements on the servers that form a resilient cluster, so hardware should be selected with the intended role in mind.
For example, a high-performance cluster needs nodes with fast floating-point throughput and low network latency, while a high-availability cluster depends above all on the reliability of each individual server. Such virtualization clusters differ from grid systems, which also combine servers but have a heterogeneous structure and impose no availability requirements on individual nodes.
Proxmox VE's popularity stems from its open-source nature: the software can be used as-is or modified to meet specific technical requirements. The platform is built on Debian Linux with a customized kernel (derived from the Ubuntu kernel), so finding a developer with relevant experience is usually not a problem. Even in its stock form, it offers powerful functionality.
Things to keep in mind when using Proxmox VE (a quick way to verify several of these points is sketched after the list):
It offers two types of virtualization: full virtualization with KVM and containers with LXC.
You can add up to 32 physical machines per cluster.
It's preferable to use the same release of Proxmox on all active hosts.
Proxmox HA (High-Availability) mode requires at least three nodes in the cluster.
Nodes communicate over UDP ports 5404 and 5405 (corosync) and TCP port 22 (SSH).
Network latency between nodes should not exceed 2 milliseconds.
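A quick sanity check of these prerequisites can be run from a shell on any node; the address 192.168.88.12 below is a placeholder for a peer node:

# Confirm every node runs the same Proxmox VE release:
pveversion

# Measure round-trip latency to a peer (should stay well under 2 ms):
ping -c 10 192.168.88.12

# Check that the corosync and SSH ports are reachable (netcat required):
nc -zvu 192.168.88.12 5404 5405
nc -zv 192.168.88.12 22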
We recommend downloading the distribution only from the official website, where it is available as an ISO file for direct or torrent download. Other sources may offer modified builds with extra functionality, but these are harder to maintain and are not guaranteed to be safe. Installation takes a few steps:
Connect to the server and mount the previously downloaded image.
Select the disk for software installation from the dropdown in the GUI.
The Options button exposes additional partitioning parameters (filesystem choice and volume sizes); region and time zone are configured on the following screens.
Set the root password and the email address of the Proxmox cluster's system administrator.
Enter the full domain name, server IP address, subnet mask, primary gateway, and DNS server address in the next window.
Reboot the server by clicking the Reboot button to apply the changes.
You can now access the system's web interface at https://IP_address:8006. Before using it, update the system to the latest packages and install protective software such as Fail2Ban to guard against brute-force attacks; a minimal sketch follows. Subsequent steps include setting up storage and deploying virtual machines and guest operating systems.
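A minimal post-install sketch, assuming the APT repositories are already configured (on installs without a subscription, the pve-no-subscription repository is commonly enabled first):

# Bring the node up to date:
apt update && apt full-upgrade -y

# Install Fail2Ban to slow down brute-force attempts:
apt install -y fail2ban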
Before combining servers into a cluster that will host critical data, it's recommended to run the system in test mode. We'll do this with three identically configured servers (2 CPU cores and 2 GB of RAM each). After OS installation, each server initially operates in standalone mode.
Create a new Proxmox cluster (a CLI equivalent is shown after the steps):
Click Create Cluster in the Proxmox Cluster section.
Name the future cluster and select the appropriate network connection.
Click Create to generate a 2048-bit security key.
The message "TASK OK" confirms the process is complete.
Rechecking the status will show the Proxmox Node is now operating in cluster mode.
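The same steps can be performed from the shell on the first node; the cluster name test-cluster below is an example:

# Create the cluster:
pvecm create test-cluster

# Confirm the node now reports cluster membership and quorum:
pvecm status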
To join a cluster (a CLI equivalent is shown after the steps):
Open Join Information in the Cluster section.
Copy the information from the Join Information section, which includes the server address and its fingerprint.
Click Join Cluster on the node being added, paste the copied data, enter the root password of the first cluster node, and click Join.
Repeat this process to connect the remaining nodes, resulting in a fully operational cluster on three servers.
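From the shell, joining looks like this (192.168.88.11 stands in for the first node's address; you will be prompted for its root password):

# Run on each node that should join the cluster:
pvecm add 192.168.88.11

# Afterwards, verify that all three nodes are listed:
pvecm nodes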
Proxmox VE supports HA functions for both KVM virtual machines and LXC containers. The ha-manager utility detects errors and failures and moves resources from a failed host to a functional one, fully automating fault tolerance at the node level.
To achieve this, an NFS storage (in this example, at 192.168.88.18) must be set up.
Add NFS storage (a CLI equivalent follows the steps):
In the GUI, navigate to Datacenter – Storage – Add – NFS.
Enter the storage ID and server IP address.
Select the desired exported directory from the Export dropdown.
Specify the content types to store (disk images, containers, and so on) in the Content field.
Click Add to connect the storage to all active nodes.
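The equivalent pvesm call, with an assumed storage ID nfs-shared and export path /srv/nfs:

# Register the NFS export as storage for VM disks and container root filesystems:
pvesm add nfs nfs-shared --server 192.168.88.18 --export /srv/nfs --content images,rootdir

# Check that the storage is active on the node:
pvesm status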
Place an Ubuntu container under HA protection (the container itself is assumed to have been created beforehand; a CLI equivalent follows the steps):
Go to Datacenter – HA – Add.
Enter the container ID, the maximum number of restart attempts (Max Restart), and the maximum number of relocations between hosts (Max Relocate).
Click Add to notify active cluster hosts to control the container with the specified ID in case of failure.
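The same registration can be done with ha-manager; the container ID 105 and the retry limits below are example values:

# Register container 105 as an HA resource with retry limits:
ha-manager add ct:105 --max_restart 2 --max_relocate 2

# Inspect how the HA stack sees the cluster and the new resource:
ha-manager status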
To test the protection mechanism:
Power off Host #1 so that the cluster registers a node failure.
The interface shows the system's reaction: the affected virtual machine leaves the "freeze" state and resumes on another node within about 120 seconds.
If a second host is also taken offline, the remaining node loses quorum and HA stops acting. To test quorum enforcement, run pvecm expected 1 on the surviving node to lower the expected vote count; the HA mechanism can then bring the VM up on Host #3, as in the sketch below.
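A sketch of the quorum test on the surviving node (pvecm expected overrides the cluster's safety check, so use it only in testing or genuine emergencies):

# Watch quorum information as nodes go offline:
pvecm status

# On the last remaining node, lower the expected vote count to 1:
pvecm expected 1

# HA can now restart the protected VM/container on this node:
ha-manager status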
Reconnecting the first two servers restores the cluster to full strength, ready to handle further failures. The resources stay on the third node until they are migrated back manually.
This example illustrates Proxmox VE's high-availability mechanism for both KVM virtual machines and LXC containers. Once activated, the system operates automatically and can be managed from any internet-connected computer. The main requirement is sufficient machine power, since high-speed network connectivity is usually already available in data centers.