Server Hardening

Hostman Team
19.03.2025

Server hardening is the process of improving security by reducing vulnerabilities and protecting against potential threats.

There are several types of hardening:

  1. Physical: A method of protection based on the use of physical means, such as access control systems (ACS), video surveillance, safes, motion detectors, and protective enclosures.
  2. Hardware: Protection implemented at the hardware level. This includes trusted platform modules (TPM), hardware security modules (HSM, such as Yubikey), and biometric scanners (such as Apple Touch ID or Face ID). Hardware protection measures also include firmware integrity control mechanisms and hardware firewalls.
  3. Software: A type of hardening that utilizes software tools and security policies. This involves access restriction, encryption, data integrity control, monitoring anomalous activity, and other measures to secure digital information.

We provide these examples of physical and hardware hardening to give a full understanding of security mechanisms for different domains. In this article, we will focus on software protection aspects, as Hostman has already ensured hardware and physical security.

Most attacks are financially motivated, since carrying them out requires real expertise and a significant time investment. It is therefore important to understand clearly what you are protecting and what losses an attack could cause. Perhaps you need continuous high availability for a public resource, such as a package mirror or container image registry, and that is what you are protecting. There can be many variations. First, you need to create a threat model, which will consist of the following points:

  • Value: Personal and public data, logs, equipment, infrastructure.
  • Possible Threats: Infrastructure compromise, extortion, system outages.
  • Potential Attackers: Hacktivists, insider threats, competitors, hackers.
  • Attack Methods: Physical access, malicious devices, software hacks, phishing/vishing, supply chain attacks.
  • Protection Measures: Periodic software updates, encryption, access control, monitoring, hardening—what we will focus on in this article.

Creating a threat model is a non-trivial but crucial task because it defines the overall “flow” for cybersecurity efforts. After you create the threat model, you might need to perform revisions and clarifications depending on changes in business processes or other related parameters.

While creating the threat model, you can use STRIDE, a methodology for categorizing threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), and DREAD, a risk assessment model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability). For a more formalized approach, you can also refer to ISO/IEC 27005 or NIST 800-30 standards.

There will always be risks that can threaten both large companies and individual users who recently ordered a server to host a simple web application. The losses and criticality may vary, but from a technical perspective, the most common threats are:

  • DoS/DDoS: Denial of service or infrastructure failure, resulting in financial and/or reputational losses.
  • Supply Chain Attack: For example, infecting an artifact repository such as a container registry (JFrog Artifactory, Sonatype Nexus).
  • Full System Compromise: Includes establishing a foothold and moving laterally within the infrastructure.
  • Using your server as a launchpad for complex technological attacks on other resources. If this leads to serious consequences, you will likely spend many hours in court and incur significant financial costs.
  • Gaining advantages by modifying system resources, bypassing authentication, or altering the logic of the entire application. This can lead to reputational and/or financial losses.

Some of these attacks can be cut off early or significantly complicated for potential attackers if the server is properly configured.

Hardening is not a one-time procedure; it is an ongoing process that requires continuous monitoring and adaptation to new threats.

The main goal of this article is to equip readers with practical server hardening techniques, focusing on the most common and relevant case: protecting an individual server.

After ordering a server, we would normally perform the initial setup. This is typically done by system administrators or DevOps specialists. In larger organizations, other technical experts (SecOps, NetOps, or simply Ops) may get involved, but in smaller setups, the same person who writes the code usually handles these tasks. This is when the most interesting misconfigurations can arise. Some people configure manually: creating users, groups, setting network configurations, installing the required software; others write and reuse playbooks—automated scripts.

In this article, we will go over the following server hardening checklist:

  1. Countering port scanning
  2. Configuring the Nginx web server
  3. Protecting remote connections via SSH
  4. Setting up Port Knocking
  5. Configuring Linux kernel parameters
  6. Hardening container environments

If you later require automation, you can easily write your own playbook, as you will already know whether specific security configurations are necessary.

Countering Port Scanning

Attackers of all kinds, from botnets to APT (Advanced Persistent Threat) groups, use port scanners and internet-facing device search engines (such as shodan.io, search.censys.io, zoomeye.ai, etc.) to find interesting hosts for further exploitation and extortion.

One popular network scanner is Nmap. It identifies "live" hosts on a network and the services running on them using a variety of scanning methods. Nmap also includes the Nmap Scripting Engine (NSE), which ships with ready-made scripts and lets you add custom ones.

To scan resources using Nmap, an attacker would execute a command like:

nmap -sC -sV -p- -vv --min-rate 10000 $IP

Where:

  • $IP is the IP address or range of IP addresses to scan.
  • -sC enables the script engine.
  • -sV detects service versions.
  • -vv enables very verbose output for more detailed results.
  • --min-rate 10000 sets the minimum packet rate; here an aggressive 10,000 packets per second is selected. Scan timing can also be adjusted with the -T templates (Paranoid, Sneaky, Polite, Normal, Aggressive, Insane).

An example scan result is shown below. From this output, we can see that three services are running:

  • SSH on port 22
  • Web service on port 80
  • Web service on port 8080

[Image: example Nmap scan output]

The tool also provides software versions and more detailed information, including HTTP status codes, port status (in this case, "open"), and TTL values, which help to determine if the service is in a container or if there is additional routing that changes the TTL.

Thus, an attacker can use a port scanner or search engine results to find your resource and attempt to attack based on the gathered information.

To prevent this, we need to break the attacker's pattern and confuse them. Specifically, we can make it so that they cannot identify which port is open and what service is running on it. This can be achieved by opening all ports: 2^16 - 1 = 65535. By "opening," we mean configuring incoming connections so that all connection attempts to TCP ports are redirected to port 4444, on which the portspoof utility dynamically responds with random signatures of various services from the Nmap fingerprint database.

To implement this, install the portspoof utility. Clone the appropriate repository with the source code and build it:

git clone https://github.com/drk1wi/portspoof.git
cd portspoof
./configure && make && sudo make install

Note that you may need to install dependencies for building the utility:

sudo apt install gcc g++ make

Grant execution rights and run the automatic configuration script with the specified network interface. This script will configure the firewall correctly and set up portspoof to work with signatures that mask ports under other services.

sudo chmod +x $HOME/portspoof/system_files/init.d/portspoof.sh
sudo $HOME/portspoof/system_files/init.d/portspoof.sh start $NETWORK_INTERFACE

Where $NETWORK_INTERFACE is your network interface (in our case, eth0).

To stop the utility, run the command:

sudo $HOME/portspoof/system_files/init.d/portspoof.sh stop eth0

Repeating the scan with Nmap, or any similar tool that relies on checking the banners of running services, will now look like this:

[Image: scan results with portspoof enabled. Image source: drk1wi.github.io]

There is another trick that is less effective, since it does not produce believable service banners, but it avoids the need for additional utilities like portspoof.

First, add a firewall exception so that you can still reach the server via SSH (port 22) after the change; add similar RETURN rules for any other legitimate services you expose so their operation is not disrupted.

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j RETURN

Then, initiate the process of redirecting all TCP traffic to port 5555:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -m conntrack --ctstate NEW -j REDIRECT --to-ports 5555

Now, create a process that generates pseudo-random noise on port 5555 using NetCat:

nc -lp 5555 < /dev/urandom
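Note that many netcat builds exit as soon as the first client disconnects, leaving the decoy port closed afterwards. A minimal sketch to keep the listener running, assuming the same nc syntax as above:

# Restart the decoy listener whenever a client disconnects
while true; do
    nc -lp 5555 < /dev/urandom
done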

These techniques significantly slow down a scan because the scanner needs far more time to analyze each of the 65,535 "services." The first layer of server protection is now in place.

Configuring the Nginx Web Server

Nmap alone is not sufficient for a comprehensive analysis of a web application. Besides alternative port scanners such as naabu from Project Discovery and rustscan, there are specialized active reconnaissance tools: for subdomain enumeration, directory brute-forcing, and HTTP parameter fuzzing (dirbuster, gobuster, ffuf); for identifying and exploiting vulnerabilities in popular CMS platforms (wpscan, joomscan); and for specific attack classes (sqlmap for SQL injection, tplmap for SSTI).

These scanners look for application endpoints by brute-forcing paths and by parsing HTML pages and linked JavaScript files. Over millions of requests, they compare each response against expected output to flag potential vulnerabilities and expose the service to exploitation.

To protect web applications from such scanners, we suggest configuring the web server. In this example, we’ll configure Nginx, as it is one of the most popular web servers.

In most configurations, Nginx proxies and exposes an application running on the server or within a cluster. This setup allows for rich configuration options.

To enhance security, we can add HTTP security headers and enable the lightweight yet strong ChaCha20 cipher for clients without hardware AES acceleration (such as some mobile phones). Additionally, rate limiting may be necessary to mitigate DoS and DDoS attacks.

HTTP headers like Server and X-Powered-By reveal information about the web server and the technologies used, which can help an attacker determine potential attack vectors. We need to remove these headers.

To do this, install the Nginx extras collection:

sudo apt install nginx-extras

Then, configure the Nginx settings in /etc/nginx/nginx.conf:

server_tokens off;
more_clear_headers Server;
more_clear_headers 'X-Powered-By';

Also, add headers to reduce the Cross-Site Scripting (XSS) attack surface:

add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header X-XSS-Protection "1; mode=block";

And protect against Clickjacking:

add_header X-Frame-Options "SAMEORIGIN";

You can slow down automated attacks by setting request rate limits from a single IP address. Do this only if you are confident it won't impact service availability or functionality.

A sample configuration might look like this:

http {
    limit_req_zone $binary_remote_addr zone=req_zone:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=req_zone burst=20 nodelay;
        }
    }
}

This configuration limits requests to 10 per second from a single IP, with a burst buffer of 20 requests.
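You can sanity-check the limit by firing a quick burst of requests from a single host; requests beyond the allowed rate and burst should return HTTP 503 (the default limit_req status). The URL below is a placeholder:

# Send 30 rapid requests and print only the response codes
for i in $(seq 1 30); do curl -s -o /dev/null -w "%{http_code}\n" https://example.com/api/; done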

To protect traffic from MITM (Man-in-the-Middle) attacks and ensure high performance, enable TLS 1.3 (keeping TLS 1.2 for compatibility) and configure strong ciphers:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256";
ssl_prefer_server_ciphers on;

You can also implement additional web application protection using a WAF (Web Application Firewall). Some free solutions include:

  • BunkerWeb — Lightweight, popular, and effective WAF.
  • ModSecurity — A powerful Nginx module with flexible rules.

To perform basic configuration of ModSecurity, you can install it like this:

sudo apt install libnginx-mod-security2

Then, enable ModSecurity in the Nginx configuration:

server {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsecurity.conf;
}
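A minimal /etc/nginx/modsecurity.conf might look like the sketch below. The stock configuration ships in detection-only mode, so the key change is switching the engine to blocking; the OWASP Core Rule Set include path is an assumption and varies by distribution:

# Switch from DetectionOnly to blocking mode
SecRuleEngine On

# Inspect request bodies as well as headers
SecRequestBodyAccess On

# Enable once the OWASP Core Rule Set is installed (path varies by distribution)
# Include /usr/share/modsecurity-crs/*.load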

Use the Security Headers online service to analyze your HTTP response headers and identify possible configuration errors.

When configuring any infrastructure components, it's important to follow best practices. For instance, to create secure Nginx configurations, you can use an online generator, which allows you to easily generate optimal base settings for Nginx, including ciphers, OCSP Stapling, logging, and other parameters.


Protecting Remote Connections via SSH

If your server is still protected only by a password, this is quite an insecure configuration. Even complex passwords can eventually be compromised, especially when outdated or vulnerable SSH versions are in use that allow unrestricted brute-force attempts, as in CVE-2020-1616. Below is a table showing how long it might take to crack a password based on its complexity.

[Image: password cracking time by complexity. Image source: security.org]

It’s recommended to disable password authentication and set up authentication using private and public keys.

  1. Generate an SSH key pair (public and private keys) on your workstation:

ssh-keygen -t ed25519 -C $EMAIL

Where $EMAIL is your email address, and -t ed25519 specifies the key type based on elliptic curve cryptography (using the Curve25519 curve). This provides high performance, compact key sizes (256 bits), and resistance to side-channel attacks.

  2. Copy the public key to the server.

Read your public key from the workstation and save it to the authorized_keys file on the server, located at $HOME/.ssh/authorized_keys (where $HOME is the home directory of the user on the server you are connecting to). You can manually add the key or use the ssh-copy-id utility, which will prompt for the password.

ssh-copy-id user@$IP

Alternatively, you can add the key directly through your Hostman panel. Go to Cloud servers, open the SSH Keys section, and click Add SSH key.


Enter your key and give it a name.


Once added, you can upload this key to a specific virtual machine or add it directly during server creation, in the Authorization section.


To further secure SSH connections, adjust the SSH server configuration file at /etc/ssh/sshd_config by applying the following settings:

  • PermitRootLogin no — Prevents login as the root user.
  • PermitEmptyPasswords no — Disallows the use of empty passwords.
  • X11Forwarding no — Disables forwarding of graphical applications.
  • AllowUsers $USERS — Defines a list of users allowed to log in via SSH. Separate usernames with spaces.
  • PasswordAuthentication no — Disables password authentication.
  • PubkeyAuthentication yes — Enables public and private key authentication.
  • HostbasedAuthentication no — Disables host-based authentication.
  • PermitUserEnvironment no — Disallows changing environment variables to limit exploitation through variables like LD_PRELOAD.
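Put together, the relevant fragment of /etc/ssh/sshd_config might look like this (the usernames are placeholders; replace them with your own):

PermitRootLogin no
PermitEmptyPasswords no
X11Forwarding no
# Replace with your own usernames
AllowUsers alice bob
PasswordAuthentication no
PubkeyAuthentication yes
HostbasedAuthentication no
PermitUserEnvironment no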

After adjusting the configuration file, restart the OpenSSH daemon:

sudo systemctl restart sshd

Finally, after making these changes, you can run a security audit with a tool such as ssh-audit or an online SSH configuration checker. This will help ensure your configuration is secure and appropriately hardened.
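For example, ssh-audit can be installed with pip and pointed at your server (the hostname below is a placeholder):

pip install ssh-audit
ssh-audit your-server.example.com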

Configuring Port Knocking

SSH is a relatively secure protocol, and its most common implementation, OpenSSH, is developed by the OpenBSD team, which prides itself on creating an OS focused on security and data integrity. However, vulnerabilities occasionally surface even in such widely used and serious software.

Some of these vulnerabilities allow attackers to perform user enumeration. Although such issues are typically patched promptly, recent critical vulnerabilities like regreSSHion have still allowed Remote Code Execution (RCE). While that particular exploit requires special conditions, it highlights the importance of protecting your server's data.

One way to further secure SSH is to hide the SSH port from unnecessary visibility. Changing the SSH port seems pointless because, after the first scan by an attacker, they will quickly detect the new port. A more effective strategy is to use Port Knocking, a method of security where a "key" (port knocking sequence) is used to open the port for a short period, allowing authentication.

  1. Install knockd using your package manager:

sudo apt install knockd -y

  2. Configure knockd by editing the /etc/knockd.conf file to set the port knocking sequence and the corresponding actions. For example:

[options]
    UseSyslog

[openSSH]
    sequence = 7000,8000,9000
    seq_timeout = 5
    command = /usr/sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags = syn

[closeSSH]
    sequence = 9000,8000,7000
    seq_timeout = 5
    command = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags = syn
Where:

  • sequence: The port sequence that needs to be "knocked" (accessed) in the correct order.
  • seq_timeout: The maximum time allowed to send the sequence (in seconds).
  • command: The command to be executed once the sequence is received correctly. It typically opens or closes the SSH port (or another service).
  • %IP%: The client IP address that sent the sequence (the one "knocking").
  • tcpflags: The SYN flag is used to filter out other types of packets.

  3. Start and enable knockd to run at boot:

sudo systemctl enable --now knockd

  4. Use knock or nmap to send the correct port knocking sequence.

Example with nmap (sending one knock per port so the sequence order is preserved):

for p in 7000 8000 9000; do nmap -Pn --max-retries 0 -p $p $IP; done

Example command with knock:

knock $IP 7000 8000 9000

Where $IP is the IP address of the server you're trying to connect to.

If everything is configured correctly, once the correct sequence of port knocks is received, the SSH port (port 22) will temporarily open. At this point, you can proceed with the standard SSH authentication process.
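For day-to-day use, you can wrap the knock and the SSH login into a small shell function; this is a sketch assuming the knock client and the 7000, 8000, 9000 sequence from the example above:

# Add to ~/.bashrc. Usage: knock_ssh user@203.0.113.10
knock_ssh() {
    local host="${1#*@}"    # strip the "user@" prefix to get the host
    knock "$host" 7000 8000 9000
    sleep 1                 # give knockd a moment to open the port
    ssh "$1"
}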

This technique isn't limited to just SSH; you can configure port knocking for other services if needed (e.g., HTTP, FTP, or any custom service).

Port knocking adds an extra layer of security by obscuring the SSH service from the general public and only allowing access to authorized clients who know the correct sequence.

Configuring Linux Kernel Parameters

In today's insecure world, one common type of attack is Living off the Land (LOTL), in which legitimate tools and resources already present on the system are used for exploitation and privilege escalation. One capability attackers frequently leverage is reading kernel system events and message buffers. This technique is even used by advanced persistent threats (APTs).

It is important to secure your Linux kernel configurations to mitigate the risk of such exploits. Below are some recommended settings that can enhance the security of your system.

To enable ASLR (Address Space Layout Randomization), set these parameters:

  • kernel.randomize_va_space = 2: Randomizes the memory layout of applications so attackers cannot predict where specific code and data will be placed.
  • kernel.kptr_restrict = 2: Prevents user-space applications from obtaining kernel pointer values.

Also, disable system request (SysRq) functionality:

kernel.sysrq = 0

And restrict access to kernel message buffer (dmesg):

kernel.dmesg_restrict = 1
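These values can be applied at runtime with sysctl and persisted across reboots in a drop-in file (the file name below is just an example):

# Apply immediately
sudo sysctl -w kernel.randomize_va_space=2
sudo sysctl -w kernel.kptr_restrict=2
sudo sysctl -w kernel.sysrq=0
sudo sysctl -w kernel.dmesg_restrict=1

# Persist across reboots
sudo tee /etc/sysctl.d/99-hardening.conf > /dev/null <<'EOF'
kernel.randomize_va_space = 2
kernel.kptr_restrict = 2
kernel.sysrq = 0
kernel.dmesg_restrict = 1
EOF

# Reload all sysctl configuration files
sudo sysctl --system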

With this configuration, an attacker will not know a program's memory addresses and will find it much harder to target important processes for exploitation. They will also be unable to read the kernel message buffer (dmesg) or send debugging requests (SysRq), which further complicates their interaction with the system.

Hardening Container Environments

In modern architectures, container environments are an essential part of the infrastructure, offering significant advantages for developers, DevOps engineers, and system administrators. However, securing these environments is crucial to protect against potential threats and ensure the integrity of your systems.

To protect container environments, it's essential to adopt secure development practices and integrate DevSecOps alongside existing DevOps methodologies, building resilient patterns and strong security habits across the team.

Use minimal images, such as Google Distroless, and Software Composition Analysis (SCA) tools to check the security of your images.
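As an illustration of the minimal-image approach, a multi-stage build can compile the application in a full toolchain image and ship only the binary on a distroless base. This is a sketch for a Go service; the image tags and paths are assumptions:

# Build stage: full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: no shell, no package manager, minimal attack surface
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot:nonroot
ENTRYPOINT ["/app"]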

You can use the following methods to analyze the security of an image.

  1. Docker Scout and Docker SBOM for generating a list of artifacts that make up an image.

Install Docker Scout and Docker SBOM as plugins for Docker. 

Create a directory for Docker plugins (if it doesn't exist):

mkdir -pv $HOME/.docker/cli-plugins

Install Docker Scout:

curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --

Install Docker SBOM:

curl -sSfL https://raw.githubusercontent.com/docker/sbom-cli-plugin/main/install.sh | sh -s --

To check for vulnerabilities in an image using Docker Scout:

docker scout cves gradle


To generate an SBOM using Docker SBOM (which internally uses Syft):

docker sbom $IMAGE_NAME


$IMAGE_NAME is the name of the container image you wish to analyze.

To save the SBOM in JSON format for further analysis:

docker sbom alpine:latest --format syft-json --output sbom.txt

sbom.txt will be the file containing the generated SBOM.
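The JSON output can then be post-processed with standard tools. For instance, assuming the syft-json layout with a top-level artifacts array, you can list package names and versions:

jq -r '.artifacts[] | "\(.name) \(.version)"' sbom.txt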

  2. Container Scanning with Trivy

Trivy is a powerful security scanner for container images. It helps identify vulnerabilities and misconfigurations.

Install Trivy using the following script:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin v0.59.1

Run a security scan for a container image:

trivy image $IMAGE_NAME

$IMAGE_NAME is the name of the image you want to analyze.

For detailed output in JSON format, use:

trivy -q image --ignore-unfixed --format json --list-all-pkgs $IMAGE_NAME
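In a CI pipeline, you can also make the scan fail the build when serious issues are found:

# Exit with a non-zero code if HIGH or CRITICAL vulnerabilities are present
trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE_NAME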

Even with the minimal practices listed in this section, you can ensure a fairly decent level of container security.

Conclusion

Using the techniques outlined in this article, you can significantly complicate or even prevent an attack by increasing the uncertainty an attacker faces. Keep in mind, however, that this should be balanced against system usability to avoid creating unnecessary difficulties for legitimate users.
