Server hardening is the process of improving security by reducing vulnerabilities and protecting against potential threats.
There are several types of hardening:
We provide these examples of physical and hardware hardening to give a full understanding of security mechanisms for different domains. In this article, we will focus on software protection aspects, as Hostman has already ensured hardware and physical security.
Most attacks are financially motivated: carrying them out requires high competence and significant time, so attackers expect a return on that investment. Therefore, it is important to clearly understand what you are protecting and what losses an attack could cause. Perhaps you need continuous high availability for a public resource, such as a package or container image mirror, and that is what you want to protect. There can be many variations. First, you need to create a threat model, which consists of the following points:
Creating a threat model is a non-trivial but crucial task because it defines the overall “flow” for cybersecurity efforts. After you create the threat model, you might need to perform revisions and clarifications depending on changes in business processes or other related parameters.
While creating the threat model, you can use STRIDE, a methodology for categorizing threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), and DREAD, a risk assessment model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability). For a more formalized approach, you can also refer to ISO/IEC 27005 or NIST 800-30 standards.
There will always be risks that can threaten both large companies and individual users who recently ordered a server to host a simple web application. The losses and criticality may vary, but from a technical perspective, the most common threats are:
Some of these attacks can be cut off early or significantly complicated for potential attackers if the server is properly configured.
Hardening is not a one-time procedure; it is an ongoing process that requires continuous monitoring and adaptation to new threats.
The main goal of this article is to equip readers with server hardening techniques. Hardening can apply to many targets, but here we will focus on the most relevant and practical case: protecting a server.
After ordering a server, we would normally perform the initial setup. This is typically done by system administrators or DevOps specialists. In larger organizations, other technical experts (SecOps, NetOps, or simply Ops) may get involved, but in smaller setups, the same person who writes the code usually handles these tasks. This is where the most interesting misconfigurations tend to appear. Some people configure everything manually: creating users and groups, setting up the network, installing the required software. Others write and reuse playbooks, i.e., automated configuration scripts.
In this article, we will go over the following server hardening checklist:
If you later require automation, you can easily write your own playbook, as you will already know whether specific security configurations are necessary.
Various types of attackers, from botnets to APT (Advanced Persistent Threat) groups, use port scanners and internet-wide device search engines (such as shodan.io, search.censys.io, zoomeye.ai, etc.) to find interesting hosts for further exploitation and extortion.
One popular network scanner is Nmap. It can determine "live" hosts in a network and the services running on them using a variety of scanning methods. Nmap also includes the Nmap Script Engine, which offers both out-of-the-box functionality and the ability to add custom scripts.
To scan resources using Nmap, an attacker would execute a command like:
nmap -sC -sV -p- -vv --min-rate 10000 $IP
Where:
- $IP is the IP address or range of IP addresses to scan.
- -sC enables the default scripts of the script engine.
- -sV detects service versions.
- -p- scans the entire port range (1-65535).
- -vv ("double verbose") enables detailed output.
- --min-rate 10000 sets the minimum number of packets sent per second; 10,000 is an aggressive setting. Scan timing can also be adjusted separately with the -T templates (Paranoid, Sneaky, Polite, Normal, Aggressive, Insane).
An example of a scan result is shown below. From this information, we can see that three services are running:
The tool also provides software versions and more detailed information, including HTTP status codes, port status (in this case, "open"), and TTL values, which help to determine if the service is in a container or if there is additional routing that changes the TTL.
Thus, an attacker can use a port scanner or search engine results to find your resource and attempt to attack based on the gathered information.
To prevent this, we need to break the attacker's pattern and confuse them. Specifically, we can make it so that they cannot identify which port is open and what service is running on it. This can be achieved by opening all ports: 2^16 - 1 = 65535. By "opening," we mean configuring incoming connections so that all connection attempts to TCP ports are redirected to port 4444, on which the portspoof utility dynamically responds with random signatures of various services from the Nmap fingerprint database.
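Under the hood this is a NAT redirect. A minimal sketch of the kind of rule involved, assuming the eth0 interface and portspoof's default port 4444 (the init script shown below sets this up for you):
# Redirect every incoming TCP port to 4444, where portspoof replies with fake service signatures.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 1:65535 -j REDIRECT --to-ports 4444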
To implement this, install the portspoof utility. Clone the repository with the source code and build it:
git clone https://github.com/drk1wi/portspoof.git
cd portspoof
./configure && make && sudo make install
Note that you may need to install dependencies for building the utility:
sudo apt install gcc g++ make
Grant execution rights and run the automatic configuration script with the specified network interface. This script will configure the firewall correctly and set up portspoof to work with signatures that mask ports under other services.
sudo chmod +x $HOME/portspoof/system_files/init.d/portspoof.sh
sudo $HOME/portspoof/system_files/init.d/portspoof.sh start $NETWORK_INTERFACE
Where $NETWORK_INTERFACE is your network interface (in our case, eth0).
To stop the utility, run the command:
sudo $HOME/portspoof/system_files/init.d/portspoof.sh stop eth0
Repeating the scan with Nmap (or any similar tool that identifies services by checking their banners) will now look like this:
Image source: drk1wi.github.io
There is another trick that, while less effective as it does not create believable service banners, allows you to avoid additional utilities like portspoof.
First, configure the firewall so that after the configuration, you can still access the server via SSH (port 22) and not disrupt the operation of existing legitimate services.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j RETURN
Then, initiate the process of redirecting all TCP traffic to port 5555:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -m conntrack --ctstate NEW -j REDIRECT --to-ports 5555
Now, create a process that generates pseudo-random noise on port 5555 using NetCat:
nc -lp 5555 < /dev/urandom
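Note that classic netcat exits once the first client disconnects, so depending on the variant installed you may want to wrap it in a loop (or a systemd unit) to keep the noise generator alive:
# Restart the listener every time a scanner closes the connection.
while true; do nc -lp 5555 < /dev/urandom; done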
These techniques significantly slow down the scan because the scanner will require much more time to analyze each of the 65,535 "services." Now, the primary task of securing the server is complete!
Nmap alone is not sufficient for a comprehensive analysis of a web application. Besides port-scanning alternatives like naabu from Project Discovery and rustscan, there are more specialized active reconnaissance tools: directory and subdomain brute-forcers (dirbuster, gobuster, ffuf), scanners for popular CMS platforms (wpscan, joomscan), and tools for exploiting specific vulnerability classes (sqlmap for SQL injections, tplmap for SSTI).
These scanners look for an application's endpoints by brute-forcing paths and by parsing HTML pages and linked JavaScript files. A single run can involve millions of requests, each response compared against expected patterns to identify endpoints and potential vulnerabilities to exploit.
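For context, a typical directory brute-force run with ffuf looks something like this (the target URL and wordlist path are placeholders):
# Hypothetical example: enumerate paths on a target, keeping only "interesting" status codes.
ffuf -u http://target.example/FUZZ -w /usr/share/wordlists/dirb/common.txt -mc 200,301,302,403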
To protect web applications from such scanners, we suggest configuring the web server. In this example, we’ll configure Nginx, as it is one of the most popular web servers.
In most configurations, Nginx proxies and exposes an application running on the server or within a cluster. This setup allows for rich configuration options.
To enhance security, we can add HTTP security headers and enable the lightweight yet strong ChaCha20 cipher for devices that lack hardware encryption support (such as many mobile phones). Additionally, rate limiting may be necessary to mitigate DoS and DDoS attacks.
HTTP headers like Server and X-Powered-By reveal information about the web server and the technologies in use, which can help an attacker determine potential attack vectors. We need to remove these headers.
To do this, install the Nginx extras collection:
sudo apt install nginx-extras
Then, adjust the Nginx settings in /etc/nginx/nginx.conf:
server_tokens off;
more_clear_headers Server;
more_clear_headers 'X-Powered-By';
Also, add headers that reduce the attack surface for Cross-Site Scripting (XSS):
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header X-XSS-Protection "1; mode=block";
And protect against Clickjacking:
add_header X-Frame-Options "SAMEORIGIN";
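After reloading Nginx, you can quickly confirm which headers are exposed or suppressed; a simple check with curl (assuming the site answers on localhost):
# Show only the headers we care about; Server and X-Powered-By should be gone,
# while the security headers should be present.
curl -sI http://localhost/ | grep -iE 'server|x-powered-by|content-security-policy|x-xss-protection|x-frame-options'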
You can slow down automated attacks by setting request rate limits from a single IP address. Do this only if you are confident it won't impact service availability or functionality.
A sample configuration might look like this:
http {
    limit_req_zone $binary_remote_addr zone=req_zone:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=req_zone burst=20 nodelay;
        }
    }
}
This configuration limits requests to 10 per second from a single IP, with a burst buffer of 20 requests.
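A quick way to see the limit in action is to hammer the protected location and count the response codes; a rough sketch, assuming the API is reachable on localhost and limit_req_status is left at its default of 503:
# Fire 50 requests, 10 at a time; once the burst of 20 is exhausted, the excess
# should be answered with 503 (Nginx's default limit_req_status).
seq 1 50 | xargs -P 10 -I{} curl -s -o /dev/null -w '%{http_code}\n' http://localhost/api/ | sort | uniq -c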
To protect traffic from MITM (Man-in-the-Middle) attacks and ensure high performance, enable TLS 1.3 and configure strong ciphers:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256";
ssl_prefer_server_ciphers on;
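To confirm that TLS 1.3 is actually negotiated, you can test the handshake from any machine with OpenSSL 1.1.1 or newer ($DOMAIN is a placeholder for your server's name):
# Force a TLS 1.3 handshake and print the negotiated protocol and cipher.
openssl s_client -connect $DOMAIN:443 -tls1_3 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'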
You can also implement additional web application protection using a WAF (Web Application Firewall). Some free solutions include:
To perform basic configuration of ModSecurity, you can install it like this:
sudo apt install libnginx-mod-security2
Then, enable ModSecurity in the Nginx configuration:
server {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsecurity.conf;
}
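Many ModSecurity packages ship in detection-only mode. A minimal sketch of switching the rules file referenced above into blocking mode and checking the result; behavior depends on your distribution and on whether a rule set such as the OWASP CRS is loaded:
# Switch the engine to blocking mode (assumes the file contains the stock "SecRuleEngine DetectionOnly" line).
sudo sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsecurity.conf
sudo nginx -t && sudo systemctl reload nginx
# With a rule set loaded, a crude injection probe should now come back as 403.
curl -s -o /dev/null -w '%{http_code}\n' "http://localhost/?q=<script>alert(1)</script>"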
Use the Security Headers online service to analyze your HTTP headers and identify possible configuration errors.
When configuring any infrastructure components, it's important to follow best practices. For instance, to create secure Nginx configurations, you can use an online generator, which allows you to easily generate optimal base settings for Nginx, including ciphers, OCSP Stapling, logging, and other parameters.
If your server is still protected only by a password, this configuration is quite insecure. Even complex passwords can eventually be compromised, especially when outdated or vulnerable versions of SSH are in use that allow unrestricted brute-force attempts, such as in CVE-2020-1616. Below is a table showing how long it might take to crack a password based on its complexity.
Image source: security.org
It’s recommended to disable password authentication and set up authentication using private and public keys.
Generate an SSH key pair (public and private keys) on your workstation:
ssh-keygen -t ed25519 -C $EMAIL
Where $EMAIL is your email address, and -t ed25519 specifies the key type based on elliptic curve cryptography (the Curve25519 curve). This provides high performance, compact keys (256 bits), and resistance to side-channel attacks.
Copy the public key to the server.
Read your public key on the workstation and append it to the authorized_keys file on the server, located at $HOME/.ssh/authorized_keys (where $HOME is the home directory of the user you are connecting as). You can add the key manually or use the ssh-copy-id utility, which will prompt for the password:
ssh-copy-id user@$IP
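If you prefer to add the key manually, a one-liner like this appends it over an existing password-authenticated session (assuming the key was generated with the default name id_ed25519):
# Append the public key and make sure the permissions are strict enough for sshd.
cat ~/.ssh/id_ed25519.pub | ssh user@$IP 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'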
Alternatively, you can add the key directly through your Hostman panel. Go to the Cloud servers → SSH Keys section and click Add SSH key.
Enter your key and give it a name.
Once added, you can upload this key to a specific virtual machine or add it directly during server creation in the 6. Authorization section.
To further secure SSH connections, adjust the SSH server configuration file at /etc/ssh/sshd_config by applying the following settings:
- PermitRootLogin no — Prevents login as the root user.
- PermitEmptyPasswords no — Disallows the use of empty passwords.
- X11Forwarding no — Disables forwarding of graphical applications.
- AllowUsers $USERS — Defines the list of users allowed to log in via SSH (separate usernames with spaces).
- PasswordAuthentication no — Disables password authentication.
- PubkeyAuthentication yes — Enables public and private key authentication.
- HostbasedAuthentication no — Disables host-based authentication.
- PermitUserEnvironment no — Disallows changing environment variables, limiting exploitation through variables like LD_PRELOAD.
After adjusting the configuration file, restart the OpenSSH daemon:
sudo systemctl restart sshd
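Before restarting (or whenever you edit sshd_config), it is worth validating the file so a typo does not lock you out; sshd has a built-in syntax check that prints nothing when the configuration is valid:
# Test the sshd_config syntax; a non-zero exit code means the file has errors.
sudo sshd -t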
Finally, after making these changes, you can conduct a security audit with a tool such as ssh-audit or an online SSH configuration checker. This will help ensure your configuration is secure and appropriately hardened.
SSH is a relatively secure protocol, and its most widespread implementation, OpenSSH, is developed by the OpenBSD team, which prides itself on creating an OS focused on security and data integrity. However, even in such widely used and mature software, vulnerabilities occasionally surface.
Some of these vulnerabilities allow attackers to perform user enumeration. Although these issues are typically patched promptly, it doesn't eliminate the fact that recent critical vulnerabilities, like regreSSHion, have allowed for Remote Code Execution (RCE). Although this particular exploit requires special conditions, it highlights the importance of protecting your server's data.
One way to further secure SSH is to hide the SSH port from unnecessary visibility. Changing the SSH port seems pointless because, after the first scan by an attacker, they will quickly detect the new port. A more effective strategy is to use Port Knocking, a method of security where a "key" (port knocking sequence) is used to open the port for a short period, allowing authentication.
Install knockd using your package manager:
sudo apt install knockd -y
Configure knockd by editing the /etc/knockd.conf file to set the port knocking sequence and the corresponding actions. For example:
[options]
    UseSyslog

[openSSH]
    sequence = 7000,8000,9000
    seq_timeout = 5
    command = /usr/sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags = syn

[closeSSH]
    sequence = 9000,8000,7000
    seq_timeout = 5
    command = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags = syn
- sequence: The sequence of ports that must be "knocked" (accessed) in the correct order.
- seq_timeout: The maximum time allowed to send the sequence (in seconds).
- command: The command executed once the correct sequence is received. It typically opens or closes the SSH port (or another service).
- %IP%: The IP address of the client that sent the sequence (the one "knocking").
- tcpflags: Only packets with the SYN flag are counted, filtering out other packet types.
Start and enable knockd to run at boot:
sudo systemctl enable --now knockd
Use knock or nmap to send the correct port knocking sequence.
Example command with nmap (the ports must be hit in the listed order, so send one probe per port):
for p in 7000 8000 9000; do nmap -Pn --max-retries 0 -p $p $IP; done
Example command with knock:
knock $IP 7000 8000 9000
Where $IP is the IP address of the server you're trying to connect to.
If everything is configured correctly, once the correct sequence of port knocks is received, the SSH port (port 22) will temporarily open. At this point, you can proceed with the standard SSH authentication process.
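In practice this is often wrapped into a small helper so you do not type the sequence every time (the user, ports, and one-second pause here are illustrative):
# Knock, give knockd a moment to insert the iptables rule, then connect.
knock $IP 7000 8000 9000 && sleep 1 && ssh user@$IP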
This technique isn't limited to just SSH; you can configure port knocking for other services if needed (e.g., HTTP, FTP, or any custom service).
Port knocking adds an extra layer of security by obscuring the SSH service from the general public and only allowing access to authorized clients who know the correct sequence.
One common type of attack today is Living off the Land (LOTL), in which legitimate tools and resources already present on the compromised system are used for exploitation and privilege escalation. One capability that attackers frequently leverage is reading kernel system events and message buffers. This technique is used even by advanced persistent threats (APTs).
It is important to secure your Linux kernel configurations to mitigate the risk of such exploits. Below are some recommended settings that can enhance the security of your system.
To enable ASLR (Address Space Layout Randomization) and hide kernel pointers, set these parameters:
- kernel.randomize_va_space = 2: Randomizes the memory layout of applications so attackers cannot predict where specific processes will reside.
- kernel.kptr_restrict = 2: Restricts user-space applications from obtaining kernel pointer information.
Also, disable the system request (SysRq) functionality:
kernel.sysrq = 0
And restrict access to the kernel message buffer (dmesg):
kernel.dmesg_restrict = 1
With this configuration, an attacker will not know a program's memory addresses and won't be able to infiltrate any important process for exploitation purposes. They will also be unable to view the kernel message buffer (dmesg) or send debugging requests (SysRq), which will further complicate their interaction with the system.
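These parameters can be applied at runtime and persisted across reboots with a sysctl drop-in file; a minimal sketch (the file name is arbitrary):
# Write the hardening parameters to a drop-in file and reload all sysctl settings.
cat <<'EOF' | sudo tee /etc/sysctl.d/99-hardening.conf
kernel.randomize_va_space = 2
kernel.kptr_restrict = 2
kernel.sysrq = 0
kernel.dmesg_restrict = 1
EOF
sudo sysctl --system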
In modern architectures, container environments are an essential part of the infrastructure, offering significant advantages for developers, DevOps engineers, and system administrators. However, securing these environments is crucial to protect against potential threats and ensure the integrity of your systems.
To protect container environments, it's essential to adopt secure development practices and integrate DevSecOps alongside existing DevOps methodologies. This also includes forming resilient patterns and building strong security behaviors from an information security perspective.
Use minimal images, such as Google Distroless, and Software Composition Analysis (SCA) tools to check the security of your images.
You can use the following methods to analyze the security of an image.
Docker Scout for vulnerability analysis, and Docker SBOM for generating a list of the artifacts that make up an image.
Install Docker Scout and Docker SBOM as plugins for Docker.
Create a directory for Docker plugins (if it doesn't exist):
mkdir -pv $HOME/.docker/cli-plugins
Install Docker Scout:
curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --
Install Docker SBOM:
curl -sSfL https://raw.githubusercontent.com/docker/sbom-cli-plugin/main/install.sh | sh -s --
To check for vulnerabilities in an image using Docker Scout:
docker scout cves gradle
To generate an SBOM using Docker SBOM (which internally uses Syft):
docker sbom $IMAGE_NAME
Where $IMAGE_NAME is the name of the container image you wish to analyze.
To save the SBOM in JSON format for further analysis:
docker sbom alpine:latest --format syft-json --output sbom.txt
Here, sbom.txt is the file that will contain the generated SBOM.
Container Scanning with Trivy
Trivy is a powerful security scanner for container images. It helps identify vulnerabilities and misconfigurations.
Install Trivy using the following script:
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin v0.59.1
Run a security scan for a container image:
trivy image $IMAGE_NAME
Where $IMAGE_NAME is the name of the image you want to analyze.
For detailed output in JSON format, use:
trivy -q image --ignore-unfixed --format json --list-all-pkgs $IMAGE_NAME
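In a CI pipeline, the same scan can be turned into a gate that fails the build when serious issues are found (the severity threshold and exit code here are illustrative choices):
# Exit with code 1 if any HIGH or CRITICAL vulnerability with an available fix is present.
trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed $IMAGE_NAME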
Even with the minimal practices listed in this section, you can ensure a fairly decent level of container security.
Using the techniques outlined in this article, you can significantly complicate or even prevent an attack by increasing the entropy an attacker has to deal with. However, keep in mind that this entropy should be balanced against system usability, so that it does not create unnecessary difficulties for legitimate users.