How to Set Up Network Storage with FreeNAS
Hostman Team
Technical writer
Servers
14.04.2025
Reading time: 10 min

NAS (Network Attached Storage) is a network data storage device that provides shared file access from any connected computer or device. All data is stored in one place and is conveniently accessible over a local network (LAN) or the Internet, and the setup supports RAID and other technologies for data protection.

NAS can be used as home storage for media files, an office server for shared documents, or a corporate solution for backups and file resources.

In this tutorial, we’ll look at configuring FreeNAS — a free operating system for creating NAS based on FreeBSD. It is now developed under the name TrueNAS, but the core principles remain the same.

This OS is free, uses the crash-resistant ZFS file system, and is flexible in configuration.

Installing FreeNAS

We’ll go through the installation of FreeNAS OS using a cloud server from Hostman.

Choosing the Configuration

Important system requirements for FreeNAS:

  • RAM: 8 GB minimum (16 GB+ recommended, especially with large disks)
  • Free disk for the system: at least 8 GB (16–32 GB recommended)
  • Data storage disk: size depending on your needs

A configuration like this ensures stable operation with up to 4 TB of data for demanding workloads such as iSCSI, virtual machines, and databases, and up to 8 TB for lighter tasks.

In this tutorial, we’ll use Hostman, where only NVMe SSDs are available. However, in general, consider the following:

  • For large media libraries, archives, and backups, HDDs are sufficient.
  • For high-speed access, processing small files, or running VMs or databases, SSDs are better, either as primary storage or as a cache for performance.

Step 1: Uploading the OS Image to Hostman Panel

  1. Go to the download page and choose an appropriate installer version in .iso format.

  2. To find the image:

    • Click the directory of the version you want (recommended: STABLE)

    • Open the x64 folder and copy the link to the .iso file.

In this tutorial, we use version 13.3 STABLE. Image download link:

https://download.freenas.org/13.3/STABLE/RELEASE/x64/TrueNAS-13.3-RELEASE.iso

  3. In the Hostman panel, go to the Cloud servers - Images section, click Upload image and paste the copied URL.

  4. Choose the server location and click Upload. Wait for the image to finish uploading.

Step 2: Creating a Cloud Server

  1. Once the image is uploaded, click Create server from image.

  2. Choose the server configuration.

  3. Click Order to create the server.

Step 3: Adding a Disk

The default configuration includes 80 GB of NVMe storage — we’ll use this for the OS. Now we need to add an additional disk for storing data:

  1. Wait for the image to mount and for the server to become available.

  2. Go to the Plan tab and click Add disk.

  3. Choose the required size and click Add.

Step 4: Installing the System

  1. Go to the Console tab. You can open the console in a new tab for convenience.

  2. The installer should appear in the console. Press Enter to start the installation.

  3. Choose the destination disk (in this case, the 80 GB NVMe). Press Space to select, then Enter to confirm.

  4. The installer will warn you that the disk will be erased. Confirm to proceed.

  5. Enter and confirm a password — you will use it later to access the web interface as the root user.

  6. Choose the boot mode. Hostman servers use Legacy BIOS.

  7. The installer offers to create a 16 GB swap partition. Swap extends RAM by using disk space, which is useful if you have less than 16 GB of RAM or expect unstable loads. It is not recommended on USB drives due to wear.

The installation will begin.

  8. After it completes, confirm that it finished successfully.

  9. Press Space, and in the menu, select Shutdown System to turn off the server.

You can now delete the installation image so it doesn't incur charges.

After rebooting, the system will show that the web interface is available via the server’s IP address.


Installation is complete!

Initial Setup of FreeNAS

First Login to the Web Interface

Go to the UI using the server’s IP address. Log in with:

  • Username: root
  • Password: the one set during installation

You’ll see the TrueNAS dashboard.


Setting Basic System Parameters

  1. Set the correct time zone: Open System, then General settings. Set your timezone in the Timezone field.

  2. Enable alerts: Go to Alert Services and Alert Settings. You can configure email notifications or messenger integrations.

Creating and Configuring Storage (ZFS)

FreeNAS uses the ZFS file system for reliable and flexible storage. Its benefits include data protection and useful tools for backups and replication.

  1. Go to Storage (1), then Pools (2).

  2. Click Add to create a new pool.

  3. Choose Create new pool.

  4. Enter a name for your pool, e.g., mypool.

  5. Select your disk (1) and move it to the Data VDevs field (2).

You’ll see options for Mirror and Stripe modes:

— Mirror

  • Data is written to all disks in the group.
  • If one disk fails, data remains on the others.
  • Total storage equals the size of the smallest disk.

Use when:

  • Reliability is more important than capacity.
  • You have two or more disks of similar size.
  • You want redundancy without complex RAID setups.

— Stripe

  • Data is split and written across all disks.
  • Better performance and full use of all disk space.
  • If one disk fails, all data is lost.

Use when:

  • Speed and space are more important than reliability.
  • Data isn’t critical (can be restored from elsewhere).
  • You want to maximize storage with minimal setup.

Click Create. Note that all data on the disk will be erased. The new pool should now appear in the panel.
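For reference, the equivalent pool layouts can also be created from the FreeNAS shell with zpool. This is only a sketch: the pool name mypool and the disk names da1/da2 are illustrative placeholders, and the command destroys any data on the disks it is given.

```shell
# Mirror: data is duplicated on every disk; capacity = smallest disk.
zpool create mypool mirror da1 da2

# Stripe: data is split across the disks; capacity = sum of all disks, no redundancy.
# zpool create mypool da1 da2

# Verify that the new pool is ONLINE.
zpool status mypool
```

The web interface performs the same operations under the hood, so pools created either way show up in both places.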


User Management and Access Rights

  1. In the left-hand menu, select the Account tab, then Users, and click the Add button.
  2. Fill in the required fields — full name, username (alias), and password.


  3. If needed, configure a home directory inside one of the created datasets.
  4. You can manage permissions within datasets, which are logical partitions or storage spaces created inside a ZFS pool. To do this, go to the Pools tab (1) and use the Edit Permissions option (2) on the desired dataset.


You can configure access rights for individual users or entire groups.


Try not to grant administrator (root) privileges to too many users, even if it seems more convenient. The fewer people with elevated access, the more secure your data will be.

Setting Up Services and Sharing Protocols

Enable the necessary services under the Services tab to take advantage of NAS features.


The following protocols are available:

  • SMB for Windows networks
  • NFS for UNIX-based environments
  • AFP for Apple users
  • WebDAV for HTTP-based access
  • iSCSI, FTP, and others

You can configure each protocol after activation. For example, with SMB, you can set a workgroup and guest access parameters and enable auto-start on system reboot.

After enabling a service, create a share in the Shares section by selecting the appropriate protocol.
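Once a share exists, you can verify it from a Linux client; a sketch assuming the NAS is reachable at 192.168.1.100 and the smbclient package is installed (both are assumptions — substitute your own address and share name):

```shell
# List the shares the NAS advertises (prompts for the user's password).
smbclient -L //192.168.1.100 -U youruser

# Open an interactive session on a share named myshare.
smbclient //192.168.1.100/myshare -U youruser
```

On Windows, the same share would be reached as \\192.168.1.100\myshare in Explorer.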

Advanced Features and Plugins

FreeNAS (TrueNAS) features a robust plugin system (Jails, Plugins) that includes many popular applications. Some of the most in-demand plugins include:

  • Nextcloud: A private cloud solution with office tools, calendar, audio/video conferencing. Ideal for collaborative work and personal file syncing (like Dropbox or Google Drive).

  • Plex Media Server: A powerful tool for managing your media library — TV shows, movies, music, photos. It can auto-fetch metadata, download covers, and track viewed/unviewed status.

  • Transmission: A lightweight torrent client with a web interface. Perfect for downloading large files directly to your NAS.

  • Syncthing: Focused on peer-to-peer folder synchronization. Great for distributed teamwork or backup syncing across devices.

  • Zoneminder: Enables you to set up a video surveillance system. Supports IP cameras, recording, and alert configurations.

  • Tarsnap: A secure backup service for UNIX-like systems.

To install a plugin, go to Plugins (1), choose an application, and click Install (2). Configuration (like ports or storage paths) is usually done after the quick setup.


If you want more isolation, use Jails — FreeBSD-based environments that let you install packages and libraries independently of the main system.

Backups and Data Protection

ZFS Snapshots allow for quick recovery of data in case of accidental deletion or corruption. You can automate this by scheduling snapshots via the Tasks → Periodic Snapshot Tasks tab. Choose the dataset, snapshot lifetime, and frequency.
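Manual snapshots can also be taken from the shell with the zfs command; a sketch assuming a dataset named mypool/data (the names are illustrative):

```shell
# Take an instant snapshot; it consumes space only as the data diverges.
zfs snapshot mypool/data@before-upgrade

# List existing snapshots.
zfs list -t snapshot

# Roll the dataset back to the snapshot if something goes wrong.
zfs rollback mypool/data@before-upgrade
```

Rollback discards changes made after the snapshot, so it is best reserved for recovery rather than routine use.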

Data deduplication saves storage space but is RAM-intensive (about 5 GB RAM per 1 TB of data). If you plan to use it heavily, consider increasing your memory. Otherwise, ZFS may slow down or run into resource issues.
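That rule of thumb is easy to turn into a quick estimate; a minimal shell helper (the 5 GB-per-TB ratio is the approximation quoted above, not an exact ZFS figure — real usage depends on block size and dedup table growth):

```shell
# Estimate the RAM (in GB) recommended for ZFS deduplication,
# using the ~5 GB of RAM per 1 TB of deduplicated data rule of thumb.
dedup_ram_gb() {
  local data_tb=$1
  echo $(( data_tb * 5 ))
}

# Example: a 4 TB pool with deduplication enabled.
dedup_ram_gb 4   # prints 20
```

If the result exceeds the RAM you can provision, prefer LZ4 compression and leave deduplication off.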


For advanced backup features, consider plugins like Asigra or Tarsnap. Choose a backup strategy based on your risk tolerance and data volume. Some users are fine with local snapshots; others may prefer offsite copies.

Common Issues and Troubleshooting

Below are common problems, their likely causes, and suggested fixes.

Symptom: Cannot access the web interface (browser won’t open the URL)

Cause: Network or IP configuration issues, or a firewall blocking the port

Solution:

  1. Check IP settings in the TrueNAS console (options 1, 4, 6 in the network menu).
  2. Verify gateway and DNS settings.
  3. If behind NAT, open/forward the required ports (usually 80/443).
  4. Ensure the local firewall allows access.
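The port checks above can be sanity-checked from any client with bash; a minimal sketch using bash's built-in /dev/tcp redirection (the IP address is a placeholder for your server's):

```shell
# Return 0 if a TCP port accepts connections, non-zero otherwise.
port_open() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Example: check the web UI ports on the NAS (replace with your server IP).
for p in 80 443; do
  if port_open 192.168.1.100 "$p"; then
    echo "port $p open"
  else
    echo "port $p closed or filtered"
  fi
done
```

If the ports report closed from outside the LAN but open from inside, the problem is NAT or firewall configuration rather than TrueNAS itself.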

Symptom: [EINVAL] vm_create: This system does not support virtualization

Cause: The CPU/motherboard doesn’t support VT-x/AMD-V, it’s disabled in BIOS/UEFI, or virtualization is off in the hypervisor

Solution:

  1. Enable Intel VT-x / AMD-V (SVM) in the BIOS.
  2. Confirm the CPU supports virtualization.
  3. If running inside a hypervisor, enable nested virtualization.

"Pool is DEGRADED" or "FAULTED"

ZFS pool has a failing or disconnected disk

1. Run zpool status in the console to identify the faulty disk.

2. Replace the failed disk if using RAIDZ or Mirror.

3. Start the resilvering process.

4. Review logs and run SMART tests.

Symptom: Slow performance or errors with deduplication

Cause: Deduplication consumes too much RAM

Solution:

  1. Add more RAM.
  2. Disable deduplication where it isn’t needed (e.g., media files).
  3. Use compression only (LZ4) if resources are limited.

Symptom: Cannot access an SMB share, or it doesn’t show up on the network

Cause: Incorrect ACL or SMB configuration, workgroup mismatch, or a bad user profile

Solution:

  1. Enable SMB in Services and set it to auto-start.
  2. Create a new share under Sharing → SMB and check permissions.
  3. Configure ACLs on the dataset (e.g., Full Control for the user/group).
  4. Verify the correct workgroup setting.

Symptom: Snapshot creation/deletion fails

Cause: Not enough free space, an exceeded quota, or permission issues

Solution:

  1. Check available space in the pools.
  2. Increase or remove dataset quotas if they are too strict.
  3. Make sure the user has snapshot permissions.

Symptom: SSH doesn’t work or key authentication fails

Cause: The SSH service is off, keys are not in the right place, or file permissions are wrong

Solution:

  1. Enable SSH under Services.
  2. Upload the public key under System → SSH Keypairs, or place it in ~/.ssh/authorized_keys.
  3. Set correct permissions (700 for .ssh, 600 for key files).
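The permission fix in step 3 can be applied from the shell; a sketch that uses a scratch directory in place of the real home directory, so it is safe to run anywhere:

```shell
# Demonstrate the permissions sshd expects on the key directory and file,
# using a temporary directory in place of the user's real home.
HOME_DIR="$(mktemp -d)"
SSH_DIR="$HOME_DIR/.ssh"

mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"

chmod 700 "$SSH_DIR"                  # only the owner may enter the directory
chmod 600 "$SSH_DIR/authorized_keys"  # only the owner may read/write the file

# Show the resulting octal modes.
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```

On the server itself, run the chmod commands against the actual home directory of the SSH user; sshd silently rejects keys when these modes are too permissive.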

Symptom: WebDAV password access doesn’t work

Cause: The WebDAV user/password is not set, or the port is blocked by a firewall

Solution:

  1. Go to Services → WebDAV and set the webdav user password.
  2. Make sure the port (e.g., 8080) is open in the firewall.
  3. Verify the correct access path (e.g., http://IP:8080/resource_name).

Conclusion

FreeNAS (TrueNAS) version 13.3 is well-suited for setting up a file server and running additional services. The system offers tools for managing ZFS pools, user permissions, and protocols like SMB, WebDAV, and iSCSI.

If you need extended functionality, check out plugins and built-in virtualization (like VirtualBox or bhyve in newer versions).

ZFS features such as deduplication, snapshots, and replication provide robust data protection. Plugins like Nextcloud or Plex make collaboration and media management much easier.

The FreeNAS project evolved into TrueNAS, but the key principles remain: using ZFS instead of hardware RAID, flexible shared folder configuration, and a user-friendly web interface.

Servers
14.04.2025
Reading time: 10 min

Similar

Servers

How to Correct Server Time

The method you choose for correcting the time on your server depends on how far off the server's clock is. If the difference is small, use the first method. If the clock is significantly behind or ahead, it's better not to adjust it in a single step — it's safer to change the time gradually. Configuration on Ubuntu/Debian Quick Fix To quickly change the time on the server, use the ntpdate utility. You need sudo privileges to install it: apt-get install ntpdate To update the time once: /usr/sbin/ntpdate 1.north-america.pool.ntp.org Here, the NTP pool is the address of a trusted server used to synchronize the time. For the USA, you can use NTP servers from this page. You can find pool zones for other regions at ntppool.org. You can also set up automatic time checks using cron: crontab -e 00 1 * * * /usr/sbin/ntpdate 1.north-america.pool.ntp.org This schedules synchronization once a day. Instead of a set interval, you can specify a condition. For example, to synchronize the time on every server reboot using cron reboot: crontab -e @reboot /usr/sbin/ntpdate 1.north-america.pool.ntp.org Gradual Correction To update the time gradually, install the ntp utility on Ubuntu or Debian. It works as follows: The utility checks data from synchronization servers defined in the configuration. It calculates the difference between the current system time and the reference time. NTP gradually adjusts the system clock. This gradual correction helps avoid issues in other services caused by sudden time jumps. Install NTP: apt-get install ntp For the utility to work correctly, configure it in the file /etc/ntp.conf. Add NTP servers like: server 0.north-america.pool.ntp.org server 1.north-america.pool.ntp.org iburst server 2.north-america.pool.ntp.org server 3.north-america.pool.ntp.org The iburst option improves accuracy by sending multiple packets at once instead of just one. 
You can also set a preferred data source using the prefer option: server 0.ubuntu.pool.ntp.org iburst prefer After each configuration change, restart the utility: /etc/init.d/ntp restart Configuration on CentOS The method choice rules are the same. If you need to correct a difference of a few seconds, the first method will do. For minutes or hours, the second method is better. Quick Fix To quickly adjust the time, use ntpdate. Install it with: yum install ntpdate For a one-time sync: /usr/sbin/ntpdate 1.north-america.pool.ntp.org Use Crontab to set automatic periodic synchronization. For daily sync: crontab -e 00 1 * * * /usr/sbin/ntpdate 1.north-america.pool.ntp.org To sync on boot instead of at regular intervals: crontab -e @reboot /usr/sbin/ntpdate 1.north-america.pool.ntp.org Gradual Correction To change the time on the server gradually, use ntp in CentOS. Install it: yum install ntp Enable the service on startup: chkconfig ntpd on In the file /etc/ntp.conf, specify accurate time sources, for example: server 0.north-america.pool.ntp.org server 1.north-america.pool.ntp.org iburst server 2.north-america.pool.ntp.org server 3.north-america.pool.ntp.org The iburst parameter works the same as in Ubuntu/Debian — it improves accuracy by sending a burst of packets. Restart the service after making changes: /etc/init.d/ntp restart Then restart the daemon: /etc/init.d/ntpd start Additional Options Time synchronization is usually done with the server closest to your server geographically. But in the configuration, you can specify the desired region directly in the subdomain. For example: asia.pool.ntp.org europe.pool.ntp.org Even if the NTP server is offline, it can still pass on system time. Just add this line: server 127.127.1.0 You can also restrict access for external clients. 
By default, these parameters are set: restrict -4 default kod notrap nomodify nopeer noquery restrict -6 default kod notrap nomodify nopeer noquery The options notrap, nomodify, nopeer, and noquery prevent changes to the server's configuration. KOD (kiss of death) adds another layer of protection: if a client sends requests too frequently, it receives a warning packet and then is blocked. If you want to allow unrestricted access for the local host: restrict 127.127.1.0 To allow devices in a local network to sync with the server: restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap After any changes, restart the service: service restart ntp To check the service’s operation, use the command: ntpq -p It will display a table showing the time source address, server stratum, last synchronization time, and other useful data.
16 April 2025 · 4 min to read
Servers

Server Hardening

Server hardening is the process of improving security by reducing vulnerabilities and protecting against potential threats. There are several types of hardening: Physical: A method of protection based on the use of physical means, such as access control systems (ACS), video surveillance, safes, motion detectors, and protective enclosures. Hardware: Protection implemented at the hardware level. This includes trusted platform modules (TPM), hardware security modules (HSM, such as Yubikey), and biometric scanners (such as Apple Touch ID or Face ID). Hardware protection measures also include firmware integrity control mechanisms and hardware firewalls. Software: A type of hardening that utilizes software tools and security policies. This involves access restriction, encryption, data integrity control, monitoring anomalous activity, and other measures to secure digital information. We provide these examples of physical and hardware hardening to give a full understanding of security mechanisms for different domains. In this article, we will focus on software protection aspects, as Hostman has already ensured hardware and physical security. Most attacks are financially motivated, as they require high competence and significant time investments. Therefore, it is important to clearly understand what you are protecting and what losses may arise from an attack. Perhaps you need continuous high availability for a public resource, such as a package mirror or container images, and you plan to protect your resource for this purpose. There can be many variations. First, you need to create a threat model, which will consist of the following points: Value: Personal and public data, logs, equipment, infrastructure. Possible Threats: Infrastructure compromise, extortion, system outages. Potential Attackers: Hacktivists, insider threats, competitors, hackers. Attack Methods: Physical access, malicious devices, software hacks, phishing/vishing, supply chain attacks. 
Protection Measures: Periodic software updates, encryption, access control, monitoring, hardening—what we will focus on in this article. Creating a threat model is a non-trivial but crucial task because it defines the overall “flow” for cybersecurity efforts. After you create the threat model, you might need to perform revisions and clarifications depending on changes in business processes or other related parameters. While creating the threat model, you can use STRIDE, a methodology for categorizing threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), and DREAD, a risk assessment model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability). For a more formalized approach, you can also refer to ISO/IEC 27005 or NIST 800-30 standards. There will always be risks that can threaten both large companies and individual users who recently ordered a server to host a simple web application. The losses and criticality may vary, but from a technical perspective, the most common threats are: DoS/DDoS: Denial of service or infrastructure failure, resulting in financial and/or reputational losses. Supply Chain Attack: For example, infecting an artifact repository, such as a Container Registry: JFrog Artifactory, Sonatype Nexus. Full System Compromise: Includes establishing footholds and horizontal movement within the infrastructure. Using your server as a launchpad for complex technological attacks on other resources. If this leads to serious consequences, you will likely spend many hours in court and incur significant financial costs. Gaining advantages by modifying system resources, bypassing authentication, or altering the logic of the entire application. This can lead to reputational and/or financial losses. Some of these attacks can be cut off early or significantly complicated for potential attackers if the server is properly configured. 
Hardening is not a one-time procedure; it is an ongoing process that requires continuous monitoring and adaptation to new threats. The main goal of this article is to equip readers with server hardening techniques. However, in the context of this article, we will discuss a more relevant and practical example—server protection. After ordering a server, we would normally perform the initial setup. This is typically done by system administrators or DevOps specialists. In larger organizations, other technical experts (SecOps, NetOps, or simply Ops) may get involved, but in smaller setups, the same person who writes the code usually handles these tasks. This is when the most interesting misconfigurations can arise. Some people configure manually: creating users, groups, setting network configurations, installing the required software; others write and reuse playbooks—automated scripts. In this article, we will go over the following server hardening checklist: Countering port scanning Configuring the Nginx web server Protecting remote connections via SSH Setting up Port Knocking Configuring Linux kernel parameters Hardening container environments If you later require automation, you can easily write your own playbook, as you will already know whether specific security configurations are necessary. Countering Port Scanning Various types of attackers, from botnet networks to APT (Advanced Persistent Threat) groups, use port scanners and other device discovery systems (such as shodan.io, search.censys.io, zoomeye.ai, etc.) that are available on the internet to search for interesting hosts for further exploitation and extortion. One popular network scanner is Nmap. It allows determining "live" hosts in a network and the services running on them through a variety of scanning methods. Nmap also includes the Nmap Script Engine, which offers both out-of-the-box functionality and the possibility to add custom scripts. 
To scan resources using Nmap, an attacker would execute a command like: nmap -sC -sV -p- -vv --min-rate 10000 $IP Where: $IP is the IP address or range of IP addresses to scan. -sC enables the script engine. -sV detects service versions. -vv (from "double verbose") enables detailed output. --min-rate 10000 is a parameter defining how many requests are sent in one go. In this case, an aggressive mode (10,000 units) is selected. Additionally, the rate modes can be adjusted separately with the flag -T (Aggressive, Insane, Normal, Paranoid, Polite, Sneaky). Example of a scan result is shown below. From this information, we can see that three services are running: SSH on port 22 Web service on port 80 Web service on port 8080 The tool also provides software versions and more detailed information, including HTTP status codes, port status (in this case, "open"), and TTL values, which help to determine if the service is in a container or if there is additional routing that changes the TTL. Thus, an attacker can use a port scanner or search engine results to find your resource and attempt to attack based on the gathered information. To prevent this, we need to break the attacker's pattern and confuse them. Specifically, we can make it so that they cannot identify which port is open and what service is running on it. This can be achieved by opening all ports: 2^16 - 1 = 65535. By "opening," we mean configuring incoming connections so that all connection attempts to TCP ports are redirected to port 4444, on which the portspoof utility dynamically responds with random signatures of various services from the Nmap fingerprint database. To implement this, install the portspoof utility. 
Clone the appropriate repository with the source code and build it: git clone https://github.com/drk1wi/portspoof.gitcd portspoof./configure && make && sudo make install Note that you may need to install dependencies for building the utility: sudo apt install gcc g++ make Grant execution rights and run the automatic configuration script with the specified network interface. This script will configure the firewall correctly and set up portspoof to work with signatures that mask ports under other services. sudo chmod +x $HOME/portspoof/system_files/init.d/portspoof.shsudo $HOME/portspoof/system_files/init.d/portspoof.sh start $NETWORK_INTERFACE Where $NETWORK_INTERFACE is your network interface (in our case, eth0). To stop the utility, run the command: sudo $HOME/portspoof/system_files/init.d/portspoof.sh stop eth0 Repeating the scan using Nmap or any other similar program, which works based on banner checking of running services, will now look like this: Image source: drk1wi.github.io There is another trick that, while less effective as it does not create believable service banners, allows you to avoid additional utilities like portspoof. First, configure the firewall so that after the configuration, you can still access the server via SSH (port 22) and not disrupt the operation of existing legitimate services. sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j RETURN Then, initiate the process of redirecting all TCP traffic to port 5555: sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -m conntrack --ctstate NEW -j REDIRECT --to-ports 5555 Now, create a process that generates pseudo-random noise on port 5555 using NetCat: nc -lp 5555 < /dev/urandom These techniques significantly slow down the scan because the scanner will require much more time to analyze each of the 65,535 "services." Now, the primary task of securing the server is complete! 
Configuring the Nginx Web Server Nmap alone is not sufficient for a comprehensive analysis of a web application. In addition to alternatives like naabu from Project Discovery and rustscan, there are advanced active reconnaissance tools. These not only perform standard port scanning but specialize in subdomain enumeration, directory brute-forcing, HTTP parameter testing (such as dirbuster, gobuster, ffuf), and identifying and exploiting vulnerabilities in popular CMS platforms (wpscan, joomscan) and specific attacks (sqlmap for SQL injections, tplmap for SSTI). These scanners work by searching for endpoints of an application, utilizing techniques like brute-forcing, searching through HTML pages, or connected JavaScript files. During their operation, millions of iterations occur comparing the response with the expected output to identify potential vulnerabilities and expose the service to exploitation. To protect web applications from such scanners, we suggest configuring the web server. In this example, we’ll configure Nginx, as it is one of the most popular web servers. In most configurations, Nginx proxies and exposes an application running on the server or within a cluster. This setup allows for rich configuration options. To enhance security, we can add HTTP Security Headers and the lightweight and powerful ChaCha20 encryption protocol for devices that lack hardware encryption support (such as mobile phones). Additionally, rate limiting may be necessary to prevent DoS and DDoS attacks. HTTP headers like Server and X-Powered-By reveal information about the web server and technologies used, which can help an attacker determine potential attack vectors.We need to remove these headers. 
To do this, install the Nginx extras collection: sudo apt install nginx-extras Then, configure the Nginx settings in /etc/nginx/nginx.conf: server_tokens off;more_clear_headers Server;more_clear_headers 'X-Powered-By'; Also, add headers to mitigate Cross-Site Scripting (XSS) attack surface: add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;add_header X-XSS-Protection "1; mode=block"; And protect against Clickjacking: add_header X-Frame-Options "SAMEORIGIN"; You can slow down automated attacks by setting request rate limits from a single IP address. Do this only if you are confident it won't impact service availability or functionality. A sample configuration might look like this: http { limit_req_zone $binary_remote_addr zone=req_zone:10m rate=10r/s; server { location /api/ { limit_req zone=req_zone burst=20 nodelay; } } } This configuration limits requests to 10 per second from a single IP, with a burst buffer of 20 requests. To protect traffic from MITM (Man-in-the-Middle) attacks and ensure high performance, enable TLS 1.3 and configure strong ciphers: ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256"; ssl_prefer_server_ciphers on; You can also implement additional web application protection using a WAF (Web Application Firewall). Some free solutions include: BunkerWeb — Lightweight, popular, and effective WAF. ModSecurity — A powerful Nginx module with flexible rules. To perform basic configuration of ModSecurity, you can install it like this: sudo apt install libnginx-mod-security2 Then, enable ModSecurity in the Nginx configuration: server { modsecurity on; modsecurity_rules_file /etc/nginx/modsecurity.conf; } Use Security Headers to analyze HTTP headers and identify possible configuration errors. When configuring any infrastructure components, it's important to follow best practices. 
For instance, to create secure Nginx configurations, you can use an online generator, which allows you to easily generate optimal base settings for Nginx, including ciphers, OCSP Stapling, logging, and other parameters. Protecting Remote Connections via SSH If your server is still secured only by a password, this is a quite insecure configuration. Even complex passwords can eventually be compromised, especially when outdated or vulnerable versions of SSH are in use, allowing brute force attacks without restrictions, such as in CVE-2020-1616. Below is a table showing how long it might take to crack a password based on its complexity Image source: security.org It’s recommended to disable password authentication and set up authentication using private and public keys. Generate a SSH key pair (public and private keys) on your workstation: ssh-keygen -t ed25519 -C $EMAIL Where $EMAIL is your email address, and -t ed25519 specifies the key type based on elliptic curve cryptography (using the Curve25519 curve). This provides high performance, compact key sizes (256 bits), and resistance to side-channel attacks. Copy the public key to the server. Read your public key from the workstation and save it to the authorized_keys file on the server, located at $HOME/.ssh/authorized_keys (where $HOME is the home directory of the user on the server you are connecting to). You can manually add the key or use the ssh-copy-id utility, which will prompt for the password. ssh-copy-id user@$IP Alternatively, you can add the key directly through your Hostman panel. Go to the Cloud servers → SSH Keys section and click Add SSH key.   Enter your key and give it a name. Once added, you can upload this key to a specific virtual machine or add it directly during server creation in the 6. Authorization section. 
To further secure SSH connections, adjust the SSH server configuration file at /etc/ssh/sshd_config by applying the following settings:

- PermitRootLogin no — prevents login as the root user.
- PermitEmptyPasswords no — disallows the use of empty passwords.
- X11Forwarding no — disables forwarding of graphical applications.
- AllowUsers $USERS — defines a list of users allowed to log in via SSH. Separate usernames with spaces.
- PasswordAuthentication no — disables password authentication.
- PubkeyAuthentication yes — enables public and private key authentication.
- HostbasedAuthentication no — disables host-based authentication.
- PermitUserEnvironment no — disallows changing environment variables, to limit exploitation through variables like LD_PRELOAD.

After adjusting the configuration file, restart the OpenSSH daemon:

systemctl restart sshd

Finally, after making these changes, you can conduct a security audit using a tool like ssh-audit or a website designed for SSH security checks. This will help ensure your configuration is secure and appropriately hardened.

Configuring Port Knocking

SSH is a relatively secure protocol, as it was developed by the OpenBSD team, which prides itself on creating an OS focused on security and data integrity. However, even in such widely used and serious software, vulnerabilities occasionally surface. Some of them allow attackers to perform user enumeration. Although these issues are typically patched promptly, recent critical vulnerabilities like regreSSHion have still allowed Remote Code Execution (RCE). Although that particular exploit requires special conditions, it highlights the importance of protecting your server's data. One way to further secure SSH is to hide the SSH port from unnecessary visibility. Simply changing the SSH port is largely pointless: after the first scan, an attacker will quickly detect the new port.
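Put together, the settings above form a drop-in fragment for /etc/ssh/sshd_config (the usernames after AllowUsers are placeholders — replace them with your own):

```
PermitRootLogin no
PermitEmptyPasswords no
X11Forwarding no
AllowUsers alice bob
PasswordAuthentication no
PubkeyAuthentication yes
HostbasedAuthentication no
PermitUserEnvironment no
```

Before restarting the daemon, validate the file with sshd -t; a syntax error in sshd_config can otherwise lock you out of the server.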
A more effective strategy is to use Port Knocking, a security method where a "key" (a port-knocking sequence) is used to open the port for a short period, allowing authentication.

Install knockd using your package manager:

sudo apt install knockd -y

Configure knockd by editing the /etc/knockd.conf file to set the port-knocking sequence and the corresponding actions. For example:

[options]
    UseSyslog

[openSSH]
    sequence = 7000,8000,9000
    seq_timeout = 5
    command = /usr/sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags = syn

[closeSSH]
    sequence = 9000,8000,7000
    seq_timeout = 5
    command = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags = syn

- sequence: the port sequence that needs to be "knocked" (accessed) in the correct order.
- seq_timeout: the maximum time allowed to send the sequence (in seconds).
- command: the command to be executed once the sequence is received correctly. It typically opens or closes the SSH port (or another service).
- %IP%: the client IP address that sent the sequence (the one "knocking").
- tcpflags: the SYN flag is used to filter out other types of packets.

Start and enable knockd to run at boot:

sudo systemctl enable --now knockd

Use knock or nmap to send the correct port-knocking sequence.

Example command with nmap:

nmap -Pn --max-retries 0 -p 7000,8000,9000 $IP

Example command with knock:

knock $IP 7000 8000 9000

Where $IP is the IP address of the server you're trying to connect to. If everything is configured correctly, once the correct sequence of port knocks is received, the SSH port (port 22) will temporarily open. At this point, you can proceed with the standard SSH authentication process. This technique isn't limited to just SSH; you can configure port knocking for other services if needed (e.g., HTTP, FTP, or any custom service).
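If neither knock nor nmap is available on the client, the sequence can be sent with a small shell helper. This is a sketch: the function name is arbitrary, it relies on bash's /dev/tcp pseudo-device, and a refused connection is expected — knockd only needs to see the incoming SYN:

```shell
# knock: attempt a brief TCP connection to each port, in order.
# Refused connections are ignored on purpose — the knock still registers.
knock() {
  host="$1"; shift
  for port in "$@"; do
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null || true
  done
  echo "knock sent to $host"
}

knock 127.0.0.1 7000 8000 9000   # then connect normally: ssh user@127.0.0.1
```

Because knockd matches the sequence within seq_timeout, run the helper immediately before your ssh command.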
Port knocking adds an extra layer of security by obscuring the SSH service from the general public and only allowing access to authorized clients who know the correct sequence.

Configuring Linux Kernel Parameters

One of the common types of attack today is Living off the Land (LOTL), in which legitimate tools and resources are used to exploit the system and escalate privileges on it. One capability that attackers frequently leverage is the ability to view kernel system events and message buffers. This technique is even used by advanced persistent threats (APTs). It is important to harden your Linux kernel configuration to mitigate the risk of such exploits. Below are some recommended settings that can enhance the security of your system.

To enable ASLR (Address Space Layout Randomization), set these parameters:

- kernel.randomize_va_space = 2 — randomizes the memory layout of applications to prevent attackers from knowing where specific processes will run.
- kernel.kptr_restrict = 2 — restricts user-space applications from obtaining kernel pointer information.

Also, disable the system request (SysRq) functionality:

kernel.sysrq = 0

And restrict access to the kernel message buffer (dmesg):

kernel.dmesg_restrict = 1

With this configuration, an attacker will not know a program's memory addresses and won't be able to infiltrate an important process for exploitation purposes. They will also be unable to view the kernel message buffer (dmesg) or send debugging requests (SysRq), which further complicates their interaction with the system.

Hardening Container Environments

In modern architectures, container environments are an essential part of the infrastructure, offering significant advantages for developers, DevOps engineers, and system administrators. However, securing these environments is crucial to protect against potential threats and ensure the integrity of your systems.
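To make these parameters persistent across reboots, place them in a drop-in file under /etc/sysctl.d/ (the filename below is an arbitrary convention):

```
# /etc/sysctl.d/99-hardening.conf
kernel.randomize_va_space = 2
kernel.kptr_restrict = 2
kernel.sysrq = 0
kernel.dmesg_restrict = 1
```

Apply the settings without rebooting by running sudo sysctl --system, which reloads all sysctl configuration files.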
To protect container environments, it's essential to adopt secure development practices and integrate DevSecOps alongside existing DevOps methodologies. This also includes forming resilient patterns and building strong security behaviors from an information security perspective. Use minimal images, such as Google Distroless, and Software Composition Analysis (SCA) tools to check the security of your images. You can use the following methods to analyze the security of an image.

Docker Scout and Docker SBOM generate a list of the artifacts that make up an image. Install them as plugins for Docker.

Create a directory for Docker plugins (if it doesn't exist):

mkdir -pv $HOME/.docker/cli-plugins

Install Docker Scout:

curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --

Install Docker SBOM:

curl -sSfL https://raw.githubusercontent.com/docker/sbom-cli-plugin/main/install.sh | sh -s --

To check for vulnerabilities in an image using Docker Scout:

docker scout cves gradle

To generate an SBOM using Docker SBOM (which internally uses Syft):

docker sbom $IMAGE_NAME

Where $IMAGE_NAME is the name of the container image you wish to analyze. To save the SBOM in JSON format for further analysis:

docker sbom alpine:latest --format syft-json --output sbom.txt

Here, sbom.txt will be the file containing the generated SBOM.

Container Scanning with Trivy

Trivy is a powerful security scanner for container images. It helps identify vulnerabilities and misconfigurations. Install Trivy using the following script:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin v0.59.1

Run a security scan for a container image:

trivy image $IMAGE_NAME

Where $IMAGE_NAME is the name of the image you want to analyze.
For detailed output in JSON format, use:

trivy -q image --ignore-unfixed --format json --list-all-pkgs $IMAGE_NAME

Even with the minimal practices listed in this section, you can ensure a fairly solid level of container security.

Conclusion

Using the techniques outlined in this article, you can significantly complicate or even prevent a compromise by increasing the cost and uncertainty of an attack. However, keep in mind that hardening should be balanced with system usability to avoid creating unnecessary difficulties for legitimate users.
19 March 2025 · 18 min to read
Linux

How to Use SSH Keys for Authentication

Many cloud applications are built on the popular SSH protocol—it is widely used for managing network infrastructure, transferring files, and executing remote commands. SSH stands for Secure Shell, meaning it provides a shell (command-line interface) around the connection between multiple remote hosts, ensuring that the connection is secure (encrypted and authenticated). SSH is available on all popular operating systems, including Windows and Linux distributions such as Ubuntu and Debian. The protocol establishes an encrypted communication channel within an unprotected network by using a pair of public and private keys.

Keys: The Foundation of SSH

SSH operates on a client-server model. This means the user has an SSH client (a terminal in Linux or a graphical application in Windows), while the server side runs a daemon, which accepts incoming connections from clients. In practice, an SSH channel enables remote terminal management of a server. In other words, after a successful connection, everything entered in the local console is executed directly on the remote server.

The SSH protocol uses a pair of keys for encrypting and decrypting information: a public key and a private key. These keys are mathematically linked. The public key is shared openly, resides on the server, and is used to encrypt data. The private key is confidential, resides on the client, and is used to decrypt data. Of course, keys are not generated manually but with special tools—keygens. These utilities generate new keys using the encryption algorithms fundamental to SSH technology.

More About How SSH Works

Exchange of Public Keys

SSH relies on symmetric encryption, meaning two hosts wishing to communicate securely generate a unique session key derived from the public and private data of each host. For example, host A generates a public and private key pair. The public key is sent to host B. Host B does the same, sending its public key to host A.
Using the Diffie-Hellman algorithm, host A can create a key by combining its private key with the public key of host B. Likewise, host B can create an identical key by combining its private key with the public key of host A. This results in both hosts independently generating the same symmetric encryption key, which is then used for secure communication. Hence the term symmetric encryption.

Message Verification

To verify messages, hosts use a hash function that outputs a fixed-length string based on the following data:

- The symmetric encryption key
- The packet number
- The encrypted message text

The result of hashing these elements is called an HMAC (Hash-based Message Authentication Code). The client generates an HMAC and sends it to the server. The server then creates its own HMAC using the same data and compares it to the client's HMAC. If they match, the verification is successful, ensuring that the message is authentic and hasn't been tampered with.

Host Authentication

Establishing a secure connection is only part of the process. The next step is authenticating the user connecting to the remote host, as the user may not have permission to execute commands. There are several authentication methods:

- Password authentication: the user sends an encrypted password to the server. If the password is correct, the server allows the user to execute commands.
- Certificate-based authentication: the user initially provides the server with a password and the public part of a certificate. Once authenticated, the session continues without requiring repeated password entries for subsequent interactions.

These methods ensure that only authorized users can access the remote system while maintaining secure communication.

Encryption Algorithms

A key factor in the robustness of SSH is that decrypting the symmetric key is only possible with the private key, not the public key, even though the symmetric key is derived from both.
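The Diffie-Hellman exchange can be illustrated with deliberately tiny numbers (real SSH groups are thousands of bits long; the prime, generator, and private keys below are purely for demonstration):

```shell
# Toy Diffie-Hellman: both hosts derive the same shared key independently.
powmod() {  # modular exponentiation by repeated squaring: powmod <base> <exp> <mod>
  b=$(( $1 % $3 )); e=$2; m=$3; r=1
  while [ "$e" -gt 0 ]; do
    if [ $(( e % 2 )) -eq 1 ]; then r=$(( r * b % m )); fi
    b=$(( b * b % m )); e=$(( e / 2 ))
  done
  echo "$r"
}

p=23; g=5          # public parameters: prime modulus and generator
a=6; b_priv=15     # each host's private key (kept secret)

A=$(powmod "$g" "$a" "$p")         # host A's public value, sent to B
B=$(powmod "$g" "$b_priv" "$p")    # host B's public value, sent to A

ka=$(powmod "$B" "$a" "$p")        # A combines its private key with B's public value
kb=$(powmod "$A" "$b_priv" "$p")   # B combines its private key with A's public value

echo "shared key: $ka $kb"         # both sides arrive at the same number
```

An eavesdropper sees p, g, A, and B, but recovering a or b_priv from them is the discrete logarithm problem, which is what makes the scheme secure at real key sizes.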
Achieving this property requires specific encryption algorithms. There are three primary classes of such algorithms: RSA, DSA, and algorithms based on elliptic curves, each with distinct characteristics:

- RSA: developed in 1978, RSA is based on integer factorization. Since factoring large semiprime numbers (products of two large primes) is computationally difficult, the security of RSA depends on the size of the chosen factors. The key length ranges from 1024 to 16384 bits.
- DSA: the Digital Signature Algorithm is based on discrete logarithms and modular exponentiation. While similar to RSA, it uses a different mathematical approach to link public and private keys. DSA key length is limited to 1024 bits.
- ECDSA and EdDSA: these algorithms are based on elliptic curves, unlike DSA, which uses modular exponentiation. They assume that no efficient solution exists for the discrete logarithm problem on elliptic curves. Although the keys are shorter, they provide the same level of security.

Key Generation

Each operating system has its own utilities for quickly generating SSH keys. In Unix-like systems, the command to generate a key pair is:

ssh-keygen -t rsa

Here, the type of encryption algorithm is specified using the -t flag. Other supported types include:

- dsa
- ecdsa
- ed25519

You can also specify the key length with the -b flag. However, be cautious, as the security of the connection depends on the key length:

ssh-keygen -b 2048 -t rsa

After entering the command, the terminal will prompt you to specify a file path and name for storing the generated keys. You can accept the default path by pressing Enter, which will create standard file names: id_rsa (private key) and id_rsa.pub (public key). Thus, the public key will be stored in a file with a .pub extension, while the private key will be stored in a file without an extension. Next, the command will prompt you to enter a passphrase.
While not mandatory (it is unrelated to the SSH protocol itself), using a passphrase is recommended to prevent unauthorized use of the key by a third-party user on the local Linux system. Note that if a passphrase is used, you must enter it each time you establish the connection. To change the passphrase later, you can use:

ssh-keygen -p

Or, you can specify all parameters at once with a single command:

ssh-keygen -p -P old_passphrase -N new_passphrase -f path_to_key

For Windows, there are two main approaches:

- Using ssh-keygen from OpenSSH: the OpenSSH client provides the same ssh-keygen command as Linux, following the same steps.
- Using PuTTY: PuTTY is a graphical application that allows users to generate public and private keys with the press of a button.

Installing the Client and Server Components

The primary tool for an SSH connection on Linux platforms (both client and server) is OpenSSH. While it is typically pre-installed on most operating systems, there may be situations (such as with Ubuntu) where manual installation is necessary. The general command for installing SSH, followed by entering the superuser password, is:

sudo apt-get install ssh

However, in some operating systems, SSH may be divided into separate components for the client and server.

For the Client

To check whether the SSH client is installed on your local machine, simply run the following command in the terminal:

ssh

If SSH is supported, the terminal will display a description of the command. If nothing appears, you'll need to install the client manually:

sudo apt-get install openssh-client

You will be prompted to enter the superuser password during installation. Once completed, SSH connectivity will be available.

For the Server

Similarly, the server-side part of the OpenSSH toolkit is required on the remote host.
To check if the SSH server is available on your remote host, try connecting locally via SSH:

ssh localhost

If the SSH daemon is running, you will see a message indicating a successful connection. If not, you'll need to install the SSH server:

sudo apt-get install openssh-server

As with the client, the terminal will prompt you to enter the superuser password. After installation, you can check whether SSH is active by running:

sudo service ssh status

Once connected, you can modify SSH settings as needed by editing the configuration file /etc/ssh/sshd_config. For example, you might want to change the default port to a custom one. Don't forget that after making changes to the configuration, you must manually restart the SSH service to apply the updates:

sudo service ssh restart

Copying an SSH Key to the Server

On Hostman, you can easily add SSH keys to your servers using the control panel.

Using a Special Copy Command

After generating a public SSH key, it can be used as an authorized key on a server. This allows quick connections without the need to repeatedly enter a password. The most common way to copy the key is by using the ssh-copy-id command:

ssh-copy-id -i ~/.ssh/id_rsa.pub name@server_address

This command assumes you used the default paths and filenames during key generation. If not, simply replace ~/.ssh/id_rsa.pub with your custom path and filename. Replace name with the username on the remote server and server_address with the host address. If the usernames on both the client and server are the same, you can shorten the command:

ssh-copy-id -i ~/.ssh/id_rsa.pub server_address

If you set a passphrase during the SSH key creation, the terminal will prompt you to enter it. Otherwise, the key will be copied immediately. In some cases, the server may be configured to use a non-standard port (the default is 22).
If that's the case, specify the port using the -p flag:

ssh-copy-id -i ~/.ssh/id_rsa.pub -p 8129 name@server_address

Semi-Manual Copying

There are operating systems where the ssh-copy-id command may not be supported, even though SSH connections to the server are possible. In such cases, the copying process can be done manually using a series of commands:

ssh name@server_address "mkdir -pm 700 ~/.ssh; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys"

This sequence of commands does the following:

- Creates a special .ssh directory on the server (if it doesn't already exist) with the correct permissions (700) for reading and writing.
- Creates or appends to the authorized_keys file, which stores the public keys of all authorized users. The public key from the local file (id_rsa.pub) is added to it.
- Sets appropriate permissions (600) on the authorized_keys file to ensure it can only be read and written by the owner.

If the authorized_keys file already exists, the new key is simply appended to it. Once this is done, future connections to the server can be made using the same SSH command, but now authentication will use the public key added to authorized_keys:

ssh name@server_address

Manual Copying

Some hosting platforms offer server management through alternative interfaces, such as a web-based control panel. In these cases, there is usually an option to manually add a public key to the server. The web interface might even simulate a terminal for interacting with the server. Regardless of the method, the remote host must contain a file named ~/.ssh/authorized_keys, which lists all authorized public keys. Simply copy the client's public key (found in ~/.ssh/id_rsa.pub by default) into this file. If the key pair was generated using a graphical application (typically PuTTY on Windows), you should copy the public key directly from the application and add it to the existing content in authorized_keys.
Connecting to a Server

To connect to a remote server on a Linux operating system, enter the following command in the terminal:

ssh name@server_address

Alternatively, if the local username is identical to the remote username, you can shorten the command to:

ssh server_address

The system will then prompt you to enter the password. Type it and press Enter. Note that the terminal will not display the password as you type it. Just like with the ssh-copy-id command, you can explicitly specify the port when connecting to a remote server:

ssh client@server_address -p 8129

Once connected, you will have control over the remote machine via the terminal; any command you enter will be executed on the server side.

Conclusion

Today, SSH is one of the most widely used protocols in development and system administration. Therefore, having a basic understanding of its operation is crucial. This article aimed to provide an overview of SSH connections, briefly explain the encryption algorithms (RSA, DSA, ECDSA, and EdDSA), and demonstrate how public and private key pairs can be used to establish secure connections with a personal server, ensuring that exchanged messages remain inaccessible to third parties. We covered the primary commands for UNIX-like operating systems that allow users to generate key pairs and grant clients SSH access by copying the public key to the server, enabling secure connections.
30 January 2025 · 10 min to read
