Setting Up a DNS Server

Hostman Team, Technical writer
10.04.2025 · Reading time: 4 min

A personal DNS server can be useful if your provider doesn't offer this service or if existing solutions don't suit your needs. The easiest way to set one up is via a control panel (cPanel, CloudPanel, HestiaCP, etc.), but you can also do it manually from the terminal using the BIND 9 DNS server on Linux.

Preparing the Server

Let's say you've rented a Hostman Linux VPS and want to use your own DNS servers. To do that, you need to meet two conditions:

  1. Order another public IP address — DNS setup requires at least two IPs.
  2. Open DNS port 53, which is necessary for the nameserver to work.

Ubuntu/Debian

Update the package list:

apt update

Allow incoming packets on port 53 UDP in the firewall:

iptables -I INPUT -p udp --dport 53 -j ACCEPT
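
DNS also uses TCP on port 53 for zone transfers and responses that don't fit into a single UDP packet. If you need that, you can allow it with a similar rule (optional):

iptables -I INPUT -p tcp --dport 53 -j ACCEPT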

Print the current rule set to verify the rule is in place (iptables-save only writes the rules to standard output; on its own it does not make them persistent):

iptables-save
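
To keep the rule across reboots, one common approach on Ubuntu/Debian is the iptables-persistent package, which restores /etc/iptables/rules.v4 at boot (this is an assumption about your setup; other persistence mechanisms work just as well):

apt install iptables-persistent
iptables-save > /etc/iptables/rules.v4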

CentOS

Install system updates:

yum update

Install the time synchronization utility:

yum install chrony

Set your timezone, for example:

timedatectl set-timezone Europe/Nicosia

Enable and start the time synchronization service:

systemctl enable chronyd --now

Open port 53:

firewall-cmd --permanent --add-port=53/udp
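
DNS also uses TCP on port 53 for zone transfers and larger responses. If you need that, open it as well (optional; firewalld also has a predefined dns service that covers both protocols):

firewall-cmd --permanent --add-port=53/tcp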

Apply the updated firewall rules:

firewall-cmd --reload

Installing the DNS Server

This guide uses BIND 9 to run a DNS server on your own IP addresses.

Ubuntu/Debian

Install required packages:

apt-get install bind9 dnsutils

Enable autostart:

systemctl enable bind9

Start the service:

systemctl start bind9

Check if it's running:

systemctl status bind9

Look for active status in the output.

CentOS

Install the DNS utility:

yum install bind

Enable autostart:

systemctl enable named

Start the service:

systemctl start named

Check its status:

systemctl status named

You should see active in the output.

Basic DNS Server Configuration

The settings are defined in the configuration file.

Ubuntu/Debian

Open the config file:

vi /etc/bind/named.conf.options

In the listen-on block, specify the networks, e.g.:

listen-on {
    10.10.10.0/24;
    10.1.0.0/16;
};

To allow the DNS server to listen on all interfaces, either omit this line or use any.

In the allow-query line, specify who can make queries:

allow-query { any; };
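
Putting it together, a minimal options block might look like this (the directory path is the Debian/Ubuntu default, and the forwarders shown are Google's public resolvers; adjust both to your environment):

options {
    directory "/var/cache/bind";

    listen-on { 10.10.10.0/24; };
    allow-query { any; };

    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
};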

Restart the service for changes to take effect:

systemctl restart bind9

CentOS

Open the config file:

vi /etc/named.conf

Find these lines:

listen-on port 53 { 127.0.0.1; localhost; 192.172.160.14; };
...
allow-query     { any; };

In the listen-on line, after localhost, specify the DNS IP address. This is the IP on which the host will accept queries. Use any to listen on all addresses.

In the allow-query line, define query permissions. any allows queries from everyone. You can also restrict it to a specific subnet, e.g., 192.172.160.0/24.
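
For example, to answer queries on the server's public address and accept them only from one subnet, the edited lines could look like this (the address and subnet are illustrative):

listen-on port 53 { 127.0.0.1; 192.172.160.14; };
allow-query     { localhost; 192.172.160.0/24; };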

Apply the config:

systemctl restart named

Global Options

Besides the basics, you can fine-tune the server using other global parameters:

  • directory – Working directory of the server (/var/named in the default CentOS configuration).
  • forwarders – IP addresses to forward unresolved queries to (e.g., Google's public DNS):

forwarders {
    8.8.8.8;
    8.8.4.4;
};

  • forward – FIRST or ONLY. FIRST tries the forwarders first and falls back to resolving internally; ONLY never resolves internally.
  • listen-on – Addresses and port that BIND listens on (port 53 by default).
  • allow-transfer – Hosts allowed to request zone transfers.
  • allow-query – Hosts allowed to send DNS queries.
  • allow-notify – Hosts (in addition to the zone's primary) allowed to send zone change notifications to this server.
  • allow-recursion – Hosts that can make recursive queries through this server (recent BIND 9 versions default this to localhost and local networks).
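
For example, to limit zone transfers to a single secondary server and recursion to the local network, you could add lines like these to the options block (the addresses are illustrative assumptions):

allow-transfer  { 192.172.160.15; };
allow-recursion { 127.0.0.1; 10.10.10.0/24; };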

Testing

To check if the DNS server accepts queries from clients, use the nslookup utility.

From another computer:

nslookup site-example.com 192.172.160.14

This checks the IP address of site-example.com using DNS server 192.172.160.14.

Alternatively, use dig:

dig @192.172.160.14 site-example.com

It works similarly, just with a different syntax and more detailed output.
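
If you are testing directly on the DNS host itself, you can query the loopback address instead:

dig @127.0.0.1 site-example.com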

BIND Zones

Basic DNS server setup is complete. Now, let’s talk about usage. For that, you configure zones:

  • Primary zone – You create and edit domain records directly on this host.
  • Secondary zone – This host pulls data from a primary DNS server.
  • Stub zone – Stores only NS records used for redirection.
  • Caching-only zone – Doesn’t store records; only caches query results for performance.

Zone management is handled in the config file and is a larger topic. Creating your own zone lets you assign friendly names to each host, which is helpful when you manage many nodes and don't want to work with raw IP addresses.
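
As a rough illustration, a primary zone is declared in the BIND configuration and backed by a zone file (the domain, addresses, and file path below are assumptions for the example):

zone "site-example.com" {
    type master;
    file "/etc/bind/db.site-example.com";
};

A minimal zone file for it could look like this:

$TTL 86400
@    IN  SOA  ns1.site-example.com. admin.site-example.com. (
         2025041001 ; serial
         3600       ; refresh
         900        ; retry
         604800     ; expire
         86400 )    ; negative cache TTL
     IN  NS   ns1.site-example.com.
ns1  IN  A    192.172.160.14
www  IN  A    192.172.160.15

After adding a zone, reload the service (bind9 on Ubuntu/Debian, named on CentOS) for the changes to take effect.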
