Setting Up a BIND DNS Server
Hostman Team
Technical writer
Ubuntu
19.07.2024
Reading time: 14 min

DNS (Domain Name System) organizes the domain names of servers into a hierarchy. Why do we need it? Imagine needing to connect to a device with the IP address 91.206.179.207. You could enter this address directly, but remembering many such numeric combinations is very difficult. Special servers were therefore created to convert domain names into IP addresses. For example, when you enter hostman.com in your browser's address bar, the request is sent to a DNS server, which looks for a match in its database. The DNS server returns the corresponding IP address to your device, and only then does the browser contact the resource directly.
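
You can observe such a lookup yourself with a standard DNS client such as dig (part of the dnsutils package installed later in this guide); the addresses returned will depend on the domain you query:

dig hostman.com A +short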

Configuring your own DNS allows for more flexible and precise system configuration and avoids reliance on third parties. In this article, we will look at how to set up DNS using the BIND nameserver on Ubuntu.

Terms

  • Zone: A part of the DNS hierarchy hosted on a DNS server. It establishes the boundaries within which a specific server or group of servers is responsible.

  • Root Servers: DNS servers containing information about top-level domains (.ru, .com, etc.).

  • Domain: A named part of the DNS hierarchy, a specific node that includes other nodes. DNS names are read from right to left, starting from the root dot, with domains separated by dots. For example, the name subdomain.domain.ru should be read as .ru.domain.subdomain. The written name usually reflects this hierarchy, but the final (root) dot is omitted.

  • FQDN (Fully Qualified Domain Name): A full domain name including the names of all parent domains.

  • Resource Record: A unit of information storage, essentially a record that links a name to some service information. It consists of:

    • Name (NAME): The domain name (or, in reverse zones, the IP address) that the record belongs to.

    • Time to Live (TTL): The duration a record is stored in the DNS cache before being deleted.

    • Class (CLASS): Network type, usually IN (Internet).

    • Type (TYPE): The record's purpose.

    • Various Information (DATA): Additional details.

Common Resource Records

  • A: Maps a hostname to an IPv4 address. A name may have more than one A record if it resolves to several addresses.
website.com.              520    IN     A      91.206.179.207
  • AAAA: The same as an A record, but for IPv6.
  • CNAME: Canonical name record; an alias that points one name to another (the canonical name).

  • MX: Specifies mail hosts for the domain. The NAME field contains the destination domain, and the DATA field contains the priority and host for receiving mail.
website.com.             17790   IN      MX      10 mx.website.com.
website.com.             17790   IN      MX      20 mx2.website.com.
  • NS: Points to the DNS server servicing the domain.

  • PTR: IP address to domain name mapping, needed for reverse name resolution.

  • SOA: Describes the main zone settings.

  • SRV: Contains addresses of servers providing internal domain services, such as Jabber.
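
For example, an SRV record advertising an XMPP (Jabber) server might look like this (an illustrative record; the service name, port, and target are assumptions, not values used elsewhere in this guide):

_xmpp-server._tcp.website.com.   18000   IN   SRV   10 5 5269 jabber.website.com.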

Requirements

To follow the instructions in this article, you need at least two Ubuntu cloud servers in the same data center. You can order them from Hostman.

We will use two Ubuntu 20.04 servers as the primary and secondary DNS servers, ns1 and ns2 respectively. Other servers on the network will then use them for name resolution.

You must have superuser privileges on each server.
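
Optionally, you can set the hostnames ns1 and ns2 on the respective servers so they are easier to tell apart; this is a minimal sketch assuming systemd's hostnamectl is available (it is on Ubuntu 20.04):

sudo hostnamectl set-hostname ns1   # on the primary
sudo hostnamectl set-hostname ns2   # on the secondary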

Installing BIND on DNS Servers

We will use bind9 as the DNS server. Install the bind9 package from the Ubuntu repositories:

sudo apt update && sudo apt upgrade -y
sudo apt install bind9

Additionally, it is recommended to install network monitoring tools:

sudo apt install dnsutils

After installation, start the bind9 service:

sudo service bind9 start
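
You can confirm that the service is running and check the installed version (the status command uses the same service wrapper as the rest of this guide):

sudo service bind9 status
named -v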

The main configuration file of the server is /etc/bind/named.conf. It describes the general settings and is usually split into several others for convenience. DNS setup begins by working with the parameters inside this file.

named.conf.options

This file contains the general server parameters. We will specify the DNS configuration data in it.

options {
        dnssec-validation auto;
        auth-nxdomain no;
        directory "/var/cache/bind";
        recursion no; # disallow recursive queries to the nameserver

        listen-on {
                     172.16.0.0/16; 
                     127.0.0.0/8;    
        };

        forwarders { 
            172.16.0.1;
            8.8.8.8;  
        };
};

To verify that everything is entered correctly, use one of the named daemon utilities, named-checkconf.

sudo named-checkconf

If the configuration is valid, the command produces no output and the bind server can continue running with these settings.

Primary DNS Server

The primary DNS server stores the main copy of the zone data file. All zones will be stored in the /etc/bind/master-zones directory of the primary DNS server. Create the directory:

sudo mkdir /etc/bind/master-zones

Create a file to describe the zone:

sudo touch /etc/bind/master-zones/test.example.com.local.zone

And add SOA, NS, and A records to it:

$TTL 3600
$ORIGIN test.example.com.
test.example.com.       IN      SOA     ns.test.example.com. abuse.test.example.com. (
                                2022041201      ; Serial
                                10800           ; Refresh
                                1200            ; Retry
                                604800          ; Expire
                                3600 )          ; Minimum TTL

@                       IN      NS      ns.test.example.com.
@                       IN      NS      ns2.test.example.com.

@                       IN      A       172.16.101.3
ns                      IN      A       172.16.0.5
ns2                     IN      A       172.16.0.6

Next, run the check with the utility named-checkzone.

sudo named-checkzone test.example.com. /etc/bind/master-zones/test.example.com.local.zone
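
If the zone file is valid, the output should look roughly like this (the serial will match the one in your SOA record):

zone test.example.com/IN: loaded serial 2022041201
OK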

named.conf.local

This is another file included in the server's main configuration. We will specify local zones in it:

zone "test.example.com." {
                type master;
                file "/etc/bind/master-zones/test.example.com.local.zone";
};

After entering the necessary data, check the config and restart bind9 (the -z flag checks zone files):

sudo named-checkconf
sudo named-checkconf -z
sudo service bind9 restart
sudo service bind9 status

Setting Up Views

Views allow flexible management of name resolution from different subnets. Specify in the /etc/bind/named.conf file:

include "/etc/bind/named.conf.options";

acl "local" { 172.16.0.0/16; };
view "local" {
                include "/etc/bind/named.conf.local";
                match-clients { local; };
};

In the same file, you can add directives specifying which hosts and network addresses to accept or reject requests from. Then, restart bind9:

sudo service bind9 restart

After the server restarts, you can request the SOA record for the server 172.16.0.5 from another computer on the local network:

dig @172.16.0.5 -t SOA test.example.com
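
If the view and zone are configured correctly, the ANSWER SECTION of the response should contain the SOA record you defined, roughly like this:

test.example.com.  3600  IN  SOA  ns.test.example.com. abuse.test.example.com. 2022041201 10800 1200 604800 3600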

At this stage, the primary DNS server setup is complete. The next sections cover the secondary server, mail server setup, and reverse zone configuration.

Secondary Server

The initial steps are the same as for the primary server — installing bind9 and network utilities:

sudo apt update && sudo apt upgrade -y
sudo apt install bind9
sudo apt install dnsutils
sudo service bind9 start

Next, to store zone files, create the /etc/bind/slave directory and grant the necessary permissions:

sudo mkdir /etc/bind/slave
sudo chmod g+w /etc/bind/slave
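
Note that g+w only has an effect if the directory's group is the one the named process runs as (bind on Ubuntu); if the directory was created as root:root, you may also need to change its group, for example:

sudo chown root:bind /etc/bind/slave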

Proceed to configure the zone on the secondary server. Add the zone to the /etc/bind/named.conf.local file:

zone "test.example.com." {
        type slave;
        file "/etc/bind/slave/test.example.com.local.zone";
        masters { 172.16.0.5; };
};

And set up views in the main configuration file named.conf:

include "/etc/bind/named.conf.options";
acl "local" { 172.16.0.0/16; };
view "local" {
        match-clients { local; };
        include "/etc/bind/named.conf.local";
};

After adding the settings, check the syntax, and then restart bind9:

sudo named-checkconf
sudo named-checkconf -z
sudo service bind9 restart

If there are no errors, perform the zone transfer:

sudo rndc retransfer test.example.com

The rndc retransfer command forces a zone transfer without comparing serial numbers. Briefly, the primary (ns1) and secondary (ns2) DNS servers interact as follows: when deciding whether to transfer, ns2 looks only at the zone's serial number and ignores the rest of the zone file. If the serial number has not increased, the zone transfer does not happen. Therefore, it is crucial to increment the serial number every time you edit the zone. A common convention is to use the current date plus an incremental change number.
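
To confirm that the secondary really holds the current version of the zone, you can compare the serial numbers reported by both servers with dig (from the dnsutils package installed earlier); the two values should match:

dig @172.16.0.5 test.example.com SOA +short
dig @172.16.0.6 test.example.com SOA +short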

Once you have set up the server and performed the zone transfer, you need to restrict the transfer to the secondary server’s IP address in the named.conf configuration on the primary server. To do this, add the allow-transfer directive with the IP address of the secondary DNS server in named.conf:

zone "test.example.com." {
    type master;
    allow-transfer { 172.16.0.6; };
    file "/etc/bind/master-zones/test.example.com.local.zone";
};

Then restart the server:

sudo service bind9 restart
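
To verify the restriction, you can request a full zone transfer (AXFR) with dig: from the secondary server (172.16.0.6) it should return the complete zone, while from any other host it should now be refused:

dig @172.16.0.5 test.example.com AXFR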

After this step, all further operations will be performed on the primary server.

Adding an MX Record

In this example, we use mx as the hostname since it is a commonly accepted designation. Therefore, the FQDN (Fully Qualified Domain Name) will be mx.test.example.com.

To add an MX record:

1) Add the mail resource records to the zone file located at /etc/bind/master-zones/test.example.com.local.zone.

; Add the MX records to the zone file
@   IN  MX  10 mx.test.example.com.
@   IN  MX  20 mx2.test.example.com.

This adds two MX records with different priorities for the domain test.example.com.

2)  Update the serial number in the SOA (Start of Authority) record to reflect the changes.

$TTL 3600
@   IN  SOA ns.test.example.com. admin.test.example.com. (
        2024071101  ; Serial number
        10800       ; Refresh
        1200        ; Retry
        604800      ; Expire
        3600        ; Minimum TTL
)

3) Verify the zone file syntax with the following command:

sudo named-checkzone test.example.com. /etc/bind/master-zones/test.example.com.local.zone

This command checks the syntax of the zone file to ensure there are no errors.

4) Apply the changes by reloading BIND:

sudo service bind9 reload

This command reloads the BIND DNS server configuration to apply the updates made to the zone file.
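
You can then confirm that the new records are being served, for example by querying the primary server for the MX records:

dig @172.16.0.5 -t MX test.example.com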

Reverse DNS Setup

Reverse DNS converts IP addresses back to domain names, the opposite of forward resolution.

For example, the IP address 192.168.1.10 is represented in reverse notation as 10.1.168.192.in-addr.arpa.

Because a hierarchical model is used, the management of the zone can be delegated to the owner of the IP address range. Essentially, a PTR record defines a domain name based on an IP address, which is conceptually similar to an A record. PTR records are primarily used for verifying mail servers.

To configure the reverse lookup zone, create a new zone file:

sudo nano /etc/bind/master-zones/16.172.in-addr.arpa.zone

And add the following data:

$TTL    3600
16.172.in-addr.arpa.            IN      SOA     ns.test.example.com. admin.test.example.com. (
                                2022041202      ; Serial
                                10800           ; Refresh
                                1200            ; Retry
                                604800          ; Expire
                                3600 )          ; Minimum TTL
                                IN      NS      ns.test.example.com.
                                IN      NS      ns2.test.example.com.

3.101.16.172.in-addr.arpa.      IN      PTR     test.example.com.
5.0.16.172.in-addr.arpa.        IN      PTR     ns.test.example.com.
6.0.16.172.in-addr.arpa.        IN      PTR     ns2.test.example.com.
2.101.16.172.in-addr.arpa.      IN      PTR     mail.test.example.com.

Check the configuration:

sudo named-checkzone 16.172.in-addr.arpa /etc/bind/master-zones/16.172.in-addr.arpa.zone

Then, open named.conf.local:

sudo nano /etc/bind/named.conf.local

And specify the following zone:

zone "16.172.in-addr.arpa." {
                type master;
                file "/etc/bind/master-zones/16.172.in-addr.arpa.zone";
                allow-transfer { 172.16.0.6; };
        };

Restart the bind9 service:

sudo named-checkconf
sudo named-checkconf -z
sudo service bind9 restart

Check with the dig utility:

dig @172.16.0.5 -x 172.16.0.5

Now you can perform a similar setup on the secondary server. Add the following configuration to named.conf.local:

zone "16.172.in-addr.arpa." { 
    type slave; 
    file "/etc/bind/slave/16.172.in-addr.arpa.zone"; 
    masters { 172.16.0.5; }; 
};
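
As with the forward zone, check the configuration on the secondary, restart bind9, and trigger the transfer of the reverse zone:

sudo named-checkconf -z
sudo service bind9 restart
sudo rndc retransfer 16.172.in-addr.arpa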

At this stage, we have completed work with local domain zones. You can now proceed to configure the external domain zone.

External Domain Zone

First, to handle queries from the external network, add the external IP address to the listen-on directive in the named.conf.options configuration file:

listen-on {
    aaa.bbb.ccc.ddd/32; # our external IP
    172.16.0.0/16;
    127.0.0.0/8;
};

Next, create the zone file (don't forget to change the serial number!) and add the external IP addresses to it:

sudo nano /etc/bind/master-zones/test.example.com.zone

Add the following content to the file:

$TTL 3600
$ORIGIN test.example.com.
test.example.com.       IN      SOA     ns.test.example.com. admin.test.example.com. (
                                2022041205      ; Serial
                                10800           ; Refresh
                                1200            ; Retry
                                604800          ; Expire
                                3600 )          ; Minimum TTL
@                       IN      NS      ns.test.example.com.
@                       IN      NS      ns2.test.example.com.
@                       IN      A       aaa.bbb.ccc.ddd ; first external address
ns                      IN      A       aaa.bbb.ccc.ddd
ns2                     IN      A       eee.fff.ggg.hhh ; second external address

Then, create a separate file for the external view zones to serve different domain zones to clients from different subnets:

sudo nano /etc/bind/named.conf.external

Add the following content to the file:

zone "test.example.com." { 
    type master; 
    file "/etc/bind/master-zones/test.example.com.zone";
    allow-transfer { 172.16.0.6; };
};

After this, include the file in named.conf by adding the following block:

acl "external-view" { aaa.bbb.ccc.ddd; };
view "external-view" {
    recursion no;
    match-clients { external-view; };
    include "/etc/bind/named.conf.external";
};

Now check this zone and restart BIND9:

sudo named-checkconf -z
sudo named-checkzone test.example.com. /etc/bind/master-zones/test.example.com.zone
sudo service bind9 restart
sudo service bind9 status

On the secondary DNS server, you need to specify the external server address in named.conf.options:

sudo nano /etc/bind/named.conf.options

Add the following configuration:

options {
    dnssec-validation auto;
    auth-nxdomain no;
    recursion no;
    directory "/var/cache/bind";
    listen-on {
        eee.fff.ggg.hhh/24;
        172.16.0.0/16;
        127.0.0.0/8;
    };
};

Similarly to the primary server, create a new named.conf.external file:

sudo nano /etc/bind/named.conf.external

Add the following content to the file:

zone "test.example.com." {
    type slave;
    file "/etc/bind/slave/test.example.com.zone"; 
    masters { 172.16.0.5; };
};

Then add the following block to named.conf:

acl "external-view" { eee.fff.ggg.hhh; }; 
view "external-view" { 
    recursion no; 
    match-clients { external-view; }; 
    include "/etc/bind/named.conf.external"; 
};

And perform the transfer:

sudo rndc retransfer test.example.com IN external-view
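
Once the transfer completes, you can check that both servers answer queries from the external network, for example (run from a host outside 172.16.0.0/16, substituting your real external addresses):

dig @aaa.bbb.ccc.ddd test.example.com A
dig @eee.fff.ggg.hhh test.example.com A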

Debugging

When setting up a DNS server, it is very important to pay close attention to query logging. It helps with initial troubleshooting, and during normal operation it gives you full visibility into how the service is being used.

BIND9 supports flexible logging configuration: you can write everything to a single file, split different categories into separate logs, and so on.

To write debugging information to one file, you need to create logging rules and include them in the main configuration. Create a log.conf file:

sudo nano /etc/bind/log.conf

Add the following content:

logging {
    channel bind.log {
        file "/var/lib/bind/bind.log" versions 10 size 20m;
        severity debug;
        print-category yes;
        print-severity yes;
        print-time yes;
    };
    category queries { bind.log; };
    category default { bind.log; };
    category config { bind.log; };
};

Then include the file in the main configuration:

include "/etc/bind/log.conf";

And restart BIND9:

sudo service bind9 restart
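
You can then watch queries arrive in real time and, if needed, toggle query logging at runtime with rndc:

sudo tail -f /var/lib/bind/bind.log
sudo rndc querylog on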

You can create multiple such files with different settings and include them depending on the development stage or server load.

Conclusion

In this guide, we configured DNS on a server running Ubuntu OS using the bind9 package. After following the steps, the two configured DNS servers can be used for name resolution on the network. To use the custom DNS servers, configure your other servers to use 172.16.0.5 and 172.16.0.6 as their DNS servers. 
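
For example, on an Ubuntu client managed by netplan, the DNS servers can be set roughly like this (an illustrative sketch; the file name, interface name, and addressing will differ on your system):

# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      nameservers:
        addresses: [172.16.0.5, 172.16.0.6]

Apply the change with sudo netplan apply.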

This setup can serve as the foundation for further enhancements, such as setting up an email server.
