Setting Up a BIND DNS Server

Hostman Team
Technical writer
Ubuntu
19.07.2024
Reading time: 14 min

The Domain Name System (DNS) organizes the domain names of servers into a hierarchy. Why do we need it? Imagine needing to connect to a device with the IP address 91.206.179.207. You could enter this address in the command line to get the information you need, but remembering many such numeric combinations is very difficult. Special servers were therefore created to translate domain names into IP addresses. For example, when you enter hostman.com in your browser's address bar, the request is sent to a DNS server, which looks up the name in its database, returns the corresponding IP address to your device, and only then does the browser contact the resource directly.

Configuring your own DNS allows for more flexible and precise system configuration and avoids reliance on third parties. In this article, we will look at how to set up DNS using the BIND nameserver on Ubuntu.

Terms

  • Zone: A part of the DNS hierarchy hosted on a DNS server. It establishes the boundaries within which a specific server or group of servers is responsible.

  • Root Servers: DNS servers containing information about top-level domains (.ru, .com, etc.).

  • Domain: A named part of the DNS hierarchy, a specific node that may include other nodes. Domain names are read from right to left, starting from the root, with labels separated by dots. For example, the domain subdomain.domain.ru is read hierarchically as .ru.domain.subdomain. Usually, the written name mirrors this hierarchy, but the final (root) dot is omitted.

  • FQDN (Fully Qualified Domain Name): A full domain name including the names of all parent domains.

  • Resource Record: A unit of information storage, essentially a record that links a name to some service information. It consists of:

    • Name (NAME): The domain name (or, in reverse zones, the IP-derived name) that the record belongs to.

    • Time to Live (TTL): The duration a record is stored in the DNS cache before being deleted.

    • Class (CLASS): Network type, usually IN (Internet).

    • Type (TYPE): The record's purpose.

    • Various Information (DATA): Additional details.

Common Resource Records

  • A: Maps a hostname to an IPv4 address. Each A record holds a single address, and one hostname can have several A records.
website.com.              520    IN     A      91.206.179.207
  • AAAA: The same as an A record, but for IPv6.
  • CNAME: Canonical name record, an alias for a real name for redirection.
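For example, a hypothetical alias pointing www to the main name:
www.website.com.          3600   IN     CNAME  website.com.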

  • MX: Specifies mail hosts for the domain. The NAME field contains the destination domain, and the DATA field contains the priority and host for receiving mail.
website.com.             17790   IN      MX      10 mx.website.com.
website.com.             17790   IN      MX      20 mx2.website.com.
  • NS: Points to the DNS server servicing the domain.

  • PTR: IP address to domain name mapping, needed for reverse name resolution.
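For example, reversing the address from the introduction (hypothetical record):
207.179.206.91.in-addr.arpa.   3600   IN     PTR    hostman.com.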

  • SOA: Describes the main zone settings.

  • SRV: Contains addresses of servers providing internal domain services, such as Jabber.

Requirements

To follow the instructions in this article, you need at least two Ubuntu cloud servers in the same data center. Any of these servers can be ordered from Hostman. 

We will need two Ubuntu 20.04 servers, used as the primary and secondary DNS servers, ns1 and ns2, respectively. In addition, other servers on the network will act as clients of the DNS servers we configure.

You must have superuser privileges on each server.

Installing BIND on DNS Servers

We will use bind9 as the DNS server. Install the bind9 package from the Linux repository:

sudo apt update && sudo apt upgrade -y
sudo apt install bind9

Additionally, it is recommended to install network monitoring tools:

sudo apt install dnsutils

After installation, start the bind9 service:

sudo service bind9 start

The main configuration file of the server is /etc/bind/named.conf. It describes the general settings and is usually split into several others for convenience. DNS setup begins by working with the parameters inside this file.
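On Ubuntu, the stock named.conf usually contains little more than include directives that pull in the split configuration files:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";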

named.conf.options

This file contains the general server parameters. We will specify the DNS configuration data in it.

options {
        dnssec-validation auto;
        auth-nxdomain no;
        directory "/var/cache/bind";
        recursion no; # disallow recursive queries to the nameserver

        listen-on {
                     172.16.0.0/16; 
                     127.0.0.0/8;    
        };

        forwarders { 
            172.16.0.1;
            8.8.8.8;  
        };
};

To verify that everything is entered correctly, use one of the named daemon utilities, named-checkconf.

sudo named-checkconf

If the command produces no output, the configuration is syntactically correct and the server can use it.

Primary DNS Server

The primary DNS server stores the main copy of the zone data file. All zones will be stored in the /etc/bind/master-zones directory of the primary DNS server. Create the directory:

sudo mkdir /etc/bind/master-zones

Create a file to describe the zone:

sudo touch /etc/bind/master-zones/test.example.com.local.zone

And add SOA, NS, and A records to it:

$TTL 3600
$ORIGIN test.example.com.
test.example.com.    IN    SOA    ns.test.example.com. abuse.test.example.com. (
                                  2022041201   ; serial
                                  10800        ; refresh
                                  1200         ; retry
                                  604800       ; expire
                                  3600 )       ; minimum TTL

@                    IN    NS     ns.test.example.com.
@                    IN    NS     ns2.test.example.com.

@                    IN    A      172.16.101.3
ns                   IN    A      172.16.0.5
ns2                  IN    A      172.16.0.6

Next, run the check with the utility named-checkzone.

sudo named-checkzone test.example.com. /etc/bind/master-zones/test.example.com.local.zone
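If the zone file is valid, the output looks roughly like this:

zone test.example.com/IN: loaded serial 2022041201
OK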

named.conf.local

This is another file included in the server's main configuration. We will specify local zones in it:

zone "test.example.com." {
                type master;
                file "/etc/bind/master-zones/test.example.com.local.zone";
};

After entering the necessary data, check the config and restart bind9 (the -z flag checks zone files):

sudo named-checkconf
sudo named-checkconf -z
sudo service bind9 restart
sudo service bind9 status

Setting Up Views

Views allow flexible management of name resolution from different subnets. Specify in the /etc/bind/named.conf file:

include "/etc/bind/named.conf.options";

acl "local" { 172.16.0.0/16; };
view "local" {
                include "/etc/bind/named.conf.local";
                match-clients { local; };
};

In the same file, you can add directives that control which hosts and network addresses the server accepts or rejects queries from. Then, restart bind9:

sudo service bind9 restart

After the server restarts, you can request the SOA record for the server 172.16.0.5 from another computer on the local network:

dig @172.16.0.5 -t SOA test.example.com
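The answer section of the response should contain the SOA record defined earlier, roughly:

;; ANSWER SECTION:
test.example.com.    3600    IN    SOA    ns.test.example.com. abuse.test.example.com. 2022041201 10800 1200 604800 3600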

At this stage, the primary DNS server setup is complete. The next sections cover the secondary server, mail server setup, and reverse zone configuration.

Secondary Server

The initial steps are the same as for the primary server: install bind9 and the network utilities, then start the service:

sudo apt update && sudo apt upgrade -y
sudo apt install bind9
sudo apt install dnsutils
sudo service bind9 start

Next, to store zone files, create the /etc/bind/slave directory and grant the necessary permissions:

sudo mkdir /etc/bind/slave
sudo chmod g+w /etc/bind/slave

Proceed to configure the zone on the secondary server. Add the zone to the /etc/bind/named.conf.local file:

zone "test.example.com." {
        type slave;
        file "/etc/bind/slave/test.example.com.local.zone";
        masters { 172.16.0.5; };
};

And set up views in the main configuration file named.conf:

include "/etc/bind/named.conf.options";
acl "local" { 172.16.0.0/16; };
view "local" {
        match-clients { local; };
        include "/etc/bind/named.conf.local";
};

After adding the settings, check the syntax, and then restart bind9:

sudo named-checkconf
sudo named-checkconf -z
sudo service bind9 restart

If there are no errors, perform the zone transfer:

sudo rndc retransfer test.example.com
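The command normally produces no output on success; the transferred zone file should appear in the slave directory, which you can confirm with:

ls -l /etc/bind/slave/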

The rndc retransfer command forces a zone transfer without checking serial numbers. Briefly, the primary (ns1) and secondary (ns2) DNS servers interact as follows: ns2 compares the zone's serial number on ns1 with its own copy and ignores the rest of the zone file's content; if the serial number has not increased, the zone is not transferred. It is therefore crucial to increment the serial number every time you edit the zone. A common convention is the current date followed by a change counter (for example, 2024071101 for the first change on 11 July 2024).

Once you have set up the server and performed the zone transfer, you need to restrict the transfer to the secondary server’s IP address in the named.conf configuration on the primary server. To do this, add the allow-transfer directive with the IP address of the secondary DNS server in named.conf:

zone "test.example.com." {
    type master;
    allow-transfer { 172.16.0.6; };
    file "/etc/bind/master-zones/test.example.com.local.zone";
};

Then restart the server:

sudo service bind9 restart

After this step, all further operations will be performed on the primary server.

Adding an MX Record

In this example, we use mx as the hostname since it is a commonly accepted designation. Therefore, the FQDN (Fully Qualified Domain Name) will be mx.test.example.com.

To add an MX record:

1) Add the mail resource records to the zone file located at /etc/bind/master-zones/test.example.com.local.zone.

; Add the MX records to the zone file
@   IN  MX  10 mx.test.example.com.
@   IN  MX  20 mx2.test.example.com.

This adds two MX records with different priorities for the domain test.example.com.

2)  Update the serial number in the SOA (Start of Authority) record to reflect the changes.

$TTL 3600
@   IN  SOA ns.test.example.com. admin.test.example.com. (
        2024071101  ; Serial number
        10800       ; Refresh
        1200        ; Retry
        604800      ; Expire
        3600        ; Minimum TTL
)

3) Verify the zone file syntax with the following command:

sudo named-checkzone test.example.com. /etc/bind/master-zones/test.example.com.local.zone

This command checks the syntax of the zone file to ensure there are no errors.

4) Apply the changes by reloading BIND:

sudo service bind9 reload

This command reloads the BIND DNS server configuration to apply the updates made to the zone file.
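To confirm that the new mail records are being served, you can query them from another host on the network (172.16.0.5 is the primary server's address used throughout this guide):

dig @172.16.0.5 -t MX test.example.com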

Reverse DNS Setup

Reverse DNS resolution is the opposite of forward resolution: it maps IP addresses back to domain names.

For example, the IP address 192.168.1.10 is represented in reverse notation as 10.1.168.192.in-addr.arpa.

Because a hierarchical model is used, the management of the zone can be delegated to the owner of the IP address range. Essentially, a PTR record defines a domain name based on an IP address, which is conceptually similar to an A record. PTR records are primarily used for verifying mail servers.

To configure the reverse lookup zone, create a new zone file:

sudo nano /etc/bind/master-zones/16.172.in-addr.arpa.zone

And add the following data:

$TTL 3600
16.172.in-addr.arpa.    IN    SOA    ns.test.example.com. admin.test.example.com. (
                                     2022041202   ; serial
                                     10800        ; refresh
                                     1200         ; retry
                                     604800       ; expire
                                     3600 )       ; minimum TTL
                        IN    NS     ns.test.example.com.
                        IN    NS     ns2.test.example.com.

3.101.16.172.in-addr.arpa.    IN    PTR    test.example.com.
5.0.16.172.in-addr.arpa.      IN    PTR    ns.test.example.com.
6.0.16.172.in-addr.arpa.      IN    PTR    ns2.test.example.com.
2.101.16.172.in-addr.arpa.    IN    PTR    mail.test.example.com.

Check the configuration:

sudo named-checkzone 16.172.in-addr.arpa /etc/bind/master-zones/16.172.in-addr.arpa.zone

Then, open named.conf.local:

sudo nano /etc/bind/named.conf.local

And specify the following zone:

zone "16.172.in-addr.arpa." {
                type master;
                file "/etc/bind/master-zones/16.172.in-addr.arpa.zone";
                allow-transfer { 172.16.0.6; };
        };

Restart the bind9 service:

sudo named-checkconf
sudo named-checkconf -z
sudo service bind9 restart

Check with the dig utility:

dig @172.16.0.5 -x 172.16.0.5
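The answer section should return the PTR record for the name server, roughly:

;; ANSWER SECTION:
5.0.16.172.in-addr.arpa.    3600    IN    PTR    ns.test.example.com.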

Now you can perform a similar setup on the secondary server. Add the following configuration to named.conf.local:

zone "16.172.in-addr.arpa." { 
    type slave; 
    file "/etc/bind/slave/16.172.in-addr.arpa.zone"; 
    masters { 172.16.0.5; }; 
};
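As with the forward zone, check the configuration, restart bind9, and then trigger the transfer of the reverse zone (the zone name matches the configuration above):

sudo named-checkconf -z
sudo service bind9 restart
sudo rndc retransfer 16.172.in-addr.arpa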

At this stage, we have completed work with local domain zones. You can now proceed to configure the external domain zone.

External Domain Zone

First, to handle queries from the external network, add the external IP address to the listen-on directive in the named.conf.options configuration file:

listen-on {
    aaa.bbb.ccc.ddd/32;  # our external IP
    172.16.0.0/16;
    127.0.0.0/8;
};

Next, create the zone file (don't forget to change the serial number!) and add the external IP addresses to it:

sudo nano /etc/bind/master-zones/test.example.com.zone

Add the following content to the file:

$TTL 3600
$ORIGIN test.example.com.
test.example.com.    IN    SOA    ns.test.example.com. admin.test.example.com. (
                                  2022041205   ; serial
                                  10800        ; refresh
                                  1200         ; retry
                                  604800       ; expire
                                  3600 )       ; minimum TTL

@                    IN    NS     ns.test.example.com.
@                    IN    NS     ns2.test.example.com.
@                    IN    A      aaa.bbb.ccc.ddd   ; first external address
ns                   IN    A      aaa.bbb.ccc.ddd
ns2                  IN    A      eee.fff.ggg.hhh   ; second external address

Then, create a separate file for the external view zones to serve different domain zones to clients from different subnets:

sudo nano /etc/bind/named.conf.external

Add the following content to the file:

zone "test.example.com." { 
    type master; 
    file "/etc/bind/master-zones/test.example.com.zone";
    allow-transfer { 172.16.0.6; };
};

After this, include the file in named.conf by adding the following block:

acl "external-view" { aaa.bbb.ccc.ddd; };
view "external-view" {
    recursion no;
    match-clients { external-view; };
    include "/etc/bind/named.conf.external";
};

Now check this zone and restart BIND9:

sudo named-checkconf -z
sudo named-checkzone test.example.com. /etc/bind/master-zones/test.example.com.zone
sudo service bind9 restart
sudo service bind9 status

On the secondary DNS server, you need to specify the external server address in named.conf.options:

sudo nano /etc/bind/named.conf.options

Add the following configuration:

options {
    dnssec-validation auto;
    auth-nxdomain no;
    recursion no;
    directory "/var/cache/bind";
    listen-on {
        eee.fff.ggg.hhh/24;
        172.16.0.0/16;
        127.0.0.0/8;
    };
};

Similarly to the primary server, create a new named.conf.external file:

sudo nano /etc/bind/named.conf.external

Add the following content to the file:

zone "test.example.com." {
    type slave;
    file "/etc/bind/slave/test.example.com.zone"; 
    masters { 172.16.0.5; };
};

Then add the following block to named.conf:

acl "external-view" { eee.fff.ggg.hhh; }; 
view "external-view" { 
    recursion no; 
    match-clients { external-view; }; 
    include "/etc/bind/named.conf.external"; 
};

And perform the transfer:

sudo rndc retransfer test.example.com IN external-view
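To confirm that external clients receive the public addresses, you can query the external interface from a host outside the local network (aaa.bbb.ccc.ddd stands for the primary server's external IP used above):

dig @aaa.bbb.ccc.ddd -t A test.example.com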

Debugging

When setting up a DNS server, it is very important to pay close attention to query logging. This helps with initial troubleshooting, and during normal server operation, it allows you to fully control the services.

BIND9 allows comprehensive configuration of logging rules: writing everything to a single file, separating different categories into different logs, and so on.

To write debugging information to one file, you need to create logging rules and include them in the main configuration. Create a log.conf file:

sudo nano /etc/bind/log.conf

Add the following content:

logging {
    channel bind.log {
        file "/var/lib/bind/bind.log" versions 10 size 20m;
        severity debug;
        print-category yes;
        print-severity yes;
        print-time yes;
    };
    category queries { bind.log; };
    category default { bind.log; };
    category config { bind.log; };
};

Then include the file in the main configuration:

include "/etc/bind/log.conf";

And restart BIND9:

sudo service bind9 restart
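To watch queries arrive in real time, you can tail the log file configured above:

sudo tail -f /var/lib/bind/bind.log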

You can create multiple such files with different settings and include them depending on the development stage or server load.

Conclusion

In this guide, we configured DNS on a server running Ubuntu OS using the bind9 package. After following the steps, the two configured DNS servers can be used for name resolution on the network. To use the custom DNS servers, configure your other servers to use 172.16.0.5 and 172.16.0.6 as their DNS servers. 
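For example, on Ubuntu 20.04 the client servers can be pointed at the new resolvers through netplan (the file name and interface name below are placeholders; adjust them to your environment), followed by sudo netplan apply:

# /etc/netplan/01-netcfg.yaml (example)
network:
  version: 2
  ethernets:
    eth0:
      nameservers:
        addresses: [172.16.0.5, 172.16.0.6]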

This setup can serve as the foundation for further enhancements, such as setting up an email server.
