
How to Install and Configure SSH on an Ubuntu Server

Hostman Team
Technical writer
Ubuntu
24.11.2023
Reading time: 10 min

Secure Shell (SSH) is a network protocol for secure client-server communication. Each interaction is encrypted. It allows you to securely manage the server, transfer files, and perform other tasks. 

For example, you have ordered a cloud server on Hostman and want to manage it from your laptop. To do this, you only need to set up SSH access. Through a secure connection, you will be able to perform all necessary administration actions.

For successful configuration, you need to: 

  1. Install the SSH server components on your server. The openssh-server package will cover that.

  2. Have the SSH client on your local machine from which you will connect to the remote host. 

    For this purpose, the openssh-client package is usually used. It's pre-installed in most Linux and BSD distributions and also in the latest Windows versions. On older versions of Windows, you'll need to install additional utilities. One of the most popular solutions is PuTTY.
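To confirm that a client is already present on your local machine, you can check for the ssh binary before going any further. A minimal sketch (the output file path is arbitrary):

```shell
# Check whether an OpenSSH client is available on this machine.
# The result is printed and also saved to a file for later inspection.
if command -v ssh >/dev/null 2>&1; then
    echo "ssh client found: $(command -v ssh)" | tee /tmp/ssh_check.txt
else
    echo "ssh client not found - install openssh-client (or PuTTY on older Windows)" | tee /tmp/ssh_check.txt
fi
```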

Enabling SSH

By default, a fresh Ubuntu installation does not include an SSH server, so remote access over the protocol is unavailable. However, installing SSH on Ubuntu is very easy.

Open the console of the server where you need to configure SSH.

Update the package manager:

sudo apt update

Install the software:

sudo apt install openssh-server

Both operations require superuser rights, which you get with sudo.

On Ubuntu, the OpenSSH service starts automatically after installation, but you can check its status using the command:

sudo systemctl status ssh

The output should indicate that the service is running and allowed to start on system boot: 

ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-03-21 12:34:00 CEST; 1m ago

This means that the installation was successful. To return to the command prompt, press the q key.

If the service is not active, start it manually with the command:

sudo systemctl enable --now ssh

Ubuntu comes with a firewall configuration tool called UFW. If you have a firewall enabled on your system, be sure to open the SSH port:

sudo ufw allow ssh

Now you can connect to your Ubuntu system via SSH from any remote computer.


Creating an SSH key pair

To make the connection even more secure and authentication more convenient, use an SSH key pair: a public and a private key. The public key is stored on the server, and the private key is stored on the user's computer.

Let's see how to create keys on different operating systems, starting with Ubuntu.

To generate a new RSA key pair (recent OpenSSH releases create 3072-bit RSA keys by default), open a terminal and run the command below:

ssh-keygen -t rsa

A prompt will appear asking you where to save the keys. If you press Enter, the system will save the key pair in the default .ssh subdirectory of the home folder. You can also specify an alternate path where you want to save the key pair. However, it is recommended to use the default directory. It makes further management much easier.

If you have already created a key pair on the client computer, the system will prompt you to overwrite it. The choice is entirely up to you, but be careful. If you choose to overwrite it, you will not be able to use the previous key pair to log in to the server. It will be deleted. Fixing the conflict is easy; just specify a unique name for each new pair. The storage folder can remain the same.

You will also be prompted to enter a passphrase to add an extra layer of security that prevents unauthorized users from accessing the host. Press Enter if you do not want to use it.

To verify that the keys have been created, run the command:

ls -l ~/.ssh/id_*.pub

The terminal will display a list of keys.
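For scripting or quick tests, ssh-keygen can also run non-interactively. A minimal sketch (the file path and comment are arbitrary; -N '' sets an empty passphrase, which you should avoid for real keys):

```shell
# Remove any leftovers so ssh-keygen does not prompt to overwrite
rm -f /tmp/demo_key /tmp/demo_key.pub

# Generate a 4096-bit RSA key pair without any interactive prompts
ssh-keygen -t rsa -b 4096 -f /tmp/demo_key -N '' -C "demo key" -q

# Show the generated files and the key fingerprint
ls -l /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -lf /tmp/demo_key.pub
```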

Similarly, you can generate a pair on macOS or newer Windows versions.

If you're using an older Windows OS, you'll need to download the PuTTY utility suite. It contains the PuTTYgen application. To create an SSH key pair, all you need to do is run PuTTYgen and move your mouse around the blank area of the window to generate randomness. You can also select a folder to store the keys and add a passphrase for extra protection.

Adding the SSH key to the server

The private key is stored on the computer. You should never transfer it to anyone. But you need to transmit the public part to the server.

If you have password access to the host, you can transfer the public key using ssh-copy-id. Example command:

ssh-copy-id hostman@203.0.113.10

Instead of hostman, enter your username; instead of 203.0.113.10, enter your server's IP address. Enter the password when prompted, after which the public key will be transferred to the host.

To connect to the server using the SSH keys, run the command:

ssh hostman@203.0.113.10

Instead of hostman, enter your username; instead of 203.0.113.10, enter your server's IP address. If you have not set a passphrase, you will log in without further authentication. The security system will check that the private key matches the public key on the server and establish a connection if they match.
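Typing the username and address every time gets tedious. A client-side configuration file lets you define a host alias. The sketch below writes a throwaway config and uses ssh -G, which only resolves the configuration without connecting, to show the effect; the alias, username, and address are placeholders:

```shell
# Write a minimal client config with a host alias
cat > /tmp/demo_ssh_config <<'EOF'
Host myserver
    HostName 203.0.113.10
    User hostman
    Port 22
EOF

# ssh -G prints the resolved options for the alias; no connection is made
ssh -F /tmp/demo_ssh_config -G myserver | grep -E '^(user|hostname|port) '
```

In real use, the same entries go into ~/.ssh/config, after which `ssh myserver` is equivalent to the full command above.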

Configuring SSH

You can configure your Ubuntu Server through the /etc/ssh/sshd_config file. Before making changes to it, make a backup copy. It will keep you from wasting time on reinstallation if you suddenly make a mistake.

To make a copy, run the command:

sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.factory-defaults

The /etc/ssh/sshd_config.factory-defaults will store the default settings. You will be editing the /etc/ssh/sshd_config file.

Disabling password authentication

SSH password authentication on the Ubuntu Server isn't inherently bad, but long, complex passwords may tempt you to store them insecurely. Using encryption keys to authenticate the connection is a more secure alternative. In that case, the password becomes unnecessary and you can disable it.

Before proceeding, keep the following in mind:

Disabling password authentication increases the likelihood of being locked out of your server. You can be locked out if you lose your private key or corrupt the ~/.ssh/authorized_keys file.

If you are locked out, you can no longer access any application files.

You should only disable password authentication if you are very familiar with the key authentication mechanism and understand the potential consequences of losing access to your server.

To disable password authentication, connect to the server as root and edit the sshd_config file. Set the PasswordAuthentication parameter to no instead of yes.

Then restart the SSH service by running the following command:

sudo systemctl restart sshd

After that, you will no longer be able to use passwords for authentication. You will only be able to connect using Linux SSH keys.
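After the edit, the relevant directives in /etc/ssh/sshd_config should read roughly as follows (a sketch; PubkeyAuthentication defaults to yes, so that line is often left commented out):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PubkeyAuthentication yes
```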

Disabling root access

To improve security on your remote Ubuntu system, consider disabling root user login via SSH.

To do this, edit the configuration file:

sudo vi /etc/ssh/sshd_config

Change the PermitRootLogin value to no.

Another option is allowing the root user to log in using any authentication mechanism other than a password. To do this, set the PermitRootLogin parameter to prohibit-password.

This configuration lets you log in as the root user with a private key. The main thing is to ensure that you have copied the public key to the system before restarting the SSH service.
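For reference, the directive accepts several values; a sketch of the relevant line for key-only root login:

```
# /etc/ssh/sshd_config
# PermitRootLogin accepts: yes, no, prohibit-password (key-based login only)
PermitRootLogin prohibit-password
```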

To apply the updated configuration, restart the service:

sudo systemctl restart sshd

Changing the default port

By default, the SSH server uses port 22. To increase security, you can set it to any other value. We recommend using ports from the upper range, from 50000 to 65000. It is also preferable to pick numbers in which all digits are different, for example, 56713.

Open the configuration file:

sudo vi /etc/ssh/sshd_config

Uncomment the line Port 22. Instead of 22, specify another number, for example, Port 56713. Save the changes and close the file.

To apply the configuration, restart the service:

sudo systemctl restart sshd

After a successful restart, verify that the connection is now on a different port:

ssh -p 56713 user@server_ip

Remember to restart the service after each change. Otherwise, SSH connections will follow the old rules.
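To avoid typing -p on every connection, you can record the custom port in the client's ~/.ssh/config; the alias, address, and username below are placeholders:

```
# ~/.ssh/config on the client
Host myserver
    HostName 203.0.113.10
    Port 56713
    User hostman
```

After saving the file, `ssh myserver` connects on port 56713 automatically.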

Configuring tunneling

Tunneling is a method of transmitting unencrypted traffic or data over an encrypted channel. In addition to file transfers, tunneling can also be used to access internal network services through firewalls and to create a VPN.

There are three types of tunneling (forwarding):

  • Local

  • Remote

  • Dynamic

To configure some of them, you will need to edit the SSH configuration file.

Local forwarding

Local forwarding forwards a port from the client computer to a remote computer. The connection is then redirected to another port on the target computer.

The SSH client checks for a connection on the given port. When it receives a connection request, it tunnels it with the specified port on the remote host. The host then connects to another target computer through the configured port.

Mostly, local forwarding is used to connect externally to a service from an internal network. For example, this is how you can configure access to a database. It is also used for remote file sharing.

The -L argument is used for local forwarding. For example:

ssh hostman@server.example -L 8080:server1.example:3000 

Now open a browser on the local computer. You can use localhost:8080 to access the remote application instead of accessing it at server1.example:3000.
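The same tunnel can be made persistent in the client's ~/.ssh/config (the alias is arbitrary; the host names are the ones from the example above):

```
# ~/.ssh/config on the client
Host app-tunnel
    HostName server.example
    User hostman
    LocalForward 8080 server1.example:3000
```

Running `ssh -N app-tunnel` then opens the tunnel without starting a remote shell.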

Remote redirection

Remote redirection (remote port forwarding) allows you to connect to a local computer from a remote computer. By default, the SSH server binds remotely forwarded ports to the loopback interface only, so outside hosts cannot reach them. To change this, you need to enable the GatewayPorts option in the SSH configuration file on the Ubuntu server.

Open the configuration file:

sudo vi /etc/ssh/sshd_config 

Set the GatewayPorts parameter to yes.

Save the changes and restart the service:

sudo systemctl restart sshd

Use the -R argument to configure forwarding. Example command:

ssh -R 8080:127.0.0.1:3000 -N -f user@remote.host

After running this command, the host will listen on port 8080 and redirect all traffic to port 3000, which is open on the local computer.

Remote redirection is mainly used to give someone from outside access to an internal service.

Dynamic forwarding

Local and remote forwarding methods allow you to tunnel and communicate with a single port. With dynamic forwarding, you can tunnel and communicate with multiple ports.

Dynamic tunneling creates a socket on the local computer that works like a SOCKS proxy server, listening on port 1080 by default. When an application connects to this port, the connection is forwarded to the remote machine and then on to its final destination through the dynamic port.

The -D argument is used to configure dynamic tunneling. Example command:

ssh -D 9090 -N -f user@remote.host

Once you have set up tunneling, you can configure your application to use it. For example, to add a proxy to the browser. You'll need to configure redirection separately for each application you want to tunnel traffic for.
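For example, the tunnel above can be declared in the client's ~/.ssh/config, and a command-line client such as curl can then be pointed at the resulting SOCKS proxy (the alias, host, and URL are placeholders):

```
# ~/.ssh/config on the client
Host socks-proxy
    HostName remote.host
    User user
    DynamicForward 9090
```

After running `ssh -N socks-proxy`, a command like `curl --socks5-hostname localhost:9090 http://internal.example/` sends its traffic through the tunnel.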

Disabling SSH

To disable the OpenSSH server, stop the service and prevent it from starting at boot by running the command:

sudo systemctl disable --now ssh

To start the service back up, run the command:

sudo systemctl enable --now ssh

The enable command in Ubuntu does not reinstall the software, so you don't have to reconfigure anything. It simply starts up the previously installed and configured service.

Conclusion

In this article, we have covered the basics of using SSH on an Ubuntu machine. Now you know how to install the necessary software to set up a secure connection, configure it, route the tunnel, and even disable the service when it is not in use.

Connecting via SSH in Ubuntu is a common task, so you'll definitely need this knowledge. If not in development and administration, then for personal purposes, such as establishing a secure connection between devices in a local network.


