Deploying a Node.js Application Using Docker

Hostman Team
Technical writer

Have you ever tried to deploy your application somewhere outside of your local machine? Running a developed product (for example, a Node.js server) on another computer is sometimes a difficult task.

Software dependencies, environment variables, configuration files—all of these need to be configured to get the simplest application running. And doing it manually is a routine and unreliable job. We need automation.

Many modern technologies strive to solve the problem of different environments. Containerization is one of these options. And Docker is the most commonly used tool here.

Prerequisites

Before you start following this guide, make sure you already have:

  • Node.js and npm installed on your local machine;

  • Docker installed and running.

This tutorial also assumes that the reader has some experience with the Node.js platform and basic familiarity with Docker.

Why Docker?

Docker allows you to package your application, environment, and dependencies into a container.

First, we create an application image: code, libraries, configuration files, environment variables, and the runtime environment. Everything the application needs to build and run is packed inside the image.

A container directly refers to an instance of this image. If we draw an analogy from programming languages, then an image is a class, and a container is an instance of this class.

Unlike a virtual machine, a container is just an operating system process.

Essentially, Docker creates an abstraction over low-level operating system tools, allowing one or more containerized processes to run in isolated environments that share the host's Linux kernel.

Despite the fact that Docker is by no means a panacea for deployment automation, it solves many important problems:

  • Deploys applications quickly

  • Provides portability between machines

  • Provides version control for images

  • Allows you to build a flexible architecture using components

  • Reduces maintenance overhead due to its compact size

Step 1. Create a Node.js Application

Configuration and dependencies

First, you need to create a directory for the application source files. Let's call it node_app:

mkdir node_app

Now, move to this directory:

cd node_app

As with any Node.js project, we will need a configuration file. Let's create and open it. On Linux, this can be done via nano:

nano package.json

Our project's details are standard:

{
   "name": "node-app-by-hostman",
   "description": "node with docker",
   "version": "1.0.0",
   "main": "hostman.js",
   "keywords": [
     "nodejs"
     "express",
     "docker"
   ],
   "dependencies": {
     "express": "^4.16.4"
   },
   "scripts": {
     "start": "node hostman.js"
   }
}

This file contains general information about the project, its author, and the license. It is used by the npm package manager, which installs dependencies and publishes projects to the official registry.

The most important parameters in this package.json are:

  • The main entry point for the application is the hostman.js file.

  • In the dependencies, we specify the Express framework. 

You can now save and close the file. All that remains is to install the dependencies:

npm install
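
Running npm install downloads Express into node_modules and also generates package-lock.json, which the Dockerfile will copy later (assuming npm 5 or newer, which creates the lockfile automatically). You can confirm that Express was installed with:

npm ls express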

Application source code

For our example, we will create a simple Node.js application that displays a static web page: index.html.

The file structure is like this:

  • hostman.js is the entry point that processes requests and performs routing;

  • index.html is the web page markup.

It is worth noting that, to keep the example simple, we will write the CSS styles directly in the HTML. Of course, in real projects the visual description of a web page lives in separate files such as style.css, often written with preprocessors like Sass (SCSS) or Less.

Using nano, we will create and open hostman.js:

nano hostman.js

It will only contain the bare minimum code to run the web server:

const express = require('express'); // requiring the Express framework (module)
const app = express(); // creating an application instance
const router = express.Router(); // creating a router instance
const path = __dirname; // path to the working directory
const port = 8080; // server port
// print HTTP METHOD to the console on every request
router.use(function (req,res,next) {
   console.log('/' + req.method);
   next();
});
// respond to the main page request with the index.html file
router.get('/', function(req,res){
   res.sendFile(path + '/index.html');
});
// connect the router to the application
app.use('/', router);
// start listening on port 8080, thereby starting the http server
app.listen(port, function () {
   console.log('Listening on port 8080')
})

The official Express documentation provides a more detailed description of the framework's functionality and examples of its use.

The HTML markup file index.html looks pretty trivial:

<!DOCTYPE html>
<html lang="en">
<head>
     <title>NodeJS app with Docker by Hostman</title>
     <meta charset="utf-8">
     <meta name="viewport" content="width=device-width, initial-scale=1">
     <style>
     body
     {
         height: 100vh;
         display: flex;
         align-items: center;
         justify-content: center;
     }
     body > div
     {
         padding: 12px;
         color: white;
         font-weight: bold;
         background: black;
     }
     </style>
</head>
<body>
     <div>Hello World from Hostman!</div>
</body>
</html>

To make sure everything displays correctly, open the index.html file in your browser. You should see "Hello World from Hostman!" as white text on a black background in the center of the page.
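
You can also start the server itself to check that Express serves the page correctly (this assumes Node.js is installed locally):

node hostman.js

Then open http://localhost:8080 in your browser, or request the page with curl:

curl http://localhost:8080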

At this point, our improvised application can be considered complete. Now, we can move on to dockerization itself.

Step 2. Create a Dockerfile

A Dockerfile is a text document that contains instructions for building a Docker image.

All instructions are executed exactly in the order in which they appear in this file. The format is simple: the instruction name followed by its arguments, somewhat like function calls in programming languages. Comments start with #.

# comment
INSTRUCTION arguments

Although instruction names are not case-sensitive, they are usually written in capital letters so they do not visually blend with the arguments.

Let's create and open a Dockerfile, after which we can move on to editing it:

nano Dockerfile

Installing the Node.js image

Docker executes the instructions in the Dockerfile one after another each time the image is built.

Therefore, the first thing the image needs is Node.js itself. Add the following instruction to the Dockerfile:

FROM node:19-alpine

In this case, the FROM instruction pulls the official Node.js 19 image based on Alpine Linux and uses it as the base for our own image.

By the way, Docker has an official registry, Docker Hub, which stores container images from developers all over the world. Of course, it also hosts Node.js images.

If you look at the official Node.js image repository on GitHub, you will notice a similar Dockerfile that sets up the environment for running Node on the user's machine.

To make a very simple analogy, a Dockerfile in Docker plays a role similar to package.json in npm. It sets up the project and recursively pulls in all the dependencies: a higher-level Dockerfile builds FROM an image that was itself produced by a lower-level Dockerfile, and so on.

Setting the working directory

The Docker image (which will later become a container) needs to know in which directory to execute the subsequent instructions that operate on files and folders, such as RUN, CMD, ENTRYPOINT, COPY, and ADD.

To do this, there is a WORKDIR instruction, which passes the directory path as an argument:

WORKDIR /app

Copying configuration files

Using the COPY instruction, you need to copy the package.json and package-lock.json files from the project directory on the local computer into the container's file system, into the working directory specified earlier:

COPY package.json package-lock.json ./

Because the Dockerfile is located in the project directory, all the necessary files are available in the build context. However, the build context is not the image itself: with the COPY instruction we tell Docker exactly which files should be transferred into the image's file system.

Installing NPM dependencies

Since the working directory now contains package.json and package-lock.json, you can install the required dependencies from the npm registry.

For these purposes, you usually run the npm install command. For Docker to do this automatically, you need to specify the RUN instruction:

RUN npm install

Docker will execute this command in the previously specified /app directory.

Note that the RUN instruction executes commands while the image is being built, not when the container starts. Commands can also be written in exec form, as a JSON array of the executable and its arguments:

RUN ["executable", "param1", "param2"]

Copying other files

Once all dependencies are installed, you can copy all the remaining project files to the /app directory. To do this, use the same COPY command, but specify the entire directory rather than specific files:

COPY . ./

Launching the application

Now you can specify the command that starts your Node.js app. Use the CMD instruction for this. Unlike the RUN instruction, CMD executes the specified command when the container starts, not during the image build:

CMD ["npm", "start"]

Don't forget that you already have the start command defined in package.json:

   "scripts": {
    "start": "node hostman.js"
  }

Final image configuration in Dockerfile

So, after we outlined the entire sequence of actions for Docker, the complete Dockerfile code should look like this:

# install the official Node.js image
FROM node:19-alpine
# specify the working directory
WORKDIR /app
# copy the main configuration files to the working directory
COPY package.json package-lock.json ./
# install the specified NPM dependencies at image build time
RUN npm install
# after installation, copy all project files to the working directory
COPY . ./
# run the main script when the container starts
CMD ["npm", "start"]

That's it! The minimum set of instructions is specified. Now you can try to create an image and run a container based on it.
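
Optionally, you can also document the port the Express server listens on with an EXPOSE instruction. It does not publish the port by itself (that is done with the -p flag when the container is started), but it makes the image self-describing:

EXPOSE 8080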

About the .dockerignore file

.dockerignore is another configuration file. It lists the files and directories that should be excluded when building the Docker image. Your project folder probably contains many files that are important during development but have nothing to do with the image we are creating.

In fact, .dockerignore is much more important than it might seem at first glance. It prevents files that are too large or sensitive from getting into the image. It also limits the effect of ADD or COPY commands used in the Dockerfile.

For example, every time you run the docker build command, Docker compares the files in the build context against its layer cache. If something has changed, the affected layers are rebuilt.

However, if some files in your directory are updated quite often, but are not needed to build the image, they should be excluded so as not to perform a pointless rebuild.

Creating and editing .dockerignore

The .dockerignore file should be created in the root directory of your project. Inside the file, on each new line, indicate the names of files and directories to be excluded.

# this is a comment
README.md

Like in a Dockerfile, the # symbol marks a comment. 

There are also ways to specify files more generally:

*/folder

In this case, any file or directory named folder located exactly one level below the root of the build context will be excluded from the build.

You can also ignore directories and files recursively, at the root and at all levels below:

**/folder

At the same time, using !, you can make an exception to an exclusion rule and keep a specific file in the build context.

In this case, we will exclude all .md files except README.md:

*.md
!README.md
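
For this project, a minimal .dockerignore (a suggested example, adjust it to your needs) might exclude locally installed dependencies and Docker's own files, since node_modules is recreated inside the image by RUN npm install:

node_modules
npm-debug.log
.git
Dockerfile
.dockerignore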

Step 3. Build the Docker Image

The Docker image is created based on the description in the Dockerfile. The command for this should be run from the root of the project, where the Dockerfile is located:

docker build . -t nodeproject

The -t flag sets the name (tag) of the new image. The image can later be referenced as nodeproject:latest.

After this, you can check that the image was actually created:

docker images nodeproject:latest

This command displays information about a specific Docker image:

REPOSITORY    TAG      IMAGE ID        CREATED         SIZE
nodeproject   latest   gk8orf8fre489   3 minutes ago   15MB

If you omit the image name, the command will display information about all images on the computer.

Step 4. Start the Docker Container

Each created image can be run as a container:

docker run nodeproject

When run as a container, a Docker image becomes an operating system process whose file system, network, and process tree are isolated from the host computer.

All console output from your Node.js application will be printed to the same terminal in which the container was launched. However, binding a container process to a specific terminal instance is not the best solution.

A better practice is to run the container in the background using the --detach (or -d) flag.

docker run -d nodeproject

Docker will start the container by outputting a special identifier in the terminal. It can be used to access the container in subsequent commands:

9341f8b2532b121e9dea8aeb55563c24302df96710c411c699a612e794e89ee4
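
Keep in mind that the run commands above do not map the container's port to the host, so the Express server is not reachable at localhost yet. To publish container port 8080 on the host, use the -p flag and then check the response:

docker run -d -p 8080:8080 nodeproject

curl http://localhost:8080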

Before starting a container, it is worth checking whether it is already running.

For this, Docker has a command that displays a list of all containers running on the computer:

docker ps

This way you can see the container ID, the image that the container is running on, the command used to start the container, the time it was created, the current status, the ports exposed by the container, and the name of the container itself. By default, Docker assigns a random name to the container, but this can be changed using the --name flag.
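
If you only need some of these columns, docker ps also accepts a Go-template format string. For example, to show only the ID, name, and status of each container:

docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"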

Please note that we are talking about the name of the container, not the image. Let's say you run a container called myname:

docker run -d --name myname nodeproject

Now you can stop it by specifying a name:

docker stop myname

And also remove:

docker rm myname
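
If the image itself is no longer needed, you can remove it too (after removing any containers based on it):

docker rmi nodeproject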

Logs of a background container

A container running in the background does not print its output to your terminal. However, the output is still collected and can be viewed like this:

docker logs myname

Now everything that your application managed to output to the console will be printed in the terminal.
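
To keep following the log output in real time, add the -f (follow) flag:

docker logs -f myname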

Conclusion

This article very briefly explains what Docker is, how it works, and why it can be useful when developing Node applications.

By understanding how to properly format a Dockerfile and deploy a Node.js application using Docker, you can automate the process of deploying software to end-user machines.

Such solutions are most relevant in DevOps development, in particular when building CI/CD pipelines.
