How to Improve Docker Containers Security: Best Practices

Hostman Team
Technical writer
Docker
24.11.2023
Reading time: 5 min

Developers widely use Docker containers: isolated environments that contain everything needed to launch an application quickly. Working with containers speeds up application development and makes developers more efficient.

One of the pressing problems for developers using containers is Docker security. Containers present a standardized attack surface, and misconfiguring them opens the door to valuable information for attackers. Let's take a look at how to secure containers with time-tested best practices.

Security basics when working with Docker containers

Container security depends on the operating system, the software components the developer interacts with, and the configuration settings. A properly built and properly deployed image keeps Docker secure and lets you enjoy all the benefits of containers.

Another important tip is to update the software regularly. Updates frequently include security fixes and hardened defaults, so use only up-to-date versions.

Recommendations for building the image

Always use verified images from official sources. A minimal base distribution such as Alpine Linux is a good choice. These simple guidelines reduce the attack surface and the likelihood of supply chain attacks.

Many developers wonder whether it is better to pin a fixed tag or use latest. Pinning a specific version protects you from unexpected upstream changes that could break your containers. However, it also means you will not automatically pick up security fixes when new versions are released, which can weaken security over time. If you pin a specific version, choose the most stable one and update it deliberately.
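For example, a pinned base image in a Dockerfile might look like the sketch below (the tag is illustrative, and the digest is a placeholder you would replace with the real value reported when you pull the image):

FROM alpine:3.19
# For fully reproducible builds, you can additionally pin the digest:
# FROM alpine:3.19@sha256:<digest>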

Do not run containers as root

Processes in containers run as root by default. To improve security, run them with fewer privileges. To do this, pass the -u flag followed by an arbitrary user ID that does not exist in the container. It looks like this:

docker run -u 3000 <image>

The second way is to create a non-root user directly in the Dockerfile:

FROM <base image>
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
...<continued Dockerfile>...

These settings prevent an attacker who compromises the container from acting as root inside it.

Privileges, capabilities, and configuration

You shouldn't run privileged containers, and it is also advisable to prevent a process from acquiring new privileges while the container is running. To do this, set the following option:

--security-opt=no-new-privileges

For security reasons, you should not rely on the default set of capabilities. It is better to drop all of them and add back only the ones your application actually needs.
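As a sketch of how these options fit together (the capability shown is only an example; add back only what your workload actually requires):

docker run --security-opt=no-new-privileges --cap-drop=ALL --cap-add=NET_BIND_SERVICE <image>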

Control groups (cgroups) track and limit a container's resource usage, such as CPU, memory, and I/O. Each container is automatically allocated its own cgroup. To avoid weakening this isolation, do not specify the --cgroup-parent attribute unless you have a specific reason to.

To ensure the security of Docker containers, limit the resources they can consume. To do this, set parameters such as the following (a combined example follows the list):

--memory="400m"
--memory-swap="1g"
--cpus=0.5
--restart=on-failure:5
--ulimit nofile=5
--ulimit nproc=5
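Combined into a single command, a resource-limited start might look like this (the values simply mirror the list above; in practice, nofile and nproc usually need to be considerably higher for real applications):

docker run --memory="400m" --memory-swap="1g" --cpus=0.5 --restart=on-failure:5 --ulimit nofile=5 --ulimit nproc=5 <image>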

Data storage and file system

A container should not need to modify its own root filesystem at runtime. Run containers in read-only mode:

docker run --read-only <image>

Implement long-term data storage properly. You can either use volumes or mount host directories. Whichever you choose, mount the data read-only wherever possible to prevent unauthorized modification.
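For instance, a read-only bind mount can be added like this (the host path and mount point are placeholders):

docker run --read-only -v /opt/app/config:/app/config:ro <image>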

If you are using temporary storage for files, set the options:

docker run --read-only --tmpfs /tmp:rw,noexec,nosuid <image>

Network parameters

By default, Docker creates the docker0 bridge interface and attaches all containers to it. We do not recommend using this default bridge. To disable it, start the Docker daemon with the --bridge=none parameter. Containers will then have no network connectivity until you explicitly attach them to a network you create.

It is better to create separate networks for connections:

docker network create <network_name>
docker run --network=<network_name> <image>
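As a usage sketch (the network and container names here are arbitrary), two services that should only talk to each other can be placed on their own network:

docker network create backend-net
docker run -d --name api --network=backend-net <api_image>
docker run -d --name db --network=backend-net <db_image>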

For security purposes, do not use the --net=host parameter: it disables network isolation and gives the container direct access to the host's network interfaces.

When working with containers, you should systematically monitor network activity. This way, you can detect anomalous activity in time and prevent malicious attacks.

Use only trusted registries

Using the official Docker registry (Docker Hub) is a safe solution. You can also run your own registry on your own host. Placing the registry behind a firewall adds another layer of protection.

Regular scanning

Don't neglect vulnerability scanning. You can use free solutions or more fully featured paid software. Regular scanning will allow you to detect problems quickly and avoid serious consequences.
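As one example of a free scanner (Trivy is used here; other tools work similarly), scanning an image from the command line looks like this:

trivy image <image>:<tag>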

Do not expose the Docker UNIX socket

This socket is the entry point to the Docker API. Mounting /var/run/docker.sock into a container grants it unrestricted root-level access to the host. Make sure containers do not get access to it; this is a critical security setting.
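To audit whether any running container already has the socket mounted, a quick check along these lines can help (the output format is simplified):

docker ps -q | xargs docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep docker.sock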

Do not include secrets and credentials

Any user who can access an image can read secrets written into the Dockerfile, because they are stored in the image layers. To prevent this, use Docker BuildKit to pass secrets at build time via the --secret option on the command line.
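A minimal sketch of this approach (the secret id, file name, and image tag are illustrative) could look like this. The secret is mounted only for the duration of the RUN instruction and is not written into any layer:

# syntax=docker/dockerfile:1
FROM alpine:3.19
RUN --mount=type=secret,id=api_token \
    API_TOKEN=$(cat /run/secrets/api_token) && \
    echo "use the token during the build here"

Build the image and pass the secret from a local file:

docker build --secret id=api_token,src=./api_token.txt -t myapp .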

Now you know how to protect Docker from intruders. Following these simple rules will allow you to avoid serious problems and keep your information safe.


