How to Create and Optimize Docker Images
Hostman Team
Technical writer
Docker
22.01.2025
Reading time: 12 min

In today's environment, most companies actively use the Docker containerization system in their projects, especially when working with microservice applications. Docker allows you to quickly deploy any applications, whether monolithic or cloud-native. Despite the simplicity of working with Docker, it's important to understand some nuances of creating your own images. In this article, we will explore how to work with Docker images and optimize them using two different applications as examples.

Prerequisites

To work with the Docker containerization system, we will need:

  • A cloud server or a virtual machine with any pre-installed Linux distribution. We will be using Ubuntu 22.04.

  • Docker installed. See our installation guide

You can also use a pre-configured image with Docker. To do this, go to the Cloud servers section in your Hostman control panel, click Create server, and select Docker in the Marketplace tab.

Working with Docker Images

Docker images are created by other users and stored in registries—special repositories for images. Registries can be public or private. Public repositories are available to all users without requiring authentication. Private registries, however, can only be accessed by users with appropriate login credentials. Companies widely use private repositories to store their own images during software development.

By default, Docker uses the public registry Docker Hub, which any user can use to publish their own images or download images created by others. When a user runs a command such as docker run, the Docker daemon will, by default, contact its standard registry. If necessary, you can change the registry to another one.
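
For example, to pull an image from a registry other than Docker Hub, prefix the image name with the registry host (the host below is a placeholder):

docker pull registry.example.com/myteam/myapp:1.0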

To create custom Docker images, a Dockerfile is used—a text file containing instructions for building an image. These instructions use 18 specially reserved keywords. The most common types of instructions include the following:

  • FROM specifies the base image. Every image starts from a base image—typically a Linux distribution such as Ubuntu, Debian, Oracle Linux, or Alpine. There are also many images with pre-installed software, such as Nginx, Grafana, Prometheus, and MySQL; even these are ultimately built on top of some Linux distribution.

  • WORKDIR sets the working directory inside the image for the instructions that follow, creating it if it does not exist (similar to mkdir -p in Linux). It can be used multiple times in one Dockerfile.

  • COPY copies files and directories from the host system into the image. It is used to copy configuration files and application source code files.

  • ADD is similar to the COPY instruction, but in addition to copying files, ADD allows downloading files from remote sources and extracting .tar archives.

  • RUN executes commands inside the image at build time. With RUN, you can perform any action available in a Bash shell: creating files, installing packages, and so on.

  • CMD specifies the default command executed when the container starts; it can be overridden by arguments passed to docker run.
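
To see how these instructions fit together, here is a minimal annotated Dockerfile sketch (the file names are placeholders):

# base image
FROM ubuntu:22.04
# set (and create) the working directory for the following instructions
WORKDIR /opt/app
# copy a local archive; ADD extracts local .tar archives automatically
ADD app.tar.gz .
# copy a configuration file from the build context
COPY app.conf /etc/app.conf
# execute a command at build time
RUN apt update && apt -y install curl
# default command executed when the container starts
CMD ["bash"]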

Example: Creating an Image

As an example, we will create an image with a simple Python program.

  1. Create a project directory and move into it:

mkdir python-calculator && cd python-calculator
  2. Create a file console_calculator.py with the following content:

print("*" * 10, "Calculator", "*" * 10)
print("To exit from program type q")

try:
    while True:
        arithmetic_operators = input("Choose arithmetic operation (+ - * /):\n")
        if arithmetic_operators == "q":
            break
        if arithmetic_operators in ("+", "-", "*", "/"):
            first_number = float(input("First number is:\n"))
            second_number = float(input("Second number is:\n"))
            print("The result is:")
            if arithmetic_operators == "+":
                print("%.2f" % (first_number + second_number))
            elif arithmetic_operators == "-":
                print("%.2f" % (first_number - second_number))
            elif arithmetic_operators == "*":
                print("%.2f" % (first_number * second_number))
            elif arithmetic_operators == "/":
                if second_number != 0:
                    print("%.2f" % (first_number / second_number))
                else:
                    print("You can't divide by zero!")
        else:
            print("Invalid symbol!")

except (KeyboardInterrupt, EOFError) as e:
    print(e)
  3. Create a new Dockerfile with the following content:
FROM python:3.10-alpine

WORKDIR /app

COPY console_calculator.py .

CMD ["python3","console_calculator.py"]

For the base image, we use python:3.10-alpine—a Python 3.10 image built on the lightweight Alpine Linux distribution. We will discuss Alpine in more detail in the next chapter.

Inside the image, WORKDIR creates the /app directory and sets it as the working directory where the project file will be located.

The container will start by running python3 console_calculator.py.

  4. To build the image, use the docker build command. Assign the image a tag—a unique identifier—with the -t flag:

docker build -t python-console-calculator:01 .

The period at the end of the command indicates that the Dockerfile is located in the current directory.

You can display the list of created images using:

docker images

To launch the container, use: 

docker run --rm -it python-console-calculator:01

Let's test the functionality of the program by performing a few simple arithmetic operations.

To exit the program, type q and press Enter.

Since we specified the --rm flag when starting the container, the container will be automatically removed.

You can also run the container in daemon mode, i.e., in the background. To do this, include the -d flag when starting the container:

docker run -dit python-console-calculator:01

After that, the container will appear in the list of running containers, which you can view with:

docker ps

When the container is running in the background, you access the script with docker exec, which executes a command inside a running container: first start a shell (bash or sh), then run the script manually.

To do this, use the docker exec command, passing the sh command as an argument to open the shell inside the container (where 4f1b8b26c607 is the unique container ID displayed in the CONTAINER ID column of the docker ps output):

docker exec -it 4f1b8b26c607 sh

Then, run the script manually:

python console_calculator.py

To remove a running container, you need to use the docker rm command and pass the container's ID or name. You also need to use the -f flag, which will force the removal of a running container:

docker rm -f 186e8f43ca60
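
Alternatively, stop the container gracefully first, after which it can be removed without the -f flag:

docker stop 186e8f43ca60
docker rm 186e8f43ca60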

Optimizing Docker Images

When creating Docker images, one main rule applies: finished images should be compact and occupy as little space as possible. The smaller the image, the faster it builds—which can be decisive in CI/CD pipelines or when minimizing time to market.

Proper Selection of the Base Image

As the first recommendation, it's important to choose the base image wisely. For example, instead of using various Linux distribution images like Ubuntu, Oracle Linux, Rocky Linux, and many others, you can directly choose an image that already comes with the required programming language, framework, or other necessary technology. Examples of such images include:

  • node for working with the Node.js platform
  • nginx, a pre-built image with the Nginx web server
  • ibmjava for working with the Java programming language
  • postgres for working with the PostgreSQL database
  • redis for working with the Redis NoSQL database

Using a specific image instead of an operating system image has the following advantages:

  • There is no need to install the main tool (programming language, framework, etc.), so the image won't be "cluttered" with unnecessary packages, preventing an increase in size.

  • Images with pre-installed software (such as Nginx, Redis, PostgreSQL, or Grafana) are usually created and maintained by the developers of that software. This means users generally do not need extra configuration to run the program (except when integrating it with their own services).

Let's consider this recommendation with a practical example. We will use a simple Python program that prints "Hello from Python!". First, we will build an image using debian as the base image.

  1. Create and navigate to the directory where the project files will be stored:

mkdir dockerfile-python && cd dockerfile-python
  2. Create the test.py file with the following content:

print("Hello from Python!")
  3. Next, create a Dockerfile with the following content:

FROM debian:latest

COPY test.py .

RUN apt update 
RUN apt -y install python3

CMD ["python3", "test.py"]

To run Python programs, you also need to install the Python interpreter.

  4. Then, build the image:

docker build -t python-debian:01 .

Let’s check the Docker image size: 

docker images

The image takes up 185 MB, which is quite a lot for an application that just prints a single line to the terminal.

Now let's pick a more suitable base image built on the Alpine distribution.

Many base images also come in special slim and alpine variants, which are even smaller. Take the official Python 3.10 image: python:3.10 takes up about 1 GB, the slim version is much smaller at 127 MB, and the alpine image is only around 50 MB.
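
You can reproduce this comparison yourself by pulling the tags and listing them:

docker pull python:3.10
docker pull python:3.10-slim
docker pull python:3.10-alpine
docker images python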

Slim images contain only the minimum set of packages required to run a finished application; most other packages and libraries are left out. Slim variants are built from regular Linux distributions (such as Ubuntu or Debian) as well as from Alpine.

Alpine images use Alpine Linux as the operating system—a lightweight distribution that takes up about 5 MB of disk space (without the kernel). It differs from mainstream distributions in that it uses the apk package manager, ships OpenRC instead of systemd as its init system, and comes with far fewer pre-installed programs.

When using both slim and Alpine images, it is essential to thoroughly test your application, as the required packages or libraries might be missing in such distributions.
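
Alpine's differences can break builds in subtle ways: for example, Python packages with C extensions often fail to install because Alpine ships musl instead of glibc and includes no compiler. A common workaround (a sketch; the exact packages depend on your project) is to install build dependencies explicitly:

RUN apk add --no-cache gcc musl-dev libffi-dev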

Now, let's test our application using the Python image with Alpine.

  1. Return to the previously used Dockerfile and replace the debian base image with python:alpine3.19. Also remove the two RUN instructions, since there is no longer any need to install the Python interpreter:

FROM python:alpine3.19

COPY test.py .

CMD ["python3", "test.py"]
  2. Build the image under a new tag (the name here is arbitrary):

docker build -t python-alpine:01 .

  3. List the Docker images and compare the new size with the previous one:

docker images

Since we chose the correct base image with Python already preinstalled, the image size was reduced from 185 MB to 43.8 MB.

Reducing the Number of Layers

Docker images are built from layers. A layer represents a change to the image's file system—copying or creating files and directories, or installing packages. It is recommended to keep the number of layers as small as possible. Among Dockerfile instructions, RUN, COPY, and ADD create layers that add to the final image size (FROM contributes the base image's existing layers). The remaining instructions produce only intermediate metadata and do not directly increase the image size.
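
You can inspect an image's layers and their individual sizes with docker history, for example for the image built earlier:

docker history python-debian:01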

Let's take the previously used Dockerfile and modify it according to new requirements. Suppose we need to install additional packages using the apt package manager:

FROM debian:latest

COPY test.py .

RUN apt update 
RUN apt -y install python3 htop net-tools mc gcc

CMD ["python3", "test.py"]

Build the image:

docker build -t python-non-optimize:01 .

Check the size of the created Docker image:

docker images

Image14

The image size is now 570 MB. We can reduce it by using fewer layers: the Dockerfile above contains two RUN instructions, which create two layers. Let's combine the apt update and apt install commands with the && operator, which in Bash runs the next command only if the previous one succeeded.

Another important point is removing the cache files that package managers leave behind after installation (this applies to apt as well as yum/dnf and apk). For apt-based distributions, the downloaded package lists are stored in the /var/lib/apt/lists directory. We therefore delete everything in that directory inside the same RUN instruction, so no extra layer is created:

FROM debian:latest

COPY test.py .

RUN apt update && apt -y install python3 htop net-tools mc gcc && rm -rf /var/lib/apt/lists/*

CMD ["python3", "test.py"]

Build the image:

docker build -t python-optimize:03 .

And check the size:

The image size was reduced from the initial 570 MB to the current 551 MB.

Using Multi-Stage Builds

Another significant way to reduce the size of the created image is by using multi-stage builds. These builds, which involve two or more base images, allow us to separate the build environment from the runtime environment, effectively removing unnecessary files and dependencies from the final image. These unnecessary files might include libraries or development dependencies that are only needed during the build process.

Let’s explore this approach with a practical example using the Node.js platform. Node.js should be installed beforehand, following our guide.
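
You can verify that Node.js and npm are available before proceeding:

node -v
npm -v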

We will first build the application image without multi-stage builds to evaluate the difference in size.

  1. Create a directory for the project:

mkdir node-app && cd node-app
  2. Initialize a new Node.js application:

npm init -y

  3. Install the express library:
npm install express
  4. Create an index.js file with the following content:
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello, World!');
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
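
Note that npm init -y does not add a start script, and npm start falls back to node server.js when none is defined—which would fail here. For the CMD ["npm", "start"] instruction used below to work, add a start script to the scripts section of package.json:

"scripts": {
  "start": "node index.js"
}
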
  5. Create a Dockerfile with this content:
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY index.js .
EXPOSE 3000
CMD ["npm", "start"]
  6. Build the image:
docker build -t node-app:01 .

  7. Check the size:
docker images

  8. The image size was 124 MB. Now let's rewrite the Dockerfile as a multi-stage build with two base images, transforming it into the following form:
FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY index.js .

FROM gcr.io/distroless/base-debian10 AS production
WORKDIR /app
COPY --from=builder /app .
EXPOSE 3000
CMD ["npm", "start"]
  9. Build the image:
docker build -t node-app:02 .

  10. List the Docker images and check the size:
docker images

As a result, the image size was drastically reduced—from 124 MB to 21.5 MB.
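
One caveat: gcr.io/distroless/base-debian10 ships with no Node.js runtime and no npm, so while the image above builds and demonstrates the size savings, the container will fail at startup. A runnable variant is sketched below, assuming the distroless Node.js image (it is larger than the base image but still free of npm and build tooling):

FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
# install only production dependencies in the build stage
RUN npm install --production
COPY index.js .

FROM gcr.io/distroless/nodejs:14 AS production
WORKDIR /app
COPY --from=builder /app .
EXPOSE 3000
# the distroless nodejs entrypoint is the node binary, so pass the script directly
CMD ["index.js"]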

Conclusion

In this article, we created our own Docker image and explored various ways to run it. We also paid significant attention to optimizing Docker images. Optimization can greatly reduce image size, which speeds up image builds, pulls, and deployments.
