
Deploying a Node.js Application Using Docker

Hostman Team
Technical writer
Docker Node.js
20.05.2024
Reading time: 12 min

Have you ever tried to deploy your application somewhere outside of your local machine? Running a developed product (for example, a Node.js server) on another computer is sometimes a difficult task.

Software dependencies, environment variables, configuration files: all of these need to be configured to get even the simplest application running, and doing it manually is tedious and error-prone. We need automation.

Many modern technologies try to solve the problem of differing environments. Containerization is one of them, and Docker is the most widely used containerization tool.

Prerequisites

Before you start following this guide, make sure that Node.js, npm, and Docker are already installed on your machine.

This tutorial also assumes that the reader has some experience with the Node.js platform and is familiar with the basics of Docker.
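
A quick way to confirm that the tools are available (the exact version numbers will differ depending on your setup):

node -v
npm -v
docker --version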

Why Docker?

Docker allows you to package your application, environment, and dependencies into a container.

First, we create an application image: code, libraries, configuration files, environment variables, and environment. Everything inside the image is needed to build and run the application.

A container directly refers to an instance of this image. If we draw an analogy from programming languages, then an image is a class, and a container is an instance of this class.

Unlike a virtual machine, a container is just an operating system process.

Essentially, Docker creates an abstraction over low-level operating system tools, allowing one or more containerized processes to run inside virtualized instances of the Linux operating system.

Despite the fact that Docker is by no means a panacea for deployment automation, it solves many important problems:

  • Deploys applications quickly

  • Provides portability between machines

  • Provides version control for images

  • Allows you to build a flexible architecture using components

  • Reduces maintenance overhead due to its compact size

Step 1. Create a Node.js Application

Configuration and dependencies

First, you need to create a directory for the application source files. Let's call it node_app:

mkdir node_app

Now, move to this directory:

cd node_app

As with any Node.js project, we will need a configuration file. Let's create and open it. On Linux, this can be done via nano:

nano package.json

Our project's details are standard:

{
   "name": "node-app-by-hostman",
   "description": "node with docker",
   "version": "1.0.0",
   "main": "hostman.js",
   "keywords": [
     "nodejs"
     "express",
     "docker"
   ],
   "dependencies": {
     "express": "^4.16.4"
   },
   "scripts": {
     "start": "node hostman.js"
   }
}

This file contains general information about the project and can also describe the author, license, and other metadata. It is used by the npm package manager, which is responsible for installing dependencies and publishing packages to the official registry.

The most important parameters in this package.json are:

  • The main entry point for the application is the hostman.js file.

  • In the dependencies, we specify the Express framework. 

You can now save and close the file. All that remains is to install the dependencies (this will also generate the package-lock.json file, which we will need later when building the Docker image):

npm install

Application source code

For our example, we will create a simple Node.js application that displays a static web page: index.html.

The file structure is like this:

  • hostman.js is the entry point that processes requests and performs routing;

  • index.html is the web page markup.

It is worth noting that, to simplify the example, we will write the CSS styles directly in the HTML. Of course, in real projects the visual description of a web page lives in separate files such as style.css, often written with preprocessors like Sass (SCSS) or Less.

Using nano, we will create and open hostman.js:

nano hostman.js

It will only contain the bare minimum code to run the web server:

const express = require('express'); // requiring the Express framework (module)
const app = express(); // creating an application instance
const router = express.Router(); // creating a router instance
const path = __dirname; // path to the working directory
const port = 8080; // server port
// print HTTP METHOD to the console on every request
router.use(function (req,res,next) {
   console.log('/' + req.method);
   next();
});
// respond to the main page request with the index.html file
router.get('/', function(req,res){
   res.sendFile(path + '/index.html');
});
// connect the router to the application
app.use('/', router);
// start listening on port 8080, thereby starting the http server
app.listen(port, function () {
   console.log('Listening on port 8080')
})

The official Express documentation provides a more detailed description of the framework's functionality and examples of its use.

The HTML markup file index.html looks pretty trivial:

<!DOCTYPE html>
<html lang="en">
<head>
     <title>NodeJS app with Docker by Hostman</title>
     <meta charset="utf-8">
     <meta name="viewport" content="width=device-width, initial-scale=1">
     <style>
          body
          {
              height: 100vh;
              display: flex;
              align-items: center;
              justify-content: center;
          }
          body > div
          {
              padding: 12px;
              color: white;
              font-weight: bold;
              background: black;
          }
     </style>
</head>
<body>
     <div>Hello World from Hostman!</div>
</body>
</html>

To make sure everything displays correctly, open the index.html file in your browser. You should see "Hello World from Hostman!" in white text on a black background in the center of the page.
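
Since Node.js and the dependencies are already installed, you can also sanity-check the Express server itself before containerizing it. From the node_app directory, run:

npm start

Then open http://localhost:8080 in your browser; the same page should be served by Express. Stop the server with Ctrl+C when you are done.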

At this point, our improvised application can be considered complete. Now, we can move on to dockerization itself.

Step 2. Create a Dockerfile

A Dockerfile is a text document that contains instructions for building a Docker image.

All instructions are executed in exactly the order in which they are written in this file. The format is simple: the instruction name followed by its arguments, somewhat similar to calling a function in a programming language. Comments start with #.

# comment
INSTRUCTION arguments

Although instruction names are not case-sensitive, they are usually written in capital letters so they do not visually blend with the arguments.

Let's create and open a Dockerfile, after which we can move on to editing it:

nano Dockerfile

Installing the Node.js image

Docker executes the instructions in the Dockerfile sequentially every time the image is built, which happens whenever your Node.js application is deployed from source.

Therefore, the first thing the image needs is Node itself. Add the following instruction to the Dockerfile:

FROM node:19-alpine

In this case, the FROM instruction pulls the official Node.js 19 image based on Alpine Linux.

By the way, Docker has an official registry, Docker Hub, that stores container images from developers all over the world. Of course, it also hosts Node.js images.

If you look at the source of the official Node.js image on GitHub, you will notice its own Dockerfile that sets up the environment for running Node on the user's machine.

To make a very simple analogy, a Dockerfile in Docker is almost the same as package.json in NPM. It sets up the project and recursively "drags" all the dependencies: a higher-level Dockerfile installs an image with a lower-level Dockerfile, and so on.

Setting the working directory

The Docker image (which will later turn into a container) needs to know in which directory to run subsequent instructions that operate on files and folders, such as RUN, CMD, ENTRYPOINT, COPY, and ADD.

For this there is the WORKDIR instruction, which takes the directory path as an argument:

WORKDIR /app

Copying configuration files

Using the COPY instruction, copy the package.json and package-lock.json files from the project directory on your local computer into the directory of the container's file system that we specified earlier:

COPY package.json package-lock.json ./

Because the Dockerfile is located in the project directory, the build context already contains all the necessary files. However, nothing gets into the image automatically: with the COPY instruction we tell Docker which specific files need to be transferred into the image's file system.

Installing NPM dependencies

Since the working directory /app now contains package.json and package-lock.json, you can install the required dependencies from the npm registry.

For these purposes, you usually run the npm install command. For Docker to do this automatically, you need to specify the RUN instruction:

RUN npm install

Docker will execute this command in the previously specified /app directory.

Note that the RUN instruction executes commands while the image is being built, not when the container starts. Besides the shell form used above, RUN also has an exec form, where the array contains a single executable followed by its arguments:

RUN ["npm", "install"]

Copying other files

Once all dependencies are installed, you can copy all the remaining project files to the /app directory. To do this, use the same COPY command, but specify the entire directory rather than specific files:

COPY . ./

Launching the application

Now you can enter the command that starts your Node.js app. Use the CMD instruction for this. Unlike the RUN instruction, CMD executes the specified command when the container starts, not during the image build:

CMD npm start

Don't forget that you already have the start command defined in package.json:

   "scripts": {
    "start": "node hostman.js"
  }
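
If you prefer, CMD can also be written in exec form, which runs the command without wrapping it in a shell:

CMD ["npm", "start"]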

Final image configuration in Dockerfile

So, after we outlined the entire sequence of actions for Docker, the complete Dockerfile code should look like this:

# install the official Node.js image
FROM node:19-alpine
# specify the working directory
WORKDIR /app
# copy the main configuration files to the working directory
COPY package.json package-lock.json ./
# install the specified npm dependencies at image build time
RUN npm install
# after installation, copy all project files to the working directory
COPY . ./
# run the main script when the container starts
CMD npm start

That's it! The minimum set of instructions is specified. Now you can try to create an image and run a container based on it.
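
One optional addition: since the server listens on port 8080, you can document that fact in the image with an EXPOSE instruction placed before CMD. Keep in mind that EXPOSE is purely informational; to actually reach the port from the host you still need to publish it with the -p flag when running the container (shown in Step 4):

EXPOSE 8080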

About the .dockerignore file

.dockerignore is another configuration file. It lists the files and directories that should be excluded when creating the Docker image. Your project folder probably contains many files that are important during development but have nothing to do with the image we are building.

In fact, .dockerignore is much more important than it might seem at first glance. It prevents files that are too large or sensitive from getting into the image. It also limits the effect of ADD or COPY commands used in the Dockerfile.

For example, every time you run the docker build command, Docker compares the files in the build context against its layer cache. If files that end up in the image have changed, the affected layers are rebuilt.

However, if some files in your directory are updated quite often, but are not needed to build the image, they should be excluded so as not to perform a pointless rebuild.

Creating and editing .dockerignore

The .dockerignore file should be created in the root directory of your project. Inside the file, on each new line, indicate the names of files and directories to be excluded.

# this is a comment
README.md

Like in a Dockerfile, the # symbol marks a comment. 

There are also ways to specify files more generally:

*/folder

In this case, all files or directories named folder that are exactly one level below the build context root will be excluded from the build.

You can also ignore directories and files recursively, at the root and at all levels below:

**/folder

At the same time, using !, you can define an exception and keep a specific file that would otherwise be ignored.

In this case, we will exclude all .md files except README.md:

*.md
!README.md
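
For this particular project, a reasonable .dockerignore might look like the sketch below; node_modules in particular should be excluded, because the dependencies are reinstalled inside the image by npm install anyway:

node_modules
npm-debug.log
.git
.dockerignore
Dockerfile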

Step 3. Build the Docker Image

The Docker image is created based on the description in the Dockerfile. The command for this should be run from the root of the project, where the Dockerfile is located:

docker build . -t nodeproject

The -t flag sets the name (and, optionally, the tag) of the new image. The image can later be referenced as nodeproject:latest.

After this, you can check that the image was actually created:

docker images nodeproject:latest

This command displays information about a specific Docker image:

REPOSITORY TAG IMAGE ID CREATED SIZE
nodeproject latest gk8orf8fre489 3 minutes ago 15MB

If you do not specify an image name, the command displays information about all images on the computer.

Step 4. Starting the Docker Container

Each created image can be run as a container:

docker run nodeproject

When run as a container, a Docker image becomes an operating system process whose file system, network, and process tree are isolated from the host computer.

All console output from your Node.js application will be printed to the same terminal in which the container was launched. However, binding a container process to a specific terminal instance is not the best solution.

A better practice is to run the container in the background using the --detach (-d) flag.

docker run -d nodeproject

Docker will start the container by outputting a special identifier in the terminal. It can be used to access the container in subsequent commands:

9341f8b2532b121e9dea8aeb55563c24302df96710c411c699a612e794e89ee4
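
Since the Express server listens on port 8080 inside the container, you also need to publish that port to reach the page from the host. A typical invocation (mapping host port 8080 to container port 8080 here for simplicity) would be:

docker run -d -p 8080:8080 nodeproject

After that, opening http://localhost:8080 on the host should return the index.html page.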

Before starting a container, it is a good idea to check whether it is already running, unless you are sure it is not.

For this, Docker has a command that displays a list of all containers running on the computer:

docker ps

This way you can see the container ID, the image the container was created from, the command used to start it, the time it was created, its current status, the ports it exposes, and the name of the container itself. By default, Docker assigns a random name to the container, but this can be changed using the --name flag.

Please note that we are talking about the name of the container, not the image. Let's say you run a container called myname:

docker run -d --name myname nodeproject

Now you can stop it by specifying a name:

docker stop myname

You can also remove it:

docker rm myname

Logs of a detached container

A container running in the background does not print its output to your terminal. However, the output is still collected and can be viewed like this:

docker logs myname

Now everything that your application managed to output to the console will be printed in the terminal.
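
To keep streaming new log lines as they appear, add the -f (follow) flag:

docker logs -f myname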

Conclusion

This article very briefly explains what Docker is, how it works, and why it can be useful when developing Node applications.

By understanding how to properly format a Dockerfile and deploy a Node.js application using Docker, you can automate the process of deploying software to end-user machines.

Such solutions are especially relevant in DevOps, particularly when building CI/CD pipelines.

