Sometimes, it’s useful to work with Selenium in Python within a Docker container. This raises questions about the benefits of using such tools, version compatibility between ChromeDriver and Chromium, and the nuances of their implementation. In this article, we’ll cover key considerations and provide solutions to common issues.
Running Selenium in a container offers several advantages: an isolated, reproducible environment; no need to install a browser or driver on the host; matching browser and driver versions out of the box; and easier integration into CI/CD pipelines.
First, make sure Docker and Docker Compose are installed on the server:
docker --version && docker compose version
In older Docker Compose (v1) installations, the command is docker-compose instead of docker compose.
If the tools are installed, you’ll see output confirming their versions. If not, follow the official Docker installation guide.
When deploying Selenium in Docker containers, consider the host architecture, functional requirements, and performance. Official selenium/standalone-* images are built for AMD64 (x86_64) CPUs, while seleniarm/standalone-* images are adapted for ARM architectures (e.g., Apple silicon or ARM64 server CPUs).
First, create a docker-compose.yml file in your project root. It will contain two services:
version: "3"
services:
  app:
    build: .
    restart: always
    volumes:
      - .:/app
    depends_on:
      - selenium
    platform: linux/amd64
  selenium:
    image: selenium/standalone-chromium:latest    # For AMD64
    # image: seleniarm/standalone-chromium:latest # For ARM64
    container_name: selenium-container
    restart: unless-stopped
    shm_size: 2g
    ports:
      - "4444:4444"  # Selenium WebDriver API
      - "7900:7900"  # VNC viewer
    environment:
      - SE_NODE_MAX_SESSIONS=1
      - SE_NODE_OVERRIDE_MAX_SESSIONS=true
      - SE_NODE_SESSION_TIMEOUT=300
      - SE_NODE_GRID_URL=http://localhost:4444
      - SE_NODE_DETECT_DRIVERS=false
You must choose the correct image for your system architecture by uncommenting the appropriate line.
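If you are not sure which architecture your host uses, a quick check from the terminal is:

```shell
# x86_64 means AMD64 (use selenium/standalone-*);
# aarch64 or arm64 means ARM (use seleniarm/standalone-*)
uname -m
```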
The app service will run your main Python code. Let’s define a standard Dockerfile for this service:
# Use a minimal Python image
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Install Python dependencies
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
# Copy project files
COPY . /app/
# Set environment variables (Chromium is in a separate container)
ENV SELENIUM_REMOTE_URL="http://selenium:4444/wd/hub"
# Run Python script
CMD ["python", "main.py"]
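Since the Dockerfile copies the entire project directory into the image, it can help to exclude files that are not needed at runtime. A minimal .dockerignore might look like this (the exact entries depend on your project):

```
.git
__pycache__/
*.pyc
.env
```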
This Dockerfile uses a base Python image and automatically installs the necessary dependencies. Now let’s add the driver initialization script to main.py:
import os
import time  # Used below to keep the browser session alive for inspection
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# WebDriver settings
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--disable-gpu")
chrome_options.add_argument("--disable-webrtc")
chrome_options.add_argument("--hide-scrollbars")
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument("--start-maximized")

SELENIUM_REMOTE_URL = os.getenv("SELENIUM_REMOTE_URL", "http://selenium:4444/wd/hub")
driver = webdriver.Remote(
    command_executor=SELENIUM_REMOTE_URL,
    options=chrome_options
)

# Open a test page
driver.get("https://www.timeweb.cloud")

# Keep the session open so you can inspect it via VNC;
# in real code, replace this with your scraping or testing logic
time.sleep(9999)

# Shut down WebDriver
driver.quit()
In the requirements.txt file, list standard dependencies, including Selenium:
attrs==25.1.0
certifi==2025.1.31
h11==0.14.0
idna==3.10
outcome==1.3.0.post0
PySocks==1.7.1
selenium==4.28.1
sniffio==1.3.1
sortedcontainers==2.4.0
trio==0.28.0
trio-websocket==0.11.1
typing_extensions==4.12.2
urllib3==2.3.0
websocket-client==1.8.0
wsproto==1.2.0
Now you can launch the containers:
docker compose up -d
Docker will build the app image and launch both containers.
To verify everything is running correctly:
docker compose ps
You should see two running containers, which means everything started successfully. You can now extend the script in main.py to interact with any site.
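One practical wrinkle: if the app container starts faster than the Selenium container, webdriver.Remote may fail to connect. A simple remedy is to poll the Grid’s /status endpoint before creating the driver. The sketch below uses only the standard library; wait_for_grid and is_grid_ready are illustrative helper names, not part of Selenium’s API:

```python
import json
import time
import urllib.request

def is_grid_ready(status: dict) -> bool:
    # The Grid /status payload reports readiness under value.ready
    return bool(status.get("value", {}).get("ready"))

def wait_for_grid(url: str = "http://selenium:4444/status",
                  attempts: int = 30, delay: float = 1.0) -> bool:
    # Poll the Grid status endpoint until it reports ready or we give up
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if is_grid_ready(json.load(resp)):
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False
```

In main.py, you could call wait_for_grid() before webdriver.Remote(...) and exit early if it returns False.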
In Selenium Docker images (selenium/standalone-chrome, seleniarm/standalone-chromium, etc.), direct access to the Chrome DevTools Protocol is mediated by Selenium Grid: it generates a new CDP endpoint for each session and proxies it via WebSocket. Arguments like --remote-debugging-port=9229 are ignored or overwritten by Selenium, making direct browser port access impossible from outside the container.
Instead, these Docker images ship with built-in VNC (Virtual Network Computing) access, similar in purpose to remote-desktop tools like TeamViewer or AnyDesk, though implemented differently.
VNC requires headless mode to be disabled, since it transmits the actual screen content — and if the screen is blank, there will be nothing to see.
You can connect to the VNC web interface at:
http://<server_ip>:7900
When connecting, you'll be asked for a password. To generate one, connect to the selenium-container via terminal:
docker exec -it selenium-container bash
Then enter:
x11vnc -storepasswd
You’ll be prompted to enter and confirm a password interactively.
Enter the created password into the VNC web interface, and you’ll gain access to the browser controlled by Selenium inside Docker.
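As an alternative to setting the password interactively, recent docker-selenium images document an SE_VNC_PASSWORD environment variable (and SE_VNC_NO_PASSWORD to disable the prompt entirely). Assuming your image version supports it, you could set it directly in docker-compose.yml:

```
    environment:
      - SE_VNC_PASSWORD=mysecret  # assumption: supported by your image version
```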
From there, you can open DevTools to inspect DOM elements or debug network requests.
Running Selenium in Docker containers simplifies environment portability and reduces the risk of version conflicts between tools. It also allows visual debugging of tests via VNC, if needed. Just make sure to choose the correct image for your system architecture and disable headless mode when a graphical interface is required. This provides a more flexible and convenient infrastructure for testing and accelerates Selenium integration into CI/CD pipelines.