Docker Exec: How to Use It to Run Commands in a Container

Minhal Abbas
Technical writer
Docker
29.01.2025
Reading time: 8 min

Docker is a versatile platform for building, running, and deploying applications inside containers. One of its most useful utilities is docker exec, which lets you run commands inside an already running container. This is invaluable while building or maintaining containers: you can inspect configurations, check the current state of an application, or debug issues without rebuilding or restarting anything. In short, docker exec gives you a direct command-line entry point into your dockerized applications.

This tutorial covers docker exec in detail, with practical use cases and explanations.

Prerequisites

Before you begin, make sure you meet the following prerequisites:

  • Installation: Verify that Docker is already installed. If not, check our tutorial to install it.
  • Permissions: Your user account must have sufficient privileges to run Docker commands.
  • Running container: At least one container must be running. Use docker ps to find its ID or name.
  • General concepts: Familiarity with Docker basics and Linux systems will help you troubleshoot any issues along the way.

These requirements are necessary before beginning the setup.
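To quickly confirm that Docker is installed and the daemon is reachable, you can run two short checks before proceeding (output will vary depending on your setup):

docker --version
docker ps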

Basic Introduction 

docker exec gives you greater control over your containerized applications and helps with managing, monitoring, and debugging apps inside a running container. With it you can open interactive sessions, run shell commands, and execute scripts directly in the container, which is a convenient way to boost productivity and automate routine workflows.

Because you interact with the live application, you can investigate issues and adjust configuration without a full container restart, which saves time and avoids unnecessary downtime.

General Syntax 

The general syntax is:

docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

  • OPTIONS: Flags that customize the command's behavior. Commonly used options include (a combined example follows this list):

    • -i: Keeps STDIN open even if it is not attached.

    • -t: Allocates a pseudo-TTY.

    • -u USER: Runs the command as the specified user.

    • -w WORKDIR: Sets the working directory for the command.

  • CONTAINER: The ID or name of the container in which the command runs.

  • COMMAND: The command or script to run inside the container.

  • ARG: Any additional arguments passed to COMMAND.
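The options can be combined. As a minimal sketch (assuming the target image ships bash and has a /tmp directory), the following opens an interactive root shell with /tmp as the working directory:

docker exec -it -u root -w /tmp mynginx bash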

How to Use Docker Exec to Run Commands in a Container

With docker exec you can run programs, check logs, and perform other administrative operations inside a running container from its command line. This makes container management more flexible and gives you finer control over dockerized applications.

Testing with a Sample Container

Before running docker exec, you need at least one running container. If you don't have one yet, start a container with a name of your choice. In this guide, we use mynginx:

docker run -d --name mynginx nginx


Finding the Active Container ID

First, you need the ID or name of the running container. Run the following command to list all currently running containers:

docker ps


In our example, the container ID is b51dc8e05c77 and the name is mynginx.
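If you script these steps, it is often convenient to capture the container ID in a shell variable instead of copying it by hand. A small sketch, assuming the container is named mynginx as above:

CID=$(docker ps -q --filter "name=mynginx")
docker exec "$CID" nginx -v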

Working With a Particular Directory

In this first example, we run a command in a specific directory of the running container. To do this, pass the --workdir (or -w) option with the directory path. Here, pwd runs inside the mynginx container:

docker exec --workdir /tmp mynginx pwd

Here:

  • docker exec: The base command for running a command inside a running container.
  • --workdir /tmp: The OPTION that sets the working directory.
  • mynginx: The CONTAINER name.
  • pwd: The COMMAND executed inside the container.


The output shows that pwd runs inside mynginx with the working directory set to /tmp.
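The same option works with any command. For example, assuming the standard layout of the official nginx image, you can list its configuration directory directly:

docker exec -w /etc/nginx mynginx ls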

Single Command Execution 

In this example, we execute a single command. Specify the container name or ID first, followed by the command you want to run. Here, mynginx is the running container and echo "Hello, Hostman Users!" is the command:

docker exec mynginx echo "Hello, Hostman Users!"


The output confirms that echo "Hello, Hostman Users!" was executed inside mynginx.
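Any non-interactive command can be run this way. For instance, with the official nginx image you can print the server version or view its main configuration file (paths assume the default image layout):

docker exec mynginx nginx -v
docker exec mynginx cat /etc/nginx/nginx.conf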

Executing Several Commands

You can execute several commands in one statement by passing them to a shell inside the container with the -c flag and separating them with semicolons. Note that if you write docker exec mynginx ls; free -m; df -h, only ls runs inside the container; the other commands are executed by your local shell on the host. Use the following form instead:

docker exec mynginx sh -c "ls; free -m; df -h"


In the output, ls lists the contents of the current directory, free -m reports memory usage, and df -h shows disk space usage, letting you inspect memory, filesystem, and other details in a single statement. Keep in mind that minimal images may not include every utility (for example, free comes from the procps package), so adjust the command list to what the image actually ships.
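If the sequence should stop as soon as one command fails, chain the commands with && instead of semicolons. A brief sketch using utilities shipped with the nginx image:

docker exec mynginx sh -c "ls /etc/nginx && nginx -t"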

Enabling the Shell Through Name

You can open a shell inside the containerized app, which gives you an interactive interface for browsing the file system and running scripts. The -it options enable interactive mode and allocate a terminal:

docker exec -it mynginx /bin/bash


This opens a bash session inside mynginx. Note that /bin/bash is not guaranteed to be present in every Docker image, so you may need to fall back to another shell such as sh.

To close the session, type exit and press ENTER:

exit


To launch another shell such as sh (on Debian-based images /bin/sh is usually a symlink to a lightweight shell like dash), use /bin/sh instead:

docker exec -it mynginx /bin/sh


The command opens a working sh session inside the container.
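If you are not sure which shells an image provides, you can check before opening a session. A quick, hedged check (paths may differ between images, and ls will report any shell that is missing):

docker exec mynginx ls -l /bin/sh /bin/bash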

Enabling the Shell Through ID

In this use case, we open a session by referencing the container ID b51dc8e05c77 instead of the name. With the -it flags you can interact with the shell as if you were logged in directly: -t allocates a pseudo-TTY and -i keeps STDIN open. Both are useful for analysis, debugging, and administrative tasks:

docker exec -it b51dc8e05c77 bash


Inside the shell, you can inspect the current directory in detail, including file size, owner, group, number of links, modification date, and permissions:

ls -l


The listing shows detailed information for each file and directory, which helps you understand their attributes and manage them effectively.

Working As a Particular User

You can execute a command as a specific user with the -u option. This is useful when a command must run with particular privileges. The general form specifies the user and, optionally, the group:

docker exec -u <user>:<group> <container_id> <command> 

For instance, whoami runs as the www-data user in the mynginx container:

docker exec -u www-data mynginx whoami


The output www-data confirms that the command ran under the expected user.
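You can also pass a numeric UID and GID instead of names. In Debian-based images www-data typically maps to UID and GID 33, but treat that as an assumption and verify it first:

docker exec mynginx id www-data
docker exec -u 33:33 mynginx id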

Running Commands Non-Interactively

Sometimes no interaction is needed at all. In that case, simply run the command without the -i and -t flags:

docker exec mynginx tail /etc/passwd 


This prints the last 10 lines of /etc/passwd, the file in the /etc directory that stores user account information. It is a quick way to review accounts when troubleshooting.
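Non-interactive one-off commands are also a quick way to inspect the container environment, for example to see which distribution the image is based on:

docker exec mynginx cat /etc/os-release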

Working With a Single Environment Variable

You may need to pass environment variables to the command running in the container. To do this, use the -e option:

docker exec -e MY_VAR=value mynginx printenv MY_VAR


The output shows that printenv MY_VAR runs in mynginx and prints value, confirming the variable was set correctly.

Working With Multiple Environment Variables

You can set several variables by repeating the -e flag:

docker exec -e TEST=john -e ENVIRONMENT=prod mynginx env


The output confirms that TEST and ENVIRONMENT are set to john and prod inside mynginx.
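Note that if you want a variable expanded by the shell inside the container, quote the command so your local shell does not expand it first. A small sketch (the variable name is arbitrary):

docker exec -e TEST=john mynginx sh -c 'echo "Value inside the container: $TEST"'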

Working With the Detached Mode

You can run a command in detached mode with the -d flag, so it executes in the background:

docker exec -d mynginx sleep 500


The command returns immediately while mynginx keeps executing sleep 500 in the background.
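To verify that the background process is really running, inspect the container's processes from the host:

docker top mynginx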

Working With the Privileged Mode

The --privileged flag lets you execute a command with extended privileges inside the running container, for example mount:

docker exec --privileged mynginx mount


Without arguments, mount simply lists the existing mount points; with elevated privileges you could also create new ones inside mynginx.
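As a hedged illustration (mounting a tmpfs needs the extra capabilities that --privileged grants, /mnt is assumed to exist in the image, and --privileged should be used sparingly for security reasons):

docker exec --privileged mynginx mount -t tmpfs tmpfs /mnt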

More Information on docker exec

The --help option displays a summary of all available options with concise explanations:

docker exec --help


Final Words

docker exec is an effective utility for controlling and interacting with running containers. It helps you monitor, manage, and debug applications without interrupting them: you can run commands, launch shells, adjust configuration, and set environment variables on the fly. Once you are comfortable with this utility, managing containers becomes much easier, and building and deploying applications goes far more smoothly.


