
Converting a Container to a Virtual Machine

Hostman Team
Technical writer
Docker
22.01.2025
Reading time: 11 min

A tricky question often asked at technical interviews for DevOps engineer positions is: "What is the difference between a container and a virtual machine?" Many candidates stumble over it, and some interviewers themselves don't fully know what answer they expect to hear. To make the differences clear once and for all, we will show you how to convert a container into a virtual machine and run it in the Hostman cloud.

And if you’re looking for a reliable, high-performance, and budget-friendly solution for your workflows, Hostman has you covered with Linux VPS Hosting options, including Debian VPS, Ubuntu VPS, and VPS CentOS.

The process described in this article will help better understand the key differences between containers and virtual machines and demonstrate each approach's practical application. This article will be especially useful for working with systems requiring a specific environment.

We will perform all further actions in a Linux OS environment. To prepare the necessary image, we use a virtual machine created with VirtualBox; other hypervisors and tools, such as VMware, QEMU/KVM, or virt-manager, will work just as well.

Configuration of Our Future Virtual Machine

Let’s start this exciting journey by creating a container. For this, we will use Docker. If it is not installed yet, install it using the command below (before that, you may need to update the list of available packages with sudo apt update):

sudo apt install docker.io -y

Create a container based on the minimal Alpine image and attach to its shell:

sudo docker run --name test -it alpine sh

Install the necessary programs using the apk package manager that you plan to use in the future virtual machine. You don’t necessarily have to limit yourself to packages from the standard Alpine repository — you can also add other repositories or, if needed, download or compile packages directly in the container.

apk add tmux busybox-extras openssh-client openssh-server iptables dhclient ppp socat tcpdump vim openrc mkinitfs grub grub-bios

Here’s a list of minimally required packages:

  • tmux — a console multiplexer. It will be useful for saving user sessions and the context of running processes in case of a network disconnect.

  • busybox-extras — an extended version of BusyBox that includes additional utilities but remains a compact distribution of standard tools.

  • openssh-client and openssh-server — OpenSSH client and server, necessary for setting up remote connections.

  • iptables — a utility for configuring IP packet filtering rules.

  • dhclient — a DHCP client for automating network configuration.

  • ppp — a package for implementing the Point-to-Point Protocol.

  • socat — a program for creating tunnels, similar to netcat, with encryption support and an interactive shell.

  • tcpdump — a utility for capturing traffic. Useful for debugging network issues.

  • vim — a console text editor with rich customization options. It is popular among experienced Linux users.

  • openrc — a dependency-based init system compatible with SysVinit scripts. It's the key component for converting a container into a virtual machine, as containers lack an init system by default.

  • mkinitfs — a package for generating initramfs, allowing you to build necessary drivers and modules that are loaded during the initial system initialization.

  • grub and grub-bios — the GRUB OS bootloader. In this case, we specifically need a bootloader for BIOS-based systems using an MBR partition table.

Set the root password:

export PASSWORD=<your secret password>  
echo "root:$PASSWORD" | chpasswd  

Create a user. You will need it for remote SSH access later:

export USERNAME=<username>  
adduser -s /bin/sh $USERNAME  

Set the SUID bit on the executable file busybox. This is necessary so that the user can execute commands with superuser privileges:

chmod u+s /bin/busybox  
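
As a quick check, you can switch to the new user and request a root shell through BusyBox (this assumes the BusyBox build honors the SUID bit for its su applet; the password is the root password set earlier):

su - $USERNAME
busybox su -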

Create a script to be executed during system initialization:

cat <<EOF > /etc/local.d/init.start  
#!/bin/sh  

dmesg -n 1  
mount -o remount,rw /  
ifconfig lo 127.0.0.1 netmask 255.0.0.0  
dhclient eth0  
# ifconfig eth0 172.16.0.200 netmask 255.255.255.0  
# route add -net default gw 172.16.0.1  
busybox-extras telnetd  
EOF  

Let’s go through the script line by line:

  • dmesg -n 1 — Displays critical messages from the Linux kernel's message buffer so that potential issues can be detected during startup.

  • mount -o remount,rw / — Remounts the root file system (/) with the rw (read-write) flag. This allows modifications to the file system after boot.

  • ifconfig lo 127.0.0.1 netmask 255.0.0.0 — Configures the loopback interface (lo) with IP address 127.0.0.1 and subnet mask 255.0.0.0. This ensures internal network communication on the machine.

  • dhclient eth0 — Runs the DHCP client for the eth0 interface to automatically obtain IP address settings and other network parameters from a DHCP server.

  • # ifconfig eth0 172.16.0.200 netmask 255.255.255.0 — This line is commented out, but if uncommented, it will assign a static IP address 172.16.0.200 and subnet mask 255.255.255.0 to the eth0 interface. We included this line in the script in case a static network configuration is needed.

  • # route add -net default gw 172.16.0.1 — This line is also commented out, but if uncommented, it will add a default route with gateway 172.16.0.1. This determines how packets will be routed outside the local network.

  • busybox-extras telnetd — Starts the Telnet server. Please note that using the Telnet protocol in production environments is not recommended due to the lack of encryption for data transmission.
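
Later, once the virtual machine is running, you can sanity-check the Telnet service from another host on the same network (172.16.0.200 here assumes the commented-out static configuration; otherwise use whatever address DHCP assigned):

telnet 172.16.0.200 23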

Make the script executable:

chmod +x /etc/local.d/init.start

Add the script to the autostart:

rc-update add local

Add the OpenSSH server daemon to the autostart. This will allow you to connect to the cloud server via SSH later:

rc-update add sshd default
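
To confirm that both the local script and sshd are registered, list the runlevel assignments:

rc-update show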

Set the default DNS server:

echo nameserver 8.8.8.8 > /etc/resolv.conf

Exit the container shell using the exit command or the keyboard shortcut CTRL+D. The next step is to save the container's file system to the host as an archive, which can also be done with Docker. In our case, the final artifact is only 75 megabytes in size.

sudo docker export test > test.tar
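
Before moving on, you can check the archive's size and peek at its contents; it is a plain snapshot of the container's root filesystem:

ls -lh test.tar
tar tf test.tar | head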

Transforming a Docker Image into a Virtual Machine Image

Containers are a Linux-specific technology since they don't have their own kernel and instead rely on abstractions of the host's Linux kernel to provide isolation and resource management. The key abstractions include:

  • namespaces: isolation of USER, TIME, PID, NET, MOUNT, UTS, IPC, and CGROUP resources.

  • cgroups: limitations on resources like CPU, RAM, and I/O.

  • capabilities: a set of capabilities for executing specific privileged operations without superuser rights.

These kernel components make Docker and other container technologies closely tied to Linux, meaning they can't natively run on other operating systems like Windows, macOS, or BSD.
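
You can observe these abstractions directly on a Linux host without Docker. For example, the unshare utility from util-linux starts a shell in a new PID namespace, where it sees itself as PID 1, exactly like a containerized process:

sudo unshare --fork --pid --mount-proc sh
# inside the new namespace, ps shows only this shell and ps itself
ps aux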

For running Docker on Windows, macOS, or BSD, there is Docker Desktop, which provides a virtual machine with a minimal Linux-based operating system kernel. Docker Engine is installed and running inside this virtual machine, enabling users to manage containers and images in their usual environment.

Since we need a full operating system and not just a container, we will require our own kernel.

  1. Create the image file we will work with:

truncate -s 200M test.img
  2. Use fdisk to create a partition on the test.img image:

echo -e "n\np\n1\n\n\nw" | fdisk test.img
    • n — create a new partition
    • p — specify that this will be a primary partition
    • 1 — the partition number
    • \n\n — use default values for the start and end sectors
    • w — write changes
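
You can verify the result; the new primary partition should start at sector 2048:

fdisk -l test.img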
  3. Associate the test.img file with the /dev/loop3 device, starting at an offset of 2048 sectors of 512 bytes (1 MiB), which is where the partition begins:

sudo losetup -o $((2048*512)) /dev/loop3 test.img

Note that /dev/loop3 may already be in use. You can check used devices with:

losetup -l
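
Alternatively, ask losetup for the first unused device instead of hard-coding /dev/loop3:

losetup -f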
  4. Format the partition linked to /dev/loop3 as EXT4:

sudo mkfs.ext4 /dev/loop3
  5. Mount the partition at /mnt:

sudo mount /dev/loop3 /mnt
  6. Extract the exported container filesystem (test.tar) into the /mnt directory:

sudo tar xvf test.tar -C /mnt
  7. Create the /mnt/boot directory to store the bootloader and kernel files:

sudo mkdir -pv /mnt/boot
  8. Download the Linux kernel source code:

wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.8.9.tar.xz
  9. Extract the Linux kernel source code in the current directory:

tar xf linux-6.8.9.tar.xz
  10. Install the necessary packages for building the Linux kernel:

sudo apt install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison -y
  11. Navigate to the kernel source directory and create the default configuration file:

cd linux-6.8.9
make defconfig
  12. Add necessary configuration options to the .config file:

echo -e "CONFIG_BRIDGE=y\nCONFIG_TUN=y\nCONFIG_PPP=y\nCONFIG_PPP_ASYNC=y\nCONFIG_PPP_DEFLATE=y" >> .config
    • CONFIG_BRIDGE=y — Enables network bridge support, allowing multiple network interfaces to be combined into one.

    • CONFIG_TUN=y — Enables support for virtual network interfaces like TUN/TAP, useful for VPN setups.

    • CONFIG_PPP=y — Enables support for the Point-to-Point Protocol (PPP).

    • CONFIG_PPP_ASYNC=y — Enables asynchronous PPP for serial ports.

    • CONFIG_PPP_DEFLATE=y — Enables PPP data compression using the DEFLATE algorithm.
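
Because these options are appended manually, it's worth normalizing the configuration afterwards; make olddefconfig keeps the existing answers, applies the new symbols, and resolves their dependencies (a step the walkthrough leaves implicit):

make olddefconfig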

  13. Prepare the source code for building:

make prepare -j4
  14. Create the necessary scripts, build the compressed kernel image (bzImage) and the kernel modules:

make scripts -j4
make bzImage -j4
make modules -j4
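
The -j4 flag runs four parallel build jobs; to use all available CPU cores instead, substitute the output of nproc:

make bzImage -j$(nproc)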
  15. Install the built kernel into /mnt/boot and the modules into the mounted virtual machine filesystem at /mnt:

sudo make INSTALL_PATH=/mnt/boot install
sudo make INSTALL_MOD_PATH=/mnt modules_install
  16. Install the GRUB bootloader, placing its files in /mnt/boot and writing the boot code to the image. Make sure you're in the directory containing the test.img file:

sudo grub-install --target=i386-pc --boot-directory=/mnt/boot --modules='part_msdos' test.img
  17. Bind-mount the host system’s /proc, /sys, and /dev directories into the /mnt directory. This is necessary for creating the initramfs:

sudo mount --bind /proc /mnt/proc/
sudo mount --bind /sys /mnt/sys/
sudo mount --bind /dev /mnt/dev/
  18. Change root (chroot) into the /mnt filesystem using a shell:

sudo chroot /mnt /bin/sh
  19. Generate the initial RAM filesystem (initramfs) for the kernel version you are working with:

mkinitfs -k -o /boot/initrd.img-6.8.9 6.8.9
  20. Generate the GRUB bootloader configuration file:

grub-mkconfig -o /boot/grub/grub.cfg
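
Before using the image, exit the chroot and release the bind mounts and the loop device; otherwise the filesystem may be left in an inconsistent state (these cleanup commands are implied but not shown above):

exit
sudo umount /mnt/dev /mnt/sys /mnt/proc
sudo umount /mnt
sudo losetup -d /dev/loop3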

By completing these steps, you will have created a small virtual machine image with a fully working Linux kernel, a bootloader (GRUB), and an initramfs.

Local Verification of the Built Image

For local verification, it’s most convenient to use QEMU. This package is available for Windows, macOS, and Linux. Install it by following the instructions for your OS on the official website.

  1. Convert the test.img to the qcow2 format. This will reduce the size of the final image from 200 MB to 134 MB.

qemu-img convert test.img -O qcow2 test.qcow2
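
To confirm the conversion and inspect the image's virtual and actual size:

qemu-img info test.qcow2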
  2. Run the image using QEMU.

qemu-system-x86_64 -hda test.qcow2
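
QEMU allocates 128 MB of RAM by default, which is enough for this minimal image; to give the guest more, add the -m flag:

qemu-system-x86_64 -hda test.qcow2 -m 512M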

If all steps were completed correctly, the system will boot and a login prompt will appear.

To check the version of the installed kernel, use the uname -a command, which will output the necessary information.

Creating a Virtual Machine in Hostman

Go to the Cloud Servers section and start creating a new server. Select the prepared and tested image as the server’s base. To do this, first add it to the list of available images. Supported formats include: iso, qcow2, vmdk, vhd, vhdx, vdi, raw, img.


Upload the image in one of the available ways: from your computer or by link.


Note that after uploading, the image will also be available via URL.


Continue with the creation of the cloud server and specify the other parameters of its configuration. Since the image is minimal, it can be run even on the smallest configuration.

Once the cloud server is created, go to the Console tab and verify whether the virtual machine was successfully created from the image.


The virtual machine has been created and works correctly.


Since we added the OpenSSH daemon to the autostart in advance, it is now possible to establish a full remote connection to the server using the username, IP address, and password.


Conclusion

To turn a container into a full-fledged lightweight virtual machine, we sequentially added key components: the OpenRC initialization system, GRUB bootloader, Linux kernel, and initramfs. This process highlighted the importance of each component in the overall virtual machine architecture and demonstrated the practical differences from container environments.

This experiment also shows why understanding the architecture and role of each component matters: it lets you build images for specific needs and manage virtual machines more efficiently in terms of resources. The image built in this article is quite minimal, since it is a proof of concept, but you can go further. For example, you could follow a kernel-minimization guide and explore minimal Linux distributions such as Tiny Core Linux or SliTaz. If, on the other hand, you choose to add functionality at the cost of image size, we strongly recommend checking out the Gentoo Wiki, which offers extensive information on fine-tuning the system.
