A tricky question often asked during technical interviews for a DevOps engineer position is: "What is the difference between a container and a virtual machine?" Most candidates get confused when answering this question, and some interviewers themselves don’t fully understand what kind of answer they want to hear. To clearly understand the differences and never have to revisit this question, we will show you how to convert a container into a virtual machine and run it in the Hostman cloud.
The process described in this article will help better understand the key differences between containers and virtual machines and demonstrate each approach's practical application. This article will be especially useful for working with systems requiring a specific environment.
We will perform all further actions in a Linux OS environment and use a virtual machine based on the KVM hypervisor created with VirtualBox to prepare the necessary image. You can also use other providers such as VMware, QEMU, or virt-manager.
Let’s start this exciting journey by creating a container. For this, we will use Docker. If it is not installed yet, install it using the command below (before that, you may need to update the list of available packages with sudo apt update):
sudo apt install docker.io -y
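To confirm that Docker is installed and the daemon is running, you can, for example, run:
sudo docker version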
Create a container based on the minimal Alpine image and attach to its shell:
sudo docker run --name test -it alpine sh
Using the apk package manager, install the programs you plan to use in the future virtual machine. You don’t necessarily have to limit yourself to packages from the standard Alpine repository — you can also add other repositories or, if needed, download or compile packages directly in the container.
apk add tmux busybox-extras openssh-client openssh-server iptables dhclient ppp socat tcpdump vim openrc mkinitfs grub grub-bios
Here’s a list of minimally required packages:
tmux — a console multiplexer. It will be useful for saving user sessions and the context of running processes in case of a network disconnect.
busybox-extras — an extended version of BusyBox that includes additional utilities but remains a compact distribution of standard tools.
openssh-client and openssh-server — OpenSSH client and server, necessary for setting up remote connections.
iptables — a utility for configuring IP packet filtering rules.
dhclient — a DHCP client for automating network configuration.
ppp — a package for implementing the Point-to-Point Protocol.
socat — a program for creating tunnels, similar to netcat, with encryption support and an interactive shell.
tcpdump — a utility for capturing traffic. Useful for debugging network issues.
vim — a console text editor with rich customization options. It is popular among experienced Linux users.
openrc — an initialization system based on dependency management that works with SysVinit. It’s a key component needed to convert a container into a virtual machine, as containers do not have it by default.
mkinitfs — a package for generating initramfs, allowing you to build necessary drivers and modules that are loaded during the initial system initialization.
grub and grub-bios — the OS bootloader. In this case, we are specifically interested in creating a bootloader for BIOS-based systems using an MBR partition table.
Set the root password:
export PASSWORD=<your secret password>
echo "root:$PASSWORD" | chpasswd
Create a user. You will need it for remote SSH access later:
export USERNAME=<username>
adduser -s /bin/sh $USERNAME
Set the SUID bit on the busybox executable file. This is necessary so that the user can execute commands with superuser privileges:
chmod u+s /bin/busybox
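As a quick illustration of what this enables (assuming BusyBox’s su applet and the root password set above), the unprivileged user should later be able to obtain a root shell with:
busybox su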
Create a script to be executed during system initialization:
cat <<EOF > /etc/local.d/init.start
#!/bin/sh
dmesg -n 1
mount -o remount,rw /
ifconfig lo 127.0.0.1 netmask 255.0.0.0
dhclient eth0
# ifconfig eth0 172.16.0.200 netmask 255.255.255.0
# route add -net default gw 172.16.0.1
busybox-extras telnetd
EOF
Let’s go through the script line by line:
dmesg -n 1 — Sets the kernel console log level so that only critical messages from the Linux kernel’s message buffer are printed to the console, making potential issues easier to spot during startup.
mount -o remount,rw / — Remounts the root file system (/) with the rw (read-write) flag. This allows modifications to the file system after boot.
ifconfig lo 127.0.0.1 netmask 255.0.0.0 — Configures the loopback interface (lo) with IP address 127.0.0.1 and subnet mask 255.0.0.0. This ensures internal network communication on the machine.
dhclient eth0 — Runs the DHCP client for the eth0 interface to automatically obtain an IP address and other network parameters from a DHCP server.
# ifconfig eth0 172.16.0.200 netmask 255.255.255.0 — This line is commented out, but if uncommented, it will assign the static IP address 172.16.0.200 and subnet mask 255.255.255.0 to the eth0 interface. We included this line in the script in case a static network configuration is needed.
# route add -net default gw 172.16.0.1 — This line is also commented out, but if uncommented, it will add a default route with gateway 172.16.0.1. This determines how packets will be routed outside the local network.
busybox-extras telnetd — Starts the Telnet server (a quick connection example is shown after this list). Please note that using the Telnet protocol in production environments is not recommended due to the lack of encryption for data transmission.
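Once the finished virtual machine is running, you should be able to reach this Telnet service (port 23 by default) with a standard client, for example:
telnet <vm-ip-address>
Here <vm-ip-address> is a placeholder for whatever address the machine obtains via DHCP.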
Make the script executable:
chmod +x /etc/local.d/init.start
Add the script to the autostart:
rc-update add local
Add the OpenSSH server daemon to the autostart. This will allow you to connect to the cloud server via SSH later:
rc-update add sshd default
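To verify that both services ended up in the expected runlevels, you can, for instance, list the OpenRC configuration:
rc-update show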
Set the default DNS server:
echo nameserver 8.8.8.8 > /etc/resolv.conf
Exit the terminal using the exit command or the keyboard shortcut CTRL+D. The next step is to save the container's file system to the host as an archive, which can also be done using Docker. In my case, the final artifact is only 75 megabytes in size.
sudo docker export test > test.tar
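You can check the size of the resulting archive on the host (the exact number will vary depending on the packages you installed):
ls -lh test.tar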
Containers are a Linux-specific technology since they don't have their own kernel and instead rely on abstractions of the host's Linux kernel to provide isolation and resource management. The key abstractions include:
namespaces: isolation for USER, TIME, PID, NET, MOUNT, UTS, IPC, and CGROUP namespaces.
cgroups: limitations on resources like CPU, RAM, and I/O.
capabilities: a set of capabilities for executing specific privileged operations without superuser rights.
These kernel components make Docker and other container technologies closely tied to Linux, meaning they can't natively run on other operating systems like Windows, macOS, or BSD.
For running Docker on Windows, macOS, or BSD, there is Docker Desktop, which provides a virtual machine with a minimal Linux-based operating system kernel. Docker Engine is installed and running inside this virtual machine, enabling users to manage containers and images in their usual environment.
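If you want to see these mechanisms from the host side, a couple of illustrative commands (run on the Linux host) are:
sudo lsns                  # list the namespaces present on the host and the processes that own them
cat /proc/self/cgroup      # show which cgroup hierarchy the current process belongs to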
Since we need a full operating system and not just a container, we will require our own kernel.
Create the image file we will work with:
truncate -s 200M test.img
Use fdisk to create a partition on the test.img image:
echo -e "n\np\n1\n\n\nw" | fdisk test.img
n — create a new partition
p — specify that this will be a primary partition
1 — the partition number
\n\n — use default values for the start and end sectors
w — write changes

Associate the test.img file with the /dev/loop3 device, starting from an offset of 2048 blocks (1 MB):
sudo losetup -o $[2048*512] /dev/loop3 test.img
Note that /dev/loop3 may already be in use. You can check used devices with:
losetup -l
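If /dev/loop3 turns out to be busy, you can ask losetup for the first free loop device and use that name in the following commands instead:
sudo losetup -f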
Format the partition linked to /dev/loop3 as EXT4:
sudo mkfs.ext4 /dev/loop3
Mount the partition at /mnt:
sudo mount /dev/loop3 /mnt
Extract the exported container filesystem (test.tar) into the /mnt directory:
sudo tar xvf test.tar -C /mnt
Create the /mnt/boot directory to store the bootloader and kernel files:
sudo mkdir -pv /mnt/boot
Download the Linux kernel source code:
wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.8.9.tar.xz
Extract the Linux kernel source code in the current directory:
tar xf linux-6.8.9.tar.xz
Install the necessary packages for building the Linux kernel:
sudo apt install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison -y
Navigate to the kernel source directory and create the default configuration file:
cd linux-6.8.9
make defconfig
Add necessary configuration options to the .config file:
echo -e "CONFIG_BRIDGE=y\nCONFIG_TUN=y\nCONFIG_PPP=y\nCONFIG_PPP_ASYNC=y\nCONFIG_PPP_DEFLATE=y" >> .config
CONFIG_BRIDGE=y — Enables network bridge support, allowing multiple network interfaces to be combined into one.
CONFIG_TUN=y — Enables support for virtual network interfaces like TUN/TAP, useful for VPN setups.
CONFIG_PPP=y — Enables support for the Point-to-Point Protocol (PPP).
CONFIG_PPP_ASYNC=y — Enables asynchronous PPP for serial ports.
CONFIG_PPP_DEFLATE=y — Enables PPP data compression using the DEFLATE algorithm.
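Since these options are simply appended to the end of .config, it is safer to let the build system re-validate the configuration so that any dependent options are resolved; one way to do this is:
make olddefconfig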
Prepare the source code for building:
make prepare -j4
Create the necessary scripts, build the compressed kernel image (bzImage) and the kernel modules:
make scripts -j4
make bzImage -j4
make modules -j4
Install the built kernel into the /mnt/boot directory and the modules into /mnt (the mounted virtual machine image filesystem):
sudo make INSTALL_PATH=/mnt/boot install
sudo make INSTALL_MOD_PATH=/mnt modules_install
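After these two commands, /mnt/boot should contain files along the lines of vmlinuz-6.8.9, System.map-6.8.9, and config-6.8.9, and the modules should appear under /mnt/lib/modules/6.8.9. A quick way to confirm:
ls /mnt/boot /mnt/lib/modules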
Install the GRUB bootloader into the /mnt/boot directory. Make sure you're in the directory containing the test.img file:
sudo grub-install --target=i386-pc --boot-directory=/mnt/boot --modules='part_msdos' test.img
Bind-mount the host system’s /proc, /sys, and /dev directories to the /mnt directory. This is necessary for creating the initramfs:
sudo mount --bind /proc /mnt/proc/
sudo mount --bind /sys /mnt/sys/
sudo mount --bind /dev /mnt/dev/
Change root (chroot) into the /mnt filesystem using a shell:
sudo chroot /mnt /bin/sh
Generate the initial RAM filesystem (initramfs) for the kernel version you are working with:
mkinitfs -k -o /boot/initrd.img-6.8.9 6.8.9
Generate the GRUB bootloader configuration file:
grub-mkconfig -o /boot/grub/grub.cfg
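Before testing the image, leave the chroot shell and release everything that was mounted, roughly like this (adjust the loop device name if you used a different one):
exit
sudo umount /mnt/proc /mnt/sys /mnt/dev
sudo umount /mnt
sudo losetup -d /dev/loop3
Otherwise the loop device stays attached and cached writes may not yet be flushed to the image file.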
By completing these steps, you will have created a small virtual machine image with a fully working Linux kernel, a bootloader (GRUB), and an initramfs.
For local verification, it’s most convenient to use QEMU. This package is available for Windows, macOS, and Linux. Install it by following the instructions for your OS on the official website.
Convert test.img to the qcow2 format. This will reduce the size of the final image from 200 MB to 134 MB.
qemu-img convert test.img -O qcow2 test.qcow2
Run the image using QEMU.
qemu-system-x86_64 -hda test.qcow2
If all steps were completed correctly, the initialization process will be successful, and an interactive menu for entering the login and password will appear.
To check the version of the installed kernel, use the uname -a command, which will output the necessary information.
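If you also want to test the SSH login locally before uploading the image, one option (assuming QEMU's user-mode networking and that password authentication is allowed by the default sshd configuration) is to forward a host port to the guest's port 22:
qemu-system-x86_64 -hda test.qcow2 -nic user,hostfwd=tcp::2222-:22
ssh -p 2222 <username>@127.0.0.1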
Go to the Cloud Servers section and start creating a new server. Select the prepared and tested image as the server’s base. To do this, first add it to the list of available images. Supported formats include: iso, qcow2, vmdk, vhd, vhdx, vdi, raw, img.
Upload the image in one of the available ways: from your computer or by link.
Note that after uploading, the image will also be available via URL.
Continue with the creation of the cloud server and specify the other parameters of its configuration. Since the image is minimal, it can be run even on the smallest configuration.
Once the cloud server is created, go to the Console tab and verify whether the virtual machine was successfully created from the image.
The virtual machine has been created and works correctly.
Since we added the OpenSSH daemon to the autostart in advance, it is now possible to establish a full remote connection to the server using the username, IP address, and password.
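For example, from your local machine (substituting the username you created and the server’s public IP address):
ssh <username>@<server-ip>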
To turn a container into a full-fledged lightweight virtual machine, we sequentially added key components: the OpenRC initialization system, GRUB bootloader, Linux kernel, and initramfs. This process highlighted the importance of each component in the overall virtual machine architecture and demonstrated the practical differences from container environments.
As a result of this experiment, we realized the importance of understanding the architecture and functions of each component to successfully create images for specific needs and to manage virtual machines more effectively from a resource perspective. The image built in this article is quite minimal since it is a Proof-of-Concept, but one can go even further. For example, you could use a special guide to minimize the kernel and explore minimal Linux distributions such as Tiny Core Linux or SliTaz. On the other hand, if your choice is to add functionality by increasing the image size, we strongly recommend checking out the Gentoo Wiki. This resource offers extensive information on fine-tuning the system.