Want to know what Docker is? Need to know how to copy files from your host to a Docker container? Read this article to learn everything about Docker containers and Docker container hosting.
There are few technologies as popular as Docker. Developers and administrators buzz about it, and it is everywhere. Even if you’re not a DevOps engineer, it is important to understand what Docker containers are, how they work, and why they are so useful. Let’s figure it out.
It is hard to answer the question "what is a Docker container" without explaining what containerization is. Containerization is an alternative to hardware virtualization: instead of emulating a whole machine, developers create isolated software copies of computers where their applications are launched and run.
Docker is an open-source piece of software used to develop, test, deploy, and run web applications in a special virtual environment. Docker helps reduce the hardware resources needed to keep applications running stably and efficiently. It is also one of the best tools for developers who want to launch pre-made software products quickly, scale them up, or move them to other environments without worrying about the reliability of their project.
A container is the product of containerization. It is, in essence, quite similar to a virtual machine, but much smaller and dedicated to running only one application rather than a fully-fledged OS with a large number of software tools.
This kind of virtualization works like this:
We put all the necessary components of the web application into isolated software environments (virtual blocks).
Every block contains everything we need to launch an app properly.
Every block can be initiated multiple times.
So, containers are small virtual machines of a sort that help developers relaunch an application many times on one computer and keep this fleet of apps reliable and performant.
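For example, one image can back several containers at once. A quick sketch, using the official nginx image (the names web1 and web2 are illustrative):
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
Each command launches an isolated copy of the same application, listening on its own host port.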
The main part of Docker is the Docker Engine. It consists of a server, a REST API, and a client. The first is responsible for initializing and controlling containers, the second works as an interpreter between the user and Docker, and the third is used to pass CLI commands to the Docker Engine server.
This is the way it works:
The user sends a CLI command to the Docker server via the Docker client. For example, a command that creates a new container or pulls an image of an OS with pre-installed software tools that the user wants to use in their project.
Then the Docker server analyzes the command and the configuration data written in the Dockerfile (a text file that holds Docker settings and build instructions) and performs the actions the user’s command dictates.
After the command is executed the Docker container is activated.
So the main idea is that the developer communicates with an app in a virtual environment using special software layers created by Docker.
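To make that concrete, here is a minimal Dockerfile sketch, assuming a Node.js app whose entry point is index.js (both are illustrative assumptions):
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]
Running docker build -t my-app . hands this file to the Docker server, which builds an image that containers can then be started from.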
There are 6 main advantages of using Docker instead of standard virtual machines:
Docker saves computer resources. Instead of emulating an OS, it uses core components of the Linux distribution installed on the server and creates containers for applications like the database, the interface, etc.
Docker saves your time. You don’t have to set up every virtual machine. Set up just one, and Docker will use its core elements to create containers with the application running inside.
It protects you from malware. Docker containers are isolated from the core OS, so you can run code inside a virtual block with far less risk of malware corrupting your server.
It’s a piece of cake to scale. The only thing you have to do to scale your project is add new containers (see the sketch after this list). The scope of the project depends on the number of virtual blocks used and nothing else.
Docker can launch apps on any compatible host. So you don’t have to spend time setting everything up before starting containerized software.
Docker works more smoothly than most virtualization technologies. It uses a layered union file system, so images share common layers and stay efficient to build and store.
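To illustrate the scaling point from this list: with Docker Compose, multiplying a service is one flag. A hedged sketch, assuming your compose file defines a service named web:
docker compose up -d --scale web=3
This starts three identical containers of the web service side by side.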
You’re probably wondering: why use Docker containers in the first place? Well, the idea behind Docker is really simple, which is why the technology has become so popular among developers, server administrators, testers, and programmers alike.
It is often used in projects built around large web services because of its easily scalable architecture, where DevOps specialists can spin up new iterations of the app with only a few commands.
Also, administrators love Docker for its monitoring: with containers, it is easy to keep an eye on the whole system and on individual components.
The Docker ecosystem with containers and static files is an app, just like a classic website or web application. To make it accessible to users online, you have to host it somewhere: on a remote computer that will keep your service or application running day and night.
Docker container hosts are not so different from any other hosts that DevOps specialists or developers work with. But it is really important to choose a host that will give you a reliable, fully-fledged platform for your project.
What is a Docker host, and how do we work with one? Generally, a Docker host is the hardware that serves as a platform for the operating system and the Docker containers; basically, we are talking about a server: a computer placed somewhere in the host’s data center that works 24/7 to maintain your web application or website.
To work with containers (i.e., develop, test, deploy, and run applications) you need a proper server. It can be a virtual private server or a dedicated server. Many developers believe a dedicated server is the best option, and some think a VPS is safer, but the easiest way to host Docker containers is to use a hosting platform like Hostman. The platform can take a GitHub (GitLab and Bitbucket are also supported) repository with your code and deploy it in a few clicks. All you have to do is:
connect your repository with the Dockerfile to a Hostman profile,
choose the option to deploy a Docker container,
configure the host (server),
and wait till the rest is done almost automatically.
Hostman will find all the necessary files and create an instance of your Docker service. In a few simple steps, you’ll get a ready-to-work environment.
Hostman not only simplifies the process of deploying websites, applications, and complex IT infrastructures, but also gives users a secure and trustworthy server for projects of any type. Moreover, services like Hostman dramatically reduce DevOps costs because the hosting will:
Set up a cloud server for you.
Install all dependencies.
Build the code.
Set up a CDN for static files.
Install an SSL certificate.
Monitor the app and relaunch it if it fails.
Essentially, Hostman becomes your DevOps engineer.
At the very beginning, when we have created a Docker image and are ready to launch the container for the first time, we use the command docker run. But on its own it won’t work as expected, because the developer has to forward ports so that traffic can reach the container from the host and vice versa.
It is necessary to connect Docker containers to the host because project files are often stored on the local machine yet are meant to be managed and accessed from within containers.
So, to access the host from the Docker container, the developer has to pass additional options when launching the container. For example:
docker run -d -p 8099:8080 [name of the Docker container]
Here the -d flag runs the container in the background, and -p 8099:8080 publishes the container’s internal port 8080 on the host’s port 8099, making it possible for the newly created container and the local machine to interact.
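You can then check the mapping from the host. A quick sketch, assuming the app inside the container serves HTTP on port 8080:
curl http://localhost:8099
If the mapping works, the application’s response comes back through host port 8099.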
It is quite easy.
First, open the directory where your Docker project is saved.
Then run the command docker container ls, so you can see all the containers that exist on your server.
And then run the command docker exec -it [name of the Docker container] /bin/bash.
That’s it. After accessing the Docker container from the host as described, you can manipulate the container as you wish.
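A short sketch of such a session, assuming a container named my_app (the name is illustrative):
docker exec -it my_app /bin/bash
ls /var/log
exit
The first command opens a shell inside the container, ls runs there as on any Linux machine, and exit returns you to the host shell.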
Sometimes you need to move files between virtual machines and your local machine. For instance, to access logs. Or to bring some data to the local database. Let’s get into how to copy a file from host to Docker container and vice versa.
File management between Docker containers and the Docker host is quite similar to what developers are used to in the Linux terminal. The commands for moving and copying files are almost the same.
Let’s start with copying; it’ll give you an idea of everything else. Say you have to copy a file from the Docker container to the host. You can do it using the command docker cp:
docker cp [options] [path to file in container] [path to host]
docker cp [container name]:/usr/local/apache2/conf/httpd.conf /home/fedora/dockers/httpd/
The command above copies the Apache server’s configuration file from the Docker container to the local machine.
It works either way. If you need to copy a file from host to Docker container use the same command:
docker cp [options] [path to local file] [path to a container]
It’s dead simple. And it works not only for files but for directories too.
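For example, to push a local configuration file into a container (the container name my_app and the paths are illustrative):
docker cp ./httpd.conf my_app:/usr/local/apache2/conf/httpd.conf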
Usually, to move files from a Docker container to a host (or the other way around), developers do the following:
Get access to the Docker container.
Copy files to host.
Remove files from the Docker container.
There are ways to do it with commands like RUN mv in a Dockerfile, but they frequently fail to work as expected.
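A minimal sketch of the copy-then-remove flow above, assuming a container named my_app and an illustrative log path:
docker cp my_app:/app/logs/app.log ./app.log
docker exec my_app rm /app/logs/app.log
The first command copies the file to the host; the second deletes the original inside the container, completing the move.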
By this term users and developers usually mean one of two different things.
The first one is the IP address used by people who access the containerized application. Of course, users reach a Docker app through a domain name, but the IP also exists and works the same way as it does for websites and classic web applications. This is what’s also called the bridge address: the bridge between users and your application.
The second is the IP address of the Docker container itself, which different containers on one host use to communicate with each other, stay connected, and work seamlessly with shared data (databases, logs, etc.). Put simply, these containers with their different IPs sit on a common bridge network, and it is through the bridge IP that users can reach the app.
It is necessary to know the IP of certain Docker containers to make the interaction between them manageable.
There are three ways to get a Docker container’s IP address:
The easiest way is to use the command docker inspect. On the command line, write something like docker inspect [name of your docker container]. If you don’t know the name of the container you need to inspect, use the command docker ps, which will show you all the containers running at the moment. Also, instead of a specific container, you can inspect the whole network (for example, docker network inspect bridge) to see the IPs of every container.
Another method is docker exec. On the command line, write docker exec [name of the container] cat /etc/hosts. The output ends with a line that pairs the container’s IP address with its hostname.
The third and final method is using docker exec from inside the Docker container. It is the same command we use for accessing containers, with a small addition: write docker exec -it [name of the container] /bin/bash. After that, you are working inside the container. To get its IP address, run ip -4 -o address, and the command line will return the IP of the container you’re in. It is also possible to get data about other containers: run ping [name of another container], and it will show the IP address of the pinged resource.
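If you only need the address itself, docker inspect also accepts a Go-template filter. A sketch, with my_app as an illustrative container name:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_app
This prints just the container’s IP on whatever network(s) it is attached to.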
So, here we are. Those are the basics of Docker: simple commands to launch it, move files around, and so on. But the coolest thing is that these commands are all you need, because services like Hostman automate deployment and basic setup.
You just need to choose a web server configuration on the Hostman website and connect it to your GitHub, GitLab, or Bitbucket account. It will cost you $5.5/month, and you can try all the functions for free during the 7-day trial period.