How to Use Dockerfile
Emmanuel Oyibo
Technical writer
Docker
18.09.2024
Reading time: 7 min

A Dockerfile is a simple text file that guides the creation of Docker images. It specifies the operating system, software, and settings your application needs.

Dockerfiles automate the image-building process and ensure consistent behavior across different environments, which makes it easier for teams to share and deploy applications.

In this guide, we’ll walk you through creating and using a Dockerfile. We’ll cover common commands and best practices for writing Dockerfiles.

Setting Up a Basic Dockerfile

Follow the steps below to create a simple Dockerfile.

1. Create a New Directory

This will keep your Dockerfile and related files organized. Open your terminal and run:

mkdir my_docker_project
cd my_docker_project

2. Create the Dockerfile

Inside the directory, create a file named Dockerfile. Note the capital D and the lack of a file extension; this is the name Docker looks for by default. The file will contain the instructions for building your Docker image.

Use vim or any other text editor to create and open the file:

vim Dockerfile

3. Specify the Base Image

Every Dockerfile begins with a base image, which serves as the foundation for the image you're building.

For example, to create a lightweight container, use Alpine, which is a minimal Linux distribution:

FROM alpine:latest

4. Add the RUN Command

Next, use the RUN command to install the necessary software inside the container. The RUN command executes while the image is being built.

For example, to install curl, add the following line:

RUN apk add --no-cache curl

5. Set the Default Command

Now, define the CMD instruction. It sets the default action the container performs when it starts.

In the example below, the container will display “Hello, World!” when run:

CMD ["echo", "Hello, World!"]

Putting it all together, your Dockerfile should look like this:

FROM alpine:latest
RUN apk add --no-cache curl
CMD ["echo", "Hello, World!"]

Your Dockerfile defines a simple Docker image with a base OS, installed software, and a default command to run.
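
To try the image out, build it and run it. The tag hello-alpine below is just an example name:

docker build -t hello-alpine .
docker run hello-alpine

The container prints “Hello, World!” and then exits.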

Common Dockerfile Instructions

A Dockerfile consists of specific instructions that tell Docker how to build an image. Each instruction represents a command that Docker runs during the build process.

Here are some of the most common Dockerfile instructions that are essential for building images.

FROM

Every Dockerfile starts with the FROM instruction. It tells Docker which base image to use when creating the new image, and it must be the first instruction in the file (only ARG may appear before it).

For example:

FROM ubuntu:latest

In this case, Docker will use the latest version of Ubuntu as the base for the image.

RUN

The RUN instruction enables you to execute commands inside the container while building the image. This is useful for installing packages or running scripts.

For example, to install Node.js:

RUN apt-get update && apt-get install nodejs -y

Docker executes this command during the build, so Node.js is baked into the resulting image.
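
On Debian-based images, it's common practice to clean the package cache in the same RUN instruction so the intermediate files never land in an image layer. A variation of the command above:

RUN apt-get update && apt-get install -y nodejs && rm -rf /var/lib/apt/lists/*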

CMD

The CMD instruction defines the default command that runs when a container starts. If a Dockerfile contains more than one CMD, only the last one takes effect.

For instance, if you want the container to start a web server:

CMD ["node", "app.js"]

The above command tells Docker to run app.js using Node.js when the container starts.
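
Because CMD only provides a default, you can override it at runtime by passing a different command after the image name. Assuming an image called my_image built from such a Dockerfile:

docker run my_image node --version

Here node --version replaces the default node app.js for that one container.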

COPY

This instruction copies files from your local machine into the Docker container. COPY is important when you need to include your application code inside the container.

For example:

COPY ./app /usr/src/app

This command copies the contents of the app directory on your machine into the /usr/src/app directory in the container.

WORKDIR

Use the WORKDIR instruction to set the working directory for the container. Subsequent instructions such as RUN, COPY, and CMD execute relative to this directory. For example:

WORKDIR /usr/src/app

This instruction sets the working directory inside the container to /usr/src/app.

EXPOSE

The EXPOSE instruction signals that the container listens on a specific network port at runtime. For instance, if your application listens on port 8080, you can add:

EXPOSE 8080

Note that EXPOSE is documentation only; it does not actually publish the port. To reach the application from your host, you still need to publish the port with the -p flag when you run the container.
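
For example, to publish the documented port when starting a container (my_image is a placeholder name):

docker run -p 8080:8080 my_image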

ENTRYPOINT

The ENTRYPOINT instruction configures a container to run as an executable. It defines the main command that will execute when the container starts.

Unlike CMD, which is replaced entirely by any arguments you pass to docker run, ENTRYPOINT stays fixed: runtime arguments are appended to it rather than replacing it.

For example, to make sure your container always runs a backup script:

ENTRYPOINT ["sh", "/usr/local/bin/backup.sh"]

This ensures that the container always runs the backup.sh script. But you can still add options or arguments when you run the container.
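
A common pattern pairs ENTRYPOINT with CMD so the script receives a default argument that users can override. A sketch, assuming backup.sh accepts a mode flag (the flags here are hypothetical):

ENTRYPOINT ["sh", "/usr/local/bin/backup.sh"]
CMD ["--full"]

Running docker run my_backup executes backup.sh --full, while docker run my_backup --incremental substitutes the override.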

Building Docker Images From a Dockerfile

Now that you understand the basics of Dockerfile instructions, let’s build a Docker image for a Node.js application.

First, create a simple app.js file for the Node.js app. Add the following content:

const http = require('http');
const port = process.env.PORT || 3000;

const requestHandler = (request, response) => {
  response.end('Hello, Docker!');
};

const server = http.createServer(requestHandler);
server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

Step 1: Write the Dockerfile

Start by creating a Dockerfile for your Node.js app. Open your project directory and create the Dockerfile:

vim Dockerfile

Include the following in the Dockerfile:

  • Base Image: Use a Node.js base image:

FROM node:14

  • Set Working Directory: Define the directory where the app files will be stored inside the container:

WORKDIR /usr/src/app

  • Copy Application Files: Move the app.js file into the container:

COPY app.js .

  • Expose Port: Make port 3000 available for the app:

EXPOSE 3000

  • Default Command: Set the container to run the Node.js app when it starts:

CMD ["node", "app.js"]

Your completed Dockerfile should look like this:

FROM node:14
WORKDIR /usr/src/app
COPY app.js .
EXPOSE 3000
CMD ["node", "app.js"]

Step 2: Build the Docker Image

Once you have the Dockerfile ready, build the Docker image with the below command:

docker build -t my_node_app .

Docker reads the instructions in the Dockerfile and builds the image. The -t flag tags the image with a name, and the trailing dot tells Docker to use the current directory as the build context.

Step 3: Verify the Docker Image

After the build completes, check that the image was successfully created by listing all the available images:

docker images

Look for my_node_app in the list to confirm that the image exists.

Step 4: Run the Docker Image

Now, run the Docker image to create a container. Use the docker run command and map the container’s port to your local machine:

docker run -p 3000:3000 my_node_app

This starts the container, and you can access the app by opening a browser and navigating to http://localhost:3000.

You should see the “Hello, Docker!” message from the Node.js app.
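
If you'd rather keep the container running in the background, you can start it in detached mode and test it with curl (the container name my_node_container is just an example):

docker run -d -p 3000:3000 --name my_node_container my_node_app
curl http://localhost:3000

The curl command should print the Hello, Docker! message.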

Best Practices for Writing Dockerfiles

Following best practices when writing Dockerfiles improves efficiency and security. Here are some essential tips:

  • Avoid hardcoding sensitive data; use environment variables instead

  • Specify exact versions of dependencies for consistency

  • Optimize build caching by ordering instructions carefully

  • Clean up package caches and temporary files after installing to keep the image small

  • Use multi-stage builds to separate build and runtime environments (see the sketch below)

These practices help create optimized, maintainable Docker images.
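
To illustrate the last point, here is a minimal multi-stage sketch for a Node.js app: the first stage installs dependencies and builds, and only the build output is copied into a slimmer runtime image. The paths and the npm run build script are assumptions about a typical project layout:

FROM node:20 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/dist ./dist
COPY --from=build /usr/src/app/node_modules ./node_modules
CMD ["node", "dist/app.js"]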

Troubleshooting Common Dockerfile Issues

Here are a few common Dockerfile issues and how to fix them:

Large Image Size

Large images result from unnecessary files or too many layers. Use smaller base images like Alpine and combine RUN instructions. Add a .dockerignore file to exclude unwanted files.
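
A minimal .dockerignore for a Node.js project might look like this (the entries are typical examples; adjust them to your project):

node_modules
npm-debug.log
.git
.env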

Cache Not Working

If builds take too long, you may be defeating Docker's layer cache. Place instructions that change frequently (such as COPY of your source code) as late as possible so the earlier layers stay cached.
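
For a Node.js app, the usual pattern is to copy the dependency manifests first so the install step is re-run only when they change; a sketch:

COPY package*.json ./
RUN npm install
COPY . .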

Dependency Errors

Specify exact versions for base images and dependencies to avoid build failures due to missing or incorrect packages.

Permission Denied

Fix permission issues by adjusting file modes or ownership in a RUN instruction (chmod or chown), or by running the container as a user with the right privileges.
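
For example, if a copied script isn't executable inside the image (the path is illustrative):

RUN chmod +x /usr/local/bin/backup.sh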

Container Crashes

If containers crash on startup, verify your CMD or ENTRYPOINT commands. Check the logs for details using:

docker logs <container_name>

Understanding and addressing these common Dockerfile issues will keep your builds running smoothly and your images efficient.

Conclusion

Dockerfiles provide a clear and easy way to build and manage Docker images. They help automate the process of setting up applications inside containers and ensure they work the same everywhere.

Understanding how to use Dockerfiles and following best practices will help you create efficient and reliable containerized applications.


