A Dockerfile is a simple text file that guides the creation of Docker images. It specifies the operating system, software, and settings your application needs.
Dockerfiles also automate the image-building process and ensure consistent behavior across different environments, which makes it easier for teams to share and deploy applications.
In this guide, we’ll walk you through creating and using a Dockerfile. We’ll cover common commands and best practices for writing Dockerfiles.
Follow the steps below to create a simple Dockerfile.
First, create a project directory. This keeps your Dockerfile and related files organized. Open your terminal and run:
mkdir my_docker_project
cd my_docker_project
Inside the directory, create a file named Dockerfile. This file will contain the instructions for building your Docker image.
Use vim or any other text editor to create and open the file:
vim Dockerfile
Next, choose a base image with the FROM instruction. This image serves as the foundation for your Docker image.
For example, to create a lightweight container, use Alpine, which is a minimal Linux distribution:
FROM alpine:latest
Next, use the RUN instruction to install the necessary software inside the container. RUN commands execute while the image is being built.
For example, to install curl, add the following line:
RUN apk add --no-cache curl
Now, define the CMD instruction. This sets the default action the container performs when it starts.
In the example below, the container prints “Hello, World!” when it runs:
CMD ["echo", "Hello, World!"]
Putting it all together, your Dockerfile looks like this:
FROM alpine:latest
RUN apk add --no-cache curl
CMD ["echo", "Hello, World!"]
Your Dockerfile defines a simple Docker image with a base OS, installed software, and a default command to run.
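To try it out, you can build and run the image from the directory that contains the Dockerfile. The tag hello-world-image below is just an example name:

docker build -t hello-world-image .
docker run --rm hello-world-image

The container prints “Hello, World!” and exits; the --rm flag removes the stopped container afterward.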
A Dockerfile consists of specific instructions that tell Docker how to build an image. Each instruction represents a command that Docker runs during the build process.
Here are some of the most common Dockerfile instructions that are essential for building images.
Every Dockerfile starts with the FROM instruction. This tells Docker which base image to use when creating the new image. You must always use it as the first instruction.
For example:
FROM ubuntu:latest
In this case, Docker will use the latest version of Ubuntu as the base for the image.
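In practice, you may prefer to pin a specific tag instead of latest so that builds stay reproducible. For example, assuming you want to target Ubuntu 22.04:

FROM ubuntu:22.04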
The RUN instruction enables you to execute commands inside the container while building the image. This is useful for installing packages or running scripts.
For example, to install Node.js:
RUN apt-get update && apt-get install -y nodejs
Docker runs this command while building the image, so Node.js is already installed in the resulting image.
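On Debian- and Ubuntu-based images, it is also common to clean up the package index in the same RUN instruction so the layer stays small. A minimal sketch of that pattern:

RUN apt-get update && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*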
The CMD instruction defines the default command that runs when a container starts. You can use only one CMD instruction in a Dockerfile.
For instance, if you want the container to start a web server:
CMD ["node", "app.js"]
The above command tells Docker to run app.js using Node.js when the container starts.
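Because CMD only supplies a default, you can override it when starting the container. For example, assuming an image tagged my_node_app (like the one built later in this guide), the following runs node --version instead of app.js:

docker run my_node_app node --version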
The COPY instruction copies files from your local machine into the image. COPY is essential when you need to include your application code inside the container.
For example:
COPY ./app /usr/src/app
This command copies the contents of the app directory on your machine into the /usr/src/app directory in the container.
Use the WORKDIR instruction to set the working directory for the container. This defines where Docker will execute subsequent instructions. For example:
WORKDIR /usr/src/app
This instruction sets the working directory inside the container to /usr/src/app.
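Relative paths in later instructions are resolved against this directory. A short sketch, reusing the app.js file from the earlier example:

WORKDIR /usr/src/app
# Because of WORKDIR, app.js ends up at /usr/src/app/app.js
COPY app.js .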
The EXPOSE instruction signals to Docker that the container will listen on a specific network port at runtime. For instance, if your application listens on port 8080, you can add:
EXPOSE 8080
This documents that the container listens on port 8080. Note that EXPOSE does not publish the port by itself; you still map it with the -p flag when you run the container.
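For example, to publish the exposed port when running a container from a hypothetical image named my_image:

docker run -p 8080:8080 my_image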
The ENTRYPOINT instruction configures a container to run as an executable. It defines the main command that executes when the container starts.
Unlike CMD, which is easy to override when you start the container, ENTRYPOINT is meant to define a command that always runs. You can still pass additional arguments when running the container, and they are appended to the ENTRYPOINT command.
For example, to make sure your container always runs a backup script:
ENTRYPOINT ["sh", "/usr/local/bin/backup.sh"]
This ensures that the container always runs the backup.sh script, but you can still add options or arguments when you run the container.
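For instance, assuming the image is tagged backup-image and backup.sh accepts a target directory (both names are hypothetical), the following would execute sh /usr/local/bin/backup.sh /data inside the container:

docker run backup-image /data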
Now that you understand the basics of Dockerfile instructions, let’s build a Docker image for a Node.js application.
First, create a simple app.js file for the Node.js app. Add the following content:
const http = require('http');
const port = process.env.PORT || 3000;

const requestHandler = (request, response) => {
  response.end('Hello, Docker!');
};

const server = http.createServer(requestHandler);
server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
Start by creating a Dockerfile for your Node.js app. Open your project directory and create the Dockerfile:
vim Dockerfile
Include the following in the Dockerfile:
Base Image: Use a Node.js base image:
FROM node:14
Set Working Directory: Define the directory where the app files will be stored inside the container:
WORKDIR /usr/src/app
Copy Application Files: Move the app.js file into the container:
COPY app.js .
Expose Port: Make port 3000 available for the app:
EXPOSE 3000
Default Command: Set the container to run the Node.js app when it starts:
CMD ["node", "app.js"]
Your Dockerfile should look like this:
FROM node:14
WORKDIR /usr/src/app
COPY app.js .
EXPOSE 3000
CMD ["node", "app.js"]
Once you have the Dockerfile ready, build the Docker image with the following command:
docker build -t my_node_app .
Docker will read the instructions in the Dockerfile and build the image.
After the build completes, check that the image was successfully created by listing all the available images:
docker images
Look for my_node_app in the list to confirm that the image exists.
Now, run the Docker image to create a container. Use the docker run command and map the container’s port to your local machine:
docker run -p 3000:3000 my_node_app
This starts the container, and you can access the app by opening a browser and navigating to http://localhost:3000.
You should see the “Hello, Docker!” message from the Node.js app.
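You can also check it from the command line:

curl http://localhost:3000

This should print “Hello, Docker!”.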
Following best practices when writing Dockerfiles improves efficiency and security. Here are some essential tips:
Avoid hardcoding sensitive data; use environment variables instead
Specify exact versions of dependencies for consistency
Optimize build caching by ordering instructions carefully
Clean up package caches and temporary files after installing packages to keep the image small
Use multi-stage builds to separate build and runtime environments
These practices help create optimized, maintainable Docker images.
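As a rough illustration of several of these practices, here is a sketch of a multi-stage Node.js Dockerfile. It assumes a project with a package.json and a build script that outputs to a dist directory, which is not part of the example app above:

# Build stage: pinned base image tag for consistency
FROM node:20-alpine AS build
WORKDIR /usr/src/app
# Copy dependency manifests first so this layer stays cached until they change
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only what the app needs to run
FROM node:20-alpine
WORKDIR /usr/src/app
# Read configuration from the environment instead of hardcoding it
ENV NODE_ENV=production
COPY --from=build /usr/src/app/dist ./dist
COPY --from=build /usr/src/app/node_modules ./node_modules
CMD ["node", "dist/app.js"]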
Here are a few common Dockerfile issues and how to fix them:
Large images result from unnecessary files or too many layers. Use smaller base images like Alpine, combine RUN instructions, and add a .dockerignore file to exclude unwanted files.
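A minimal .dockerignore for a Node.js project might look like this (adjust it to your own project):

node_modules
npm-debug.log
.git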
If builds take too long, Docker’s layer cache might not be getting used. Place instructions that change frequently (for example, the COPY of your application source) as late as possible so earlier layers stay cached.
Specify exact versions for base images and dependencies to avoid build failures due to missing or incorrect packages.
Fix permission issues by adding chmod commands in a RUN instruction.
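For example, if a script copied into the image is not executable, a fix could look like the following (start.sh is a hypothetical file name):

COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh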
If containers crash at startup, verify your CMD or ENTRYPOINT commands. Check the logs for details using:
docker logs <container_name>
Understanding and addressing these common Dockerfile issues will help keep your Docker builds running smoothly and efficiently.
Dockerfiles provide a clear and easy way to build and manage Docker images. They help automate the process of setting up applications inside containers and ensure they work the same everywhere.
Understanding how to use Dockerfiles and following best practices will help you create efficient and reliable containerized applications.