Installing and Using Portainer

Hostman Team
Technical writer
Docker
07.05.2024
Reading time: 12 min

Portainer is a container management tool that seamlessly works with both Docker and Kubernetes.

It is available in two versions:

  • a free and open-source Community Edition;

  • a paid Business Edition with additional features for corporate clients.

In this article, we will focus on installing Portainer on Ubuntu 22.04 with Docker and using the Community Edition. Although we will use Ubuntu as an example, most of the steps are similar for other operating systems, making this tutorial applicable to a variety of use cases.

Portainer is excellent for both beginners and professionals. Its intuitive graphical interface greatly simplifies management, making container technology accessible even to those new to the field. Experienced users will also find a rich selection of options for fine-tuning and personalization.

This article covers installation, an overview of the main functions and settings, connecting an external server as an environment, and a practical example of deploying WordPress to that server using Portainer.

Prerequisites

  • A computer or a cloud server running a Linux-based OS such as Ubuntu or Debian.

In this article, we'll demonstrate installing Portainer on a local machine; however, if you plan to use it with a team, the application can also be installed on a cloud server, providing centralized management and accessibility to all team members.

Installing Portainer in Docker

Step 1: Install Docker and Docker Compose

Before installing Portainer, make sure Docker is installed on your system. If it is, you can skip this step. Otherwise, run the following commands to install it:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh

After installation, check the versions by running the commands:

docker -v
docker compose version

This will confirm successful installation and show the versions of installed programs.
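
If you want to go one step further, you can also confirm that the Docker daemon itself is working by launching a disposable test container (prefix the command with sudo if your user is not in the docker group):

docker run --rm hello-world

If the daemon is healthy, Docker pulls the hello-world image and prints a short confirmation message.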

Step 2: Create a working directory

Create a directory for the application in /opt and move to it:

cd /opt
sudo mkdir hostmanportainer
cd ./hostmanportainer

Step 3: Create a configuration file

Now create a docker-compose.yml file in the hostmanportainer directory. This file will describe the startup configuration. Use nano or any other text editor to create the file:

sudo nano docker-compose.yml

Paste the following content into the file:

version: "3.3"
services:
	hostmanportainer:
		image: portainer/portainer-ce:latest
		container_name: hostmanportainer
		environment:
			- TZ=Europe/London
		volumes:
			- /var/run/docker.sock:/var/run/docker.sock
			- /opt/hostmanportainer/portainer_data:/data
		ports:
			- "8000:8000"
			- "9443:9443"
		restart: always

Description of parameters:

  • version: "3.3": Indicates the Compose file format version. Version 3.3 is suitable for most modern applications.

  • services: This section describes the services to start.

  • hostmanportainer: Service name. Used as an identifier.

  • image: portainer/portainer-ce:latest: Specifies the image to be used. The latest version of Community Edition is used here.

  • container_name: hostmanportainer: Assigns a name to the container to make it easier to identify.

  • environment: Allows you to set environment variables. For example, TZ=Europe/London sets the time zone of the container.

  • volumes:

    • /var/run/docker.sock:/var/run/docker.sock allows Portainer to communicate with Docker on your host;

    • /opt/hostmanportainer/portainer_data:/data creates a persistent data store.

  • ports:

    • "8000:8000" and "9443:9443" open the corresponding ports to access the Portainer. 9443 is used for HTTPS connection.

  • restart: always: Ensures that the container will automatically restart when necessary, for example after a server reboot.
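
Before launching, you can optionally ask Docker Compose to parse the file and print the resolved configuration. This is a quick way to catch indentation mistakes (YAML requires spaces, not tabs) or typos in keys:

cd /opt/hostmanportainer
docker compose config

If the file is valid, the command prints the normalized configuration; otherwise, it reports where parsing failed.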

Step 4: Launch

After creating the configuration file, run Portainer with Docker using the command:

docker compose up -d
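
To check that the container started correctly, inspect its status and logs (the container name hostmanportainer comes from the compose file above):

docker compose ps
docker logs hostmanportainer

docker compose ps should list the container with an Up status.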

Step 5: Access the Interface

Portainer is now running and accessible at https://<ip_or_localhost>:9443. Open this address in your browser to access the web interface.
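
If the page does not open, for example when Portainer runs on a remote server, you can first confirm locally on the host that the service is answering. The -k flag tells curl to accept the self-signed certificate that Portainer generates by default:

curl -k -I https://localhost:9443

An HTTP status line in the response confirms the service is up; after that, make sure port 9443 is allowed through your firewall or your cloud provider's security rules.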

Step 6: Create an Administrator Account

When you first log in, you will be asked to create an administrator account. Please note that the password requires a minimum of 12 characters. After completing the registration process, you will have access to the settings and container management functionality in the interface.
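
If the administrator password is ever lost, Portainer provides a documented helper image for resetting it. Below is a minimal sketch assuming the bind-mounted data directory from the compose file above; verify the exact procedure against the current Portainer documentation before relying on it:

# Stop Portainer first so its database is not in use
cd /opt/hostmanportainer && docker compose down
# Run the reset helper against the data directory; it prints a new admin password
docker run --rm -v /opt/hostmanportainer/portainer_data:/data portainer/helper-reset-password
# Start Portainer again and log in with the printed password
docker compose up -d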

General Settings

To access the settings, go to the Settings section. Here we will cover the key settings that are most important for the basic configuration. We recommend reading the official documentation for a deeper understanding of all available settings.

  • Application settings. In this section, you can configure settings such as the frequency of creating state snapshots and sending anonymous application usage statistics.

  • App Templates. Here you can specify the URL of a JSON file with templates for quickly deploying containers. You can also use pre-installed templates, making launching new applications easier.
  • SSL certificate. This section allows you to upload your own SSL certificates for a secure connection. While this is not required for a local installation, attaching your own SSL certificate increases security when deploying Portainer on a remote server.

  • Back up Portainer. This section allows you to create a backup copy of the application settings and configuration. It is useful for ensuring data security and simplifying migration to other systems.

  • Authentication. Here you can configure the user session duration and select the authentication method. The following methods are available in Community Edition: Internal (default), LDAP, and OAuth. Note that OAuth support in Community Edition is limited: ready-made integrations with providers such as Microsoft, Google, and GitHub are not included, so OAuth has to be configured manually. When using Internal authentication, you can change the password requirements, such as reducing the minimum number of characters.

To change your password, go to the upper right corner of the screen, click on your account name and select My account. This will allow you to update your password and other personal settings.

After studying the basic settings, let's move on to other important sections available in the left menu of the interface.

  • Users. This section is for managing users. It is especially useful for working with a team as it allows you to restrict access to resources. Here you can create and manage individual users. In addition, in the Teams section you can create teams with different users for more granular access control. It should be noted that more advanced role settings are only available in the Business Edition.

  • Registries. In the Registries section, users can configure access to image registries. The management interface facilitates integration with popular registries such as Docker Hub, AWS ECR, Quay.io, ProGet, Azure, and GitLab, allowing you to efficiently manage container images directly through the graphical user interface.

  • Environments. The key section of Portainer for connecting to and managing external servers or environments. You can manage a variety of environments here, including Docker, Docker Swarm, Kubernetes, and ACI. Nomad is also available in the Business Edition. This section allows Portainer Server to manage multiple environments, simplifying scaling and infrastructure management.

Adding a new environment

To demonstrate the process of adding a new environment, we will connect a server on Ubuntu 22.04 with Docker pre-installed. This can be either a new server or a server on which containers are already running.

  1. Start by clicking the Add environment button in the Environments section.
  2. Select Docker Standalone and use the setup wizard by clicking Start Wizard.

  3. During the setup process, select Agent and run the following command on the server that you plan to connect as an environment:
docker run -d \
-p 9001:9001 \
--name portainer_agent \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
portainer/agent:2.19.4

This command will launch the Portainer Agent, allowing Portainer Server to connect to the server and manage its containers. (A quick troubleshooting check for the agent is shown after the steps below.)

  4. After successfully installing and launching the agent on the server, return to the web interface and complete the connection process by specifying the name of the environment and its address in the format server_ip:9001.
  5. Click Connect to complete the connection. After successfully adding an environment, a pop-up notification Environment created will be displayed in the interface.
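
If the connection fails, a quick check on the environment server usually narrows the problem down: make sure the agent container is running and that port 9001 is reachable from the Portainer host. The ufw line below is only needed if that firewall is active; adjust it to whatever firewall you actually use:

docker ps --filter "name=portainer_agent"
ss -tlnp | grep 9001
sudo ufw allow 9001/tcp

The first command should list portainer_agent with an Up status, and the second should show a process listening on port 9001.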

Environments Settings

When you go to the Home page, you will see two environments: local (the device where the application is running) and the previously added server. After selecting the previously added server, the menu on the left will update, adding management functions for the environment.

Environment Management

  • Images. The Images section displays all available images in the system. Here you can delete images individually or en masse, as well as download new images using the Pull image option.

  • Networks. The Networks page displays all available networks. Using an intuitive setup wizard accessible through Add Network, users can create new networks, expanding the connectivity between containers.

  • Volumes. The Volumes section contains information about all volumes. This section allows you to view existing volumes, delete them, or create new ones using the Add volume setup wizard.
  • Containers. The Containers section provides extensive container management capabilities. In this section, all existing containers are visible. You can delete, suspend, activate, or restart them. The Quick Actions menu provides additional functions, including viewing container information, statistics, and console access.

To create a new container, click Add container. For example, let's create a container with Nginx. Specify the name and the nginx image, then set up the network ports: click publish a new network port and enter port 9090 for the host and port 80 for the container.

Next, click on the Deploy the container button below and wait until the container is deployed.

Upon completion, you will be redirected to the Container list page. After deploying the container, going to http://server_ip:9090 will show the Nginx welcome page.
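
You can also confirm this from the command line, where server_ip is the address of the environment server the container was deployed to:

curl -I http://server_ip:9090

A successful deployment returns an HTTP 200 status line together with a Server: nginx header.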

Advanced Features: App Templates and Stacks

The App Templates section is a collection of pre-configured templates for deploying common applications and services. These templates are designed to simplify the process of creating new containers by minimizing the need for manual configuration. Users can choose from a variety of available templates that range from basic web servers to complex multi-tier applications.

When using a template, it is enough to specify some basic parameters, such as the container name, network settings and, in some cases, specific settings such as passwords or environment variables. This makes the App Templates section particularly useful for quickly testing new ideas and utilities, as well as learning and experimenting with new technologies.

Stacks are an efficient way to manage groups of containers. They are defined through docker-compose files. This simplifies the management of complex applications and provides automation and consistency in deployment.

Use Case: Using Stacks for WordPress Deployment

A special feature of Stacks is the choice of configuration definition method: you can use the built-in editor to directly write or edit Docker Compose files, upload a ready-made docker-compose.yml file, or even connect a git repository to update and deploy containers automatically.

Now let's put this technology into practice. Using WordPress deployment as an example, we'll show you how to use Stacks to create and manage multi-container applications. This example will help you understand how to simplify and automate your application deployment processes using Stacks.

  1. In the Stacks section, click Add stack to open the configurator.
  2. Using the Web editor, describe the application configuration in YAML format. For WordPress and MariaDB database, the configuration might look like this:
services:
  db:
    image: mariadb:10.6.4-focal
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - 3306
      - 33060
    environment:
      - MYSQL_ROOT_PASSWORD=hostmantest
      - MYSQL_DATABASE=hostman_wp
      - MYSQL_USER=hostman
      - MYSQL_PASSWORD=password
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=hostman
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_NAME=hostman_wp
volumes:
  db_data:
  3. Environment variables can be placed in a separate section. In Environment variables, select Advanced mode and specify the variables:
MYSQL_ROOT_PASSWORD=hostmantest
MYSQL_DATABASE=hostman_wp
MYSQL_USER=hostman
MYSQL_PASSWORD=password
WORDPRESS_DB_HOST=db
WORDPRESS_DB_USER=hostman
WORDPRESS_DB_PASSWORD=password
WORDPRESS_DB_NAME=hostman_wp
  4. Remove the environment sections from the main YAML file to avoid duplication:
services:
  db:
    image: mariadb:10.6.4-focal
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
volumes:
  db_data:

  5. Click Deploy the stack. If the deployment is successful, you will shortly be redirected to the Stacks list page, where our WordPress stack will be displayed.

Test your WordPress installation by going to http://server_ip:80. You should see the WordPress setup screen, confirming a successful deployment.
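
As with the Nginx example, you can confirm from the command line that the stack is answering. On a fresh installation, WordPress typically redirects the first request to its setup wizard:

curl -sI http://server_ip | head -n 5

An HTTP status line in the output (usually 200, or a 302 redirect to the WordPress installer) confirms that the wordpress container is reachable.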

Conclusion

In our review, we covered all the important aspects of working with Portainer, from its installation with Docker to the details of deploying applications through Stacks. We took a detailed look at the tool's various features and settings, including user management, image repository handling, and environment coordination. The WordPress deployment example clearly showed how Portainer simplifies working with complex systems, making the management process more efficient. The article provided a comprehensive understanding of Portainer as a solution to simplify and streamline application deployment processes.
