
Docker Complete Guide: All You Need to Know About Docker and Docker Containers

Hostman Team
Technical writer
Infrastructure

Want to know what a Docker container is? In this guide, we cover the essentials of Docker: what containers are, how to create, run, and manage them, and how Docker images differ from containers.

If you want to really understand the thought process behind Docker, there's no better source than its creator: Solomon Hykes, founder and CTO of dotCloud. Although his YouTube introduction was uploaded more than seven years ago, it is still perfectly relevant.

That said, you might not have 47 minutes to spare for the full presentation, especially since it's fairly technical and may call for multiple viewings. That's why we've created this quick guide for you.

What is Docker?

Docker is a lightweight, open-source virtualization tool.

Here’s the description on Docker’s official website:

“Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy and portable application development - desktop and cloud. Docker’s comprehensive end to end platform includes UIs, CLIs, APIs and security that are engineered to work together across the entire application delivery lifecycle.”

Architecture

Docker runs at the operating system level. It automates the deployment of applications in Linux containers and allows you to package an application with all of its dependencies (code, runtime, libraries, environment variables, configuration files) into a container.

In his presentation, Solomon breaks down the unique proposition that distinguishes Docker from other solutions out there:

"A lot of [container] tools use containers as miniature servers… just like a VM [virtual machine] but way faster…. We [Docker] use containers as a unit of software delivery."

What are Docker containers and what are they used for?

A container is an isolated environment whose processes do not interfere with operations outside of it.


A container uses only a portion of the operating system: it shares the host kernel and holds just the application and its data. This significantly reduces the consumption of system resources, because resources are allocated to the application and its data exclusively rather than to an entire guest operating system, as in the case of a virtual machine.

This makes Docker containers particularly suited to rapid application deployment, ease of testing, maintenance, and troubleshooting, while enhancing security.

One practical benefit of Docker containers is that they simplify work in large development teams: tasks can easily be split up and even implemented in different programming languages.

Common use cases for Docker include:

  • Automating the packaging and deployment of applications

  • Creating lightweight, private PaaS environments

  • Automating testing and continuous integration/deployment

  • Deploying and scaling web apps, databases and backend services

How does a container work?

There are five basic tenets of Docker containers:

  1. The lifespan of a container is tied to the lifespan of the process contained within it.

  2. Inside the container, this process has pid = 1, meaning it is the parent process from which all other processes in the container are started.

  3. Alongside the process with pid = 1, you can spawn as many other processes as you wish (within the limitations of the OS). Killing (restarting) the process with pid = 1 stops the container. (see item 1)

  4. Inside the container, you will see the usual FHS-compliant directory layout, which matches that of the distribution the container image is based on.

  5. The data created inside the container remains in the container's writable layer and is not saved anywhere else. The host OS has access to this layer, but deleting the container discards all changes. For this reason, persistent data is not stored in containers but is moved out to the host OS, for example with volumes (see the example below).
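
To see tenets 1 and 5 in practice, here is a minimal sketch; the container name demo and the volume name mydata are arbitrary examples:

# A file written in the container's writable layer disappears with the container:
docker run --name demo ubuntu bash -c "echo hello > /tmp/data.txt"
docker rm demo

# With a named volume, the data survives container removal:
docker run --rm -v mydata:/data ubuntu bash -c "echo hello > /data/data.txt"
docker run --rm -v mydata:/data ubuntu cat /data/data.txt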

How to create a Docker container

In this guide, we'll show you how to install Docker on Ubuntu 20.04, one of the most popular Linux distributions.

We recommend using the official Docker packages for installation; you can find all the necessary details in the official documentation. No complicated configuration is required at this stage. Once the installation is finished, start the service, check its status, and set it to start at boot:

sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
    "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

sudo systemctl start docker

sudo systemctl enable docker

systemctl status docker


Next, run a test image to check that Docker is working correctly:

docker run hello-world

You should see the following message:

“Hello from Docker! This message shows that your installation appears to be working correctly.”


How to name Docker containers

To avoid having to recognize and remember the container ID, you can assign a unique name to each container by using the --name command-line option when creating it. See the example below:

docker run --name myname ubuntu cat /etc/debian_version

Once you’ve done this, you can start using the container (execute the start, stop, remove, top, stats commands) by referring to it by name, for example:

docker start myname – starts the container

docker stats myname – displays resource usage statistics

docker top myname – displays the processes running in the container

How to start and restart Docker containers?

To start a container in detached (daemon) mode, use the -d option:

docker run -td ubuntu bash

To reconnect to a running container, you need to know its ID or name: run the docker ps command to find it, then use docker attach with the container's ID or name.

docker attach CONTAINER

How to stop a Docker container

A container usually terminates automatically when its main process completes. However, there might be instances where you want to stop the container yourself. The stop command performs a "soft" shutdown of the container, by default allowing 10 seconds for all processes to terminate:

docker container stop CONTAINER

If you want immediate termination, you can execute the kill command. However, in most situations, using stop is preferable.

docker container kill CONTAINER

Here’s the command for immediate termination of all running containers:

docker container kill $(docker ps -q)

How to delete a container

To delete a Docker container, use the following command:

docker container rm CONTAINER

Here’s how to remove all Docker containers that are not running:

docker container rm $(docker ps -a -q)
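
Docker also provides a built-in cleanup command for this; the line below removes all stopped containers in one step (the -f flag skips the confirmation prompt):

docker container prune -f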

How to run Docker containers

To interactively connect to the shell of the container and run commands, as in a regular Linux system, you need to create a container with the following settings:

docker container run -it ubuntu bash

The structure of this command is as follows:

  • -i starts an interactive session and keeps standard input open.

  • -t allocates a pseudo-TTY so you can work with the shell.

  • ubuntu is the image used to create the container.

  • bash is the command run inside the Ubuntu container.

After starting the container with these options, you are dropped into a shell inside the container. Use the exit command to leave the session and return to your host's terminal. This interrupts all container processes and stops the container:

exit

If you are interactively connected to a container and need to log out without interrupting the session, you can exit the console and return to your host's terminal by pressing Ctrl + P and then Ctrl + Q.
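
If the container is already running, another option is to open a separate shell in it with docker exec; exiting that shell does not stop the container:

docker exec -it CONTAINER bash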

There are several services that help users easily deploy Docker containers with just a few clicks. Hostman has swiftly earned a reputation for being the go-to option when it comes to deploying from GitHub, Bitbucket, or GitLab repositories.

How to SSH into a Docker container?

For SSH authentication, or when connecting remotely (for example, with rsync), the two main methods are a login-password pair (the password is typed at the console) and key-based authorization (a private-public key pair is created on the client machine and the public key is copied to the remote server). The first method cannot be used in scripts executed by cron, so for those cases you need to configure passwordless, key-based login.

Generating keys for the current user:

ssh-keygen -t rsa

After entering this command you’ll be asked some questions. You can just agree with all the default options.

Copy the public key to the remote server (you will be asked for the remote server's password once):

ssh-copy-id -i ~/.ssh/id_rsa user@ssh-server

Checking the ability to log in:

ssh user@ssh-server
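
To apply this to a container specifically, the container has to run an SSH server and publish port 22. The sketch below is only an illustration under those assumptions; the image, the demo user, and the published port 2222 are arbitrary examples rather than part of the setup above:

# Start an Ubuntu container with an SSH server and a test user (example only)
docker run -d --name ssh-demo -p 2222:22 ubuntu bash -c "apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd && useradd -m -s /bin/bash demo && echo 'demo:demo' | chpasswd && /usr/sbin/sshd -D"

# Copy your public key in, then connect over the published port
ssh-copy-id -i ~/.ssh/id_rsa -p 2222 demo@localhost
ssh -p 2222 demo@localhost

In day-to-day work, docker exec (shown earlier) is usually simpler than running an SSH server inside a container.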

How to list and inspect Docker containers

If you have several Docker containers and want to choose which one to work with, you can list them with the ls command. In addition to displaying the list, this command also shows useful information about each container. Without any options, it lists only running containers:

docker container ls


The -a option tells the command to list all containers, not just running ones, while the -s option displays the size of each container:

docker container ls -a


The inspect command displays a lot of useful information about a container:

docker container inspect CONTAINER

To display container logs, run the logs command:

docker container logs CONTAINER
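
To stream logs in real time, add the -f flag; --tail limits the output to the most recent lines:

docker container logs -f --tail 100 CONTAINER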

What is the difference between a Docker container and a Docker image?

Docker works with the following fundamental objects:

  • A container is an application environment. When a container is launched from an image containing the necessary configuration data, a new writable layer with a mutable file structure is created on top of that image. If you save the changes, a new image layer is created and the old layers remain unchanged.

  • An image is a static snapshot of a container's configuration state. An image consists of read-only layers; all changes happen in the topmost writable layer and are persisted only by creating a new image. Each image depends on one or more parent images (see the sketch below).
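
Here is a short sketch of this layering in practice, assuming a container named myname already exists (as in the earlier examples); docker commit saves the container's writable layer as a new image while leaving the original image untouched. The image name and tag are arbitrary examples:

docker commit myname myname-snapshot:v1

docker images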

Why are Docker images so valuable?

Docker images are valuable because they define the environment a container runs in; they are the starting point for working with Docker. The first step is usually to download an image from Docker Hub to your machine.
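
For example, to download the Ubuntu 20.04 image used earlier in this guide (the tag is optional; omitting it pulls the latest version):

docker pull ubuntu:20.04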

What benefits do containers offer?

When working with application containerization in Big Data, the following advantages of this technology are most significant:

  1. Standardization – because containers are built on open standards, they can run on all major Linux distributions, Microsoft Windows, and other popular operating systems;

  2. Portability – because a container is independent of the resources and architecture of the physical host it runs on, its image can easily be moved from one environment to another, supporting a continuous DevOps pipeline from development and testing to deployment (CI/CD);

  3. Isolation – an application in a container runs in an isolated environment and cannot interfere with the memory, processes, or files of the host OS or of other containers. This guarantees the isolation of processes inside the container and provides a degree of security.

  4. Reusability – all the components required to run the application are packaged into one image that can be run multiple times;

  5. Fast deployment – creating and launching a container is considerably less time-consuming than using a virtual machine instance or setting up a full-fledged working environment;

  6. Increased productivity – packaging each microservice of a complex system in a separate container, with one developer responsible for it, makes it possible to parallelize work tasks without mutual dependencies and conflicts;

  7. Simplified Monitoring – versioning container images makes it possible to track updates and prevent synchronization issues.

Summary

Supporting Big Data applications (yours or someone else's) that no longer fit in your head doesn't have to be a nightmare anymore.

With this handy guide to Docker, you’ll be able to cold-start an application on a new machine, with just a few clicks, in under a minute.

More importantly, with the reassurance that all of your data is safe, you’ll finally be able to focus exclusively on writing useful code instead of wasting time and energy on server-related troubleshooting.
