
What is OpenVPN and Why Should You Care About That?
Hostman Team
Technical writer

There are many VPN protocols, and OpenVPN is often counted among them. But to understand what OpenVPN really is, we should first take a closer look at the term "VPN protocol" itself.

Simply put, a protocol is a set of rules that defines how devices on a network communicate with each other. For example, when we browse the web we use the Hypertext Transfer Protocol (HTTP), which makes it possible to pass data from your computer to a website and back. VPN protocols are a special class: they describe how to build secure virtual private networks, for example a company network for passing confidential data between offices and employees.

OpenVPN is also considered a protocol because it establishes exactly this kind of private network, securing the data passed from one device to another.

Strictly speaking, though, it is more accurate to call OpenVPN an open-source implementation of VPN technology: a protocol together with the client and server software that builds a protected tunnel for sending data from one computer to another. OpenVPN reduces the risk of losing private information, and that risk is quite high, especially on public networks. That is why it is always recommended to connect to a VPN before logging into critical internet resources and applications (mail, banking apps, etc.).

Today, OpenVPN has a reputation as one of the most popular and secure tools for deploying virtual private networks (even though a newer protocol called WireGuard is trying hard to take that title away).

Another reason to use OpenVPN is its cross-platform nature. The protocol supports many platforms, including Linux, Windows, FreeBSD, macOS, and Solaris.

How does OpenVPN work?

OpenVPN creates a secure tunnel between the VPN client and the VPN server, providing a protected channel for passing any data. While you use the VPN, the outside world sees the IP address and geolocation of the VPN server rather than yours, so your actual location stays invisible. And because the traffic inside the tunnel is encrypted, cybercriminals cannot read your personal data or tamper with it in transit.
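To make the idea of a tunnel concrete, here is a minimal sketch of an OpenVPN client configuration. Everything in it is illustrative: vpn.example.com, the port, and the certificate file names are placeholders that a real server or provider would supply.

```
# client.ovpn — minimal client profile (illustrative values)
client
dev tun                       # create a routed IP tunnel interface
proto udp                     # transport protocol (see the UDP/TCP note below)
remote vpn.example.com 1194   # hostname and port of the VPN server
ca ca.crt                     # certificate authority that signed the server cert
cert client.crt               # this client's certificate
key client.key                # this client's private key
cipher AES-256-GCM            # data-channel cipher
verb 3                        # logging verbosity
```

All traffic routed into the tun interface is encrypted, carried to the server over a single connection, decrypted there, and forwarded onward on your behalf.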

Your ISP cannot see the data either. That is why a VPN can also be used to reach blocked resources (like torrent trackers or social media in some countries). Furthermore, a VPN is a great solution for giving remote workers access to a corporate network, or for joining offices in different parts of a city, country, or the world into one virtual network.

By default, OpenVPN sends data over UDP, which is noticeably more performant than traditional TCP. If necessary, you can switch to TCP; this variant is sometimes preferred on networks where UDP is unstable or filtered.
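In an OpenVPN configuration, switching transports is a one-line change. A small sketch; the ports shown are just common conventions (1194 is OpenVPN's registered default, and 443 is often used over TCP to pass restrictive firewalls):

```
# Default: fast, connectionless transport
proto udp
remote vpn.example.com 1194

# Alternative for networks where UDP is unstable or filtered:
# proto tcp
# remote vpn.example.com 443
```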

Encryption of the data passed through OpenVPN is handled by OpenSSL, an open-source cryptographic library. That is great news, because OpenSSL is a powerful, battle-tested product used by web servers around the world to secure HTTPS connections. Through OpenSSL, OpenVPN gains support for all the popular cryptographic algorithms: AES, ChaCha20, Poly1305, Triple DES, SM4, MD5, SHA-2, SHA-3, BLAKE2, Whirlpool, RSA, Diffie-Hellman, elliptic curves, and many more.
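You can ask your local OpenVPN build which algorithms its OpenSSL backend actually provides, and pin your preferences in the config. A hedged sketch: the commands are standard OpenVPN CLI options, but the data-ciphers directive requires OpenVPN 2.5 or newer, and the specific ciphers shown are an example rather than a recommendation.

```
# List the data-channel ciphers and message digests available locally
openvpn --show-ciphers
openvpn --show-digests
```

And in the configuration file:

```
# Negotiate modern AEAD ciphers; use SHA-256 for HMAC authentication
data-ciphers AES-256-GCM:CHACHA20-POLY1305
auth SHA256
```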

How to start using OpenVPN?

There are two ways to start using OpenVPN. You can create your own VPN server by deploying the server software on a virtual private server you already rent, or you can pick one of the existing VPN services built on the OpenVPN protocol. Finding one is not a problem: OpenVPN underlies most of the VPN services you might know, simply because there is little reason to use anything else. It is not only secure but also performant, with no vital disadvantages.

If you wish to create your own VPN server, you have to start by downloading, installing, and configuring the server software. It is fairly involved and takes some time, but it pays off with a more robust, secure, and reliable platform: you are the one who controls all the processes and monitors the traffic, nobody can manipulate your data, and nobody else has access to the server's internals. You will also need a hardware and software platform that is always available and works without failures, so consider renting a reliable, fast server with unlimited bandwidth. Finding such a server is easy: you can rent one on Hostman.
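As a rough outline of what that setup involves on a Debian or Ubuntu server: install the packages, build a public-key infrastructure with easy-rsa, and start the service. This is a compressed sketch under common conventions (the instance name server and the config path /etc/openvpn/server.conf are Debian-family defaults), not a complete walkthrough:

```
# Install OpenVPN and the easy-rsa PKI helper
sudo apt update
sudo apt install openvpn easy-rsa

# Create a CA, a server certificate, and Diffie-Hellman parameters
make-cadir ~/openvpn-ca && cd ~/openvpn-ca
./easyrsa init-pki
./easyrsa build-ca
./easyrsa build-server-full server nopass
./easyrsa gen-dh

# Write your server settings to /etc/openvpn/server.conf, then:
sudo systemctl enable --now openvpn@server
```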

There is an easier way to start using OpenVPN (and, to be honest, the preferable one in most cases): paying for a pre-configured VPN service, of which there are thousands. If you choose this route, all you have to do is download the provider's VPN application, install it on your device, and log in with the credentials you used when creating your account.
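If the service (or your own server) hands you a raw .ovpn profile instead of a branded app, connecting from a terminal is a single command. Here, client.ovpn stands for whatever profile file you were given, and ifconfig.me is just one of several public what-is-my-IP services you could use to verify the result:

```
# Start the tunnel using a downloaded profile
sudo openvpn --config client.ovpn

# In another terminal: confirm your public IP is now the VPN server's
curl https://ifconfig.me
```

Either way, self-hosted or provider-managed, the software doing the work underneath is the same OpenVPN described above.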

