
What is a Service Level Agreement (or SLA)

Hostman Team
Technical writer

An SLA is an agreement that defines what kind (and what level) of service a company commits to provide. The term is most often used in industries like television and Information Technology.

Unlike a regular service contract, a Service Level Agreement goes into exceptional detail: it describes service quality, technical support response times, and other measurable indicators.

General SLA principles

The service level agreement usually follows these principles:

  • The interaction between the provider and the client must be as transparent as possible. Every process has to have a clear and reasonable purpose, with no vague terms or confusing wording. Both sides should avoid specialized expressions that might be misunderstood.

  • The rules and rights of both sides have to be completely understandable. For instance, if a company promises that all provided services will be accessible 99.99% of the time and a user finds out that this is not true, the user should be able to receive compensation.

  • Expectations management. For example, clients may expect tech support to be available at any time and to answer even the most insignificant questions, while the provider cannot offer such service. In that case the client must either change providers or lower their expectations, or the company has to make its support team more capable.

An SLA usually specifies things such as how long the provider has to resolve a client's problems, what kind of compensation the user is entitled to, and in which cases the user has the right to claim it.

An SLA doesn't have to be a giant pile of pages. The most important thing for any company is to make the agreement as transparent and natural as possible. Look at large, successful corporations such as Amazon: the SLA for their S3 service is fully described on just one page.

There (link to Amazon) you can read about the monthly uptime commitments for the services and the level of compensation you'll receive if they are not met.

What a typical SLA consists of

We peeked at Amazon's SLA a couple of lines ago. It is not a standard, just one way of designing an SLA that takes into account the specific characteristics of the service provided by the company (the SLA's authors).

If we're talking about the IT industry, a typical SLA would contain:

  • The rules for using the product or for providing the service.

  • The responsibilities of both sides, and the mechanisms that let users and providers hold each other accountable.

  • The concrete procedures the provider will undertake to fix any flaws the user stumbles upon.

You can also find out exactly how long the agreement remains valid. Sometimes the client and provider also describe how new demands on the service's functionality can be added if necessary.

It is also common to list indicators that reflect the actual quality of the service:

  • The reliability and availability of the service.

  • The time it takes to react to system faults and malfunctions.

  • The time it takes to resolve system faults and malfunctions.

You might also want to specify how the client will be billed. For example, some companies charge after a certain level of service has been delivered, while others insist on a fixed plan. Don't forget to tell users about penalties, if any exist. And if the client is entitled to compensation, it is the provider's job to explain why, how, and where the customer can claim it.
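
As a sketch of how such a compensation schedule might be encoded, here is a minimal example in Python. The tiers and credit percentages below are invented for illustration (loosely modeled on typical cloud SLAs) and are not any real provider's terms:

```python
# Hypothetical service-credit schedule; thresholds and percentages
# are illustrative placeholders, not a real provider's terms.
CREDIT_TIERS = [
    (99.9, 0),    # at or above 99.9% uptime: commitment met, no credit
    (99.0, 10),   # below 99.9% but at least 99.0%: 10% credit
    (95.0, 25),   # below 99.0% but at least 95.0%: 25% credit
    (0.0, 100),   # below 95.0%: full credit
]

def service_credit(measured_uptime_percent: float) -> int:
    """Return the credit (as % of the monthly bill) for a measured uptime."""
    for threshold, credit in CREDIT_TIERS:
        if measured_uptime_percent >= threshold:
            return credit
    return 100

print(service_credit(99.95))  # 0  -> commitment met, no compensation owed
print(service_credit(99.5))   # 10 -> partial credit
```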

Key parameters of SLA

The parameters of an SLA are a set of metrics that can actually be measured. You would never write something like "We will fix any fault before you know about it" in an SLA: that is a vague statement that only makes it harder for the provider and the customer to reach an agreement.

Take a metric such as operation mode. It shouldn't be abstract; it must include the concrete dates and time periods during which customers can count on the technical support team.

Some companies divide their customers into separate groups: one is allowed to contact tech support at any time, the second may only ask for help on workdays, and the third cannot call for help at all.

Such metrics are extremely important because there is no other way to clearly understand what both sides can expect from the collaboration. That's why you have to consider a few things:

  • Metrics must be published and accessible to anyone.

  • There shouldn't be any statements that can be misunderstood.

  • Any changes in metrics should not happen without warning. Customers have the right to know about any change beforehand.

When establishing metrics, do not overdo it: overly ambitious targets drive up the price of the services the company provides.

Consider an example. Suppose a problem takes a mediocre specialist about 4 hours to solve, while an expert can solve it in 2. It is bad practice to write "2 hours" into your SLA, because then every such ticket must be handled by the more expensive expert, and the cost of the work rises very quickly. Write "1 hour" and you will not only pay much more but will also regularly pay compensation to trusting users whose expectations you failed to meet.

Operation mode and working hours are not the only metrics you should care about. What else matters? The time it takes for tech support to respond, for example. The targets themselves can differ depending on external variables such as customer status or the severity of the problem.

Let's say a company outsources some kind of IT service. It has one group of users paying for a premium plan and another group that does not. The support team's response time for these groups may differ because one of them is more privileged: one group might get help in 15 minutes and the other in a day. If such differences exist, it is extremely important to reflect them in the SLA.

Besides response time, it is important to define the time it takes to resolve the problem the user has run into. The logic of regulating this metric is exactly the same. Even for a customer who is really important to the company, queries may be handled at different speeds depending on the severity of the problem.

Suppose a client has an extremely severe problem: the local network is down and all internal processes are consequently stuck. Such problems must be prioritized, and the SLA may spell out the details for this class of incident and what type of help the client can expect.

The same customer may ask for help another day with a less critical malfunction; for example, the whole network works well but a few new devices need to be connected to it. It is fine to spend hours or even days on such requests.
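
A minimal sketch of how such a priority matrix could be encoded, using the examples above (a premium versus a basic plan, a network-down incident versus connecting new devices). The plan names, severity labels, and hour figures are hypothetical placeholders:

```python
# Illustrative response/resolution targets per customer plan and issue
# severity; every name and number here is a made-up placeholder.
TARGETS = {
    # (plan, severity): (response_time_hours, resolution_time_hours)
    ("premium", "critical"): (0.25, 4),   # network down: respond in 15 min
    ("premium", "minor"):    (8,    72),  # e.g. connecting new devices
    ("basic",   "critical"): (24,   48),
    ("basic",   "minor"):    (24,   120),
}

def targets_for(plan: str, severity: str) -> tuple[float, float]:
    """Look up the committed response and resolution times."""
    return TARGETS[(plan, severity)]

response_h, resolution_h = targets_for("premium", "critical")
print(f"respond within {response_h} h, resolve within {resolution_h} h")
```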

These and many other considerations should be reflected in the SLA and accepted by both customer and service provider. This approach helps reduce the number of potential conflicts: everything becomes clear and understandable to everyone.

Availability of the service

For the provider, one of the most important parameters in an SLA is availability. This metric is measured in days, hours, or minutes over a given period. For instance, a provider can guarantee that its cloud storage will be accessible 99.99% of the time during the year.

In absolute numbers, 99 and 100 seem to be almost the same thing. But the difference becomes huge once you remember that the percentage refers to a period of 365 days. Promising 99% actually means the customer agrees that the server may be unavailable for about 4 days per year (1% of 365 days is roughly 3.7 days). 100% would mean no downtime at all, but such reliability is impossible to guarantee: in practice it is always 99.x%, with some digits after the dot.

At Hostman, we guarantee 99.99% uptime, which means our servers may be down for at most about 52 minutes per year.

You might find providers that promise uptime of up to 99.9999% and swear that servers will be down for about half a minute per year at most. But promising such things is a bad idea for two important reasons:

  1. The higher the promised uptime, the higher the price of the service.

  2. Not that many clients even need such uptime. In most cases 99.98% is more than enough.

The number of nines matters less than the measurement period fixed in the SLA. A year is the default period used in SLAs, which means 99.95% uptime allows about 4.5 hours of downtime per year.

But some providers use different periods. If there is no concrete information, the user should ask what period of time is used to evaluate uptime. Some companies try to mislead customers by boasting 99.95% uptime while measuring it per month rather than per year.
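
The arithmetic behind all of these figures is simple enough to script. Here is a minimal sketch that converts an uptime percentage and a measurement period into the downtime the SLA actually tolerates (assuming a 30-day month for the monthly window):

```python
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes
MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes (30-day month assumed)

def allowed_downtime_minutes(uptime_percent: float, period_minutes: int) -> float:
    """Maximum downtime the SLA tolerates within one measurement period."""
    return (1 - uptime_percent / 100) * period_minutes

print(allowed_downtime_minutes(99.0,  MINUTES_PER_YEAR))   # 5256.0 min ~ 3.7 days
print(allowed_downtime_minutes(99.99, MINUTES_PER_YEAR))   # ~52.6 min per year
print(allowed_downtime_minutes(99.95, MINUTES_PER_YEAR))   # ~262.8 min ~ 4.4 h
print(allowed_downtime_minutes(99.95, MINUTES_PER_MONTH))  # ~21.6 min per month
```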

Another important concept is cumulative availability. When a service depends on several components, the overall availability can be no higher than the lowest figure reflected in the SLA.
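
A short sketch of the idea: for a chain of serially dependent components, the combined availability is the product of the parts, so it can never exceed the weakest figure quoted in the SLA. The component names and numbers below are illustrative assumptions:

```python
from math import prod

# Availabilities of serially dependent components (illustrative figures).
components = [99.99, 99.95, 99.9]  # e.g. network, compute, storage

weakest = min(components)                           # the bound the SLA quotes
combined = prod(a / 100 for a in components) * 100  # serial chain: product

print(f"weakest link: {weakest}%")       # 99.9%
print(f"combined:     {combined:.3f}%")  # ~99.840% -- never above the weakest
```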

Pros of SLA

Signing and observing an SLA pays off for both sides. With an SLA, a company can protect itself from unexpected customer demands (like fixing a non-critical problem at 3 AM) and strictly define its own responsibilities.

There are other advantages. An SLA lets a provider put in order not only external processes but internal ones too: with a correctly composed SLA, a company can, for example, introduce several tiers of technical support and control them more efficiently.

At the same time, customers who sign the agreement clearly understand what kind of service will be provided and how they can communicate with the company.

The difference between SLA and SLO

An SLA can also be read as an indicator of the user-satisfaction level, where the highest level is 100% and the lowest is 0%.

Of course, reaching 100% is impossible, just as it is impossible to provide 100% uptime and put that in the company's SLA. That's why it is important to choose metrics wisely and be realistic about the numbers used in the SLA.

If you don't have a team that is ready to work at night, don't promise your customers 24/7 technical support. Remember that you can always change the SLA later, when the team grows and it becomes viable for the company to provide a more advanced level of support. Customers will be very happy about that.

There is another system, used inside companies, for monitoring the service level: the SLO, where the O stands for "objectives". This metric is oriented toward the company's future goals and reflects the level of service the company wants to achieve.

Once again, an example based on tech support. Let's say a company can currently process about 50 requests a day, working 5 days a week from 9 AM to 6 PM. This data should be fixed in the SLA where customers can see it.

At the same time, the company creates a second document: the service level objectives. It is the foundation of future service improvements. The SLO contains the current metrics and a list of tasks that must be done for the company to reach a new level of quality, for example, raising the number of processed user requests from 50 to 75 per day. The future of the SLA strongly depends on the current SLO.
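
A toy sketch of the relationship between the two documents, using the numbers from the example above; the field names are made up for illustration:

```python
# Current public commitments (SLA) vs. internal objectives (SLO);
# all field names and figures are illustrative placeholders.
sla = {"requests_per_day": 50, "support_days": "Mon-Fri", "hours": "9:00-18:00"}
slo = {"requests_per_day": 75}  # objective: +50% daily request capacity

def slo_attainment(processed_today: int) -> float:
    """Share of the SLO target actually achieved on a given day."""
    return processed_today / slo["requests_per_day"]

print(f"{slo_attainment(60):.0%}")  # 80% of the way to the new objective
```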

How to create an SLA

When you start compiling an SLA, begin with the descriptive part. It usually contains a glossary, an abstract description of the system, the roles of users and the tech support team, and so on. The same part can define the boundaries: the territory where the service is provided, the hours, and the functionality covered.

The next section is the service description: what functions, features, and goods a user gets by working with the company. In this part of the SLA, the company must describe in detail what the user can count on after signing the contract, and on what terms.

With the first part finished, you can narrow the focus and make the details more specific. This is the main part, where the actual level of service is explained minutely. Here you would write about:

  • The metrics that reflect the quality of the provided service (they must be easy to measure).

  • The definition of every metric, as concrete numbers rather than abstract statements, so that both sides can refer to this part of the SLA (a sketch of such definitions follows this list).
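
As a minimal sketch of what "concrete numbers, not abstract statements" can look like, here is one way to capture metric definitions as data; every name and figure below is a hypothetical placeholder:

```python
# Hypothetical, machine-checkable metric definitions for the main part
# of an SLA; all names and numbers are illustrative placeholders.
SLA_METRICS = {
    "availability":             {"target": 99.95, "unit": "%", "period": "year"},
    "response_time_critical":   {"target": 15, "unit": "minutes"},
    "resolution_time_critical": {"target": 4, "unit": "hours"},
}

for name, spec in SLA_METRICS.items():
    # Each entry is a concrete number, not "we fix faults instantly".
    print(f"{name}: {spec['target']} {spec['unit']}")
```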

It is common to put additional useful links (where further conditions are explained in detail) in the last part of the SLA.

At every stage of preparing an SLA, the company must remember that it is a regulatory document that helps control everything connected with the service. The more control a company has over all its processes, the better. If an SLA doesn't give the company some level of control, there is no reason for the document to exist.

Checklist: what to consider while compiling an SLA

If you are not signing someone else's SLA but composing your own to offer to potential clients, keep these things in mind:

  1. Customers. In large systems, it is recommended to divide users into separate groups and communicate with each of them individually. This approach helps distribute resources more effectively and keep service levels up even at times of high load.

  2. Services. At this stage, it is important to consider which groups of customers need which types of service. For example, your company might offer access to a CRM system to every e-commerce business. If they can't access it, their business stalls and the clients start losing money, and the blame lands on the service provider who failed them. That's why such services get the highest importance rating and must be prioritized over simple tasks like replacing a printer or creating a new account.

  3. Parameters of service quality. These parameters should be tied to the business targets your company pursues and to the needs of the users: for example, the times and conditions under which a service is provided. One company may want to work 24/7, while another only offers access to its tech support team 5 days a week, from 9 AM to 9 PM.

    Any changes to the SLA should be explained to every user, regardless of status or privilege level, before they come into force.

    An SLA is an ever-evolving document. In real use you will find that some parameters or aims no longer correlate with the general direction the business is taking, which is why the management team often decides to revise and optimize the SLA.

    Remember: an SLA is not a marketing tool but a way for the company to talk to its users in the clearest, most efficient way. Everyone accepts the rules set out in the SLA.

