NoSQL Databases Explained: Types, Use Cases & Core Characteristics

Hostman Team
Technical writer

NoSQL (short for "Not Only SQL") refers to a class of data management systems that depart from the traditional relational approach to information storage. Unlike conventional DBMSs such as MySQL or PostgreSQL, which store data in tables with fixed structures and strict relationships, NoSQL offers more flexible ways to organize and store information. The technology doesn't reject SQL; rather, it expands the range of approaches to handling data.

The origin of the term NoSQL has an interesting backstory that began not with technology but with the name of a tech conference. In 2009, organizers of a database event in San Francisco adopted the term, and it unexpectedly caught on in the industry. Interestingly, a decade earlier, in 1998, developer Carlo Strozzi had already used the term "NoSQL" for his own project, which had no connection to modern non-relational systems.

Modern NoSQL databases fall into several key categories of data storage systems. These include:

  • Document-oriented databases (led by MongoDB)
  • Key-value stores (e.g., Redis)
  • Graph databases (Neo4j is a prominent example)
  • Columnar stores (such as ClickHouse)

What unites these systems is that they set aside the classic SQL language in favor of their own data access methods.

Unlike relational DBMSs, where SQL serves as a standardized language for querying and joining data through operations like JOIN and UNION, NoSQL databases have developed their own query languages. Each NoSQL database offers a unique syntax for manipulating data. Here are some examples:

// MongoDB (uses a JavaScript-like syntax):
db.users.find({ age: { $gt: 21 } })

// Redis (uses command-based syntax):
HGET user:1000 email
SET session:token "abc123"

NoSQL databases are particularly efficient in handling large volumes of unstructured data. A prime example is the architecture of modern social media platforms, where MongoDB enables storage of a user's profile, posts, responses, and activity in a single document, thereby optimizing data retrieval performance.

NoSQL vs SQL: Relational and Non-Relational Databases

The evolution of NoSQL databases has paralleled the growing complexity of technological and business needs. The modern digital world, which generates terabytes of data every second, necessitated new data processing approaches. As a result, two fundamentally different data management philosophies have emerged:

  1. Relational approach, focused on data integrity and reliability
  2. NoSQL approach, prioritizing adaptability and scalability

Each concept is grounded in its own core principles, which define its practical applications.

Relational systems adhere to ACID principles:

  • Atomicity ensures that transactions are all-or-nothing.
  • Consistency guarantees that data remains valid throughout.
  • Isolation keeps concurrent transactions from interfering.
  • Durability ensures that once a transaction is committed, it remains so.
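
To make these guarantees concrete, here is a minimal sketch of an atomic transfer in standard SQL (the accounts table and amounts are illustrative):

-- Both updates apply together or not at all (atomicity);
-- constraints keep the data valid throughout (consistency)
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- once committed, the transfer survives a crash (durability)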

NoSQL systems follow the BASE principles:

  • Basically Available – the system prioritizes availability and keeps responding even during partial failures.
  • Soft state – the system's state may change over time, even without new input, as replicas converge.
  • Eventually consistent – all replicas become consistent eventually, not instantly.

Key Differences:

| Aspect | Relational Databases | NoSQL Databases |
|---|---|---|
| Data Organization | Structured in predefined tables and schemas | Flexible format; supports semi-structured and unstructured data |
| Scalability | Vertical (via stronger servers) | Horizontal (adding more nodes to the cluster) |
| Data Integrity | Maintained at the DBMS core level | Managed at the application level |
| Performance | Efficient for complex transactions | High performance in basic I/O operations |
| Data Storage | Distributed across multiple interrelated tables | Groups related data into unified blocks/documents |

These fundamental differences define their optimal use cases:

  • Relational systems are irreplaceable where data precision is critical (e.g., financial systems).
  • NoSQL solutions excel in processing high-volume data flows (e.g., social media, analytics platforms).

Key Features and Advantages of NoSQL

Most NoSQL systems are open source, allowing developers to explore and modify the core system without relying on expensive proprietary software.

Schema Flexibility

One of the main advantages of NoSQL is its schema-free approach. Unlike relational databases, where altering the schema often requires modifying existing records, NoSQL allows the dynamic addition of attributes without reorganizing the entire database.

// MongoDB: Flexible schema supports different structures in the same collection
db.users.insertMany([
  { name: "Emily", email: "emily@email.com" },
  { name: "Maria", email: "maria@email.com", phone: "+35798765432" },
  { name: "Peter", social: { twitter: "@peter", facebook: "peter.fb" } }
])

Horizontal Scalability

NoSQL databases employ a fundamentally different strategy for boosting performance. While traditional relational databases rely on upgrading a single server, NoSQL architectures use distributed clusters. Performance is improved by adding nodes, with workload automatically balanced across the system.

Sharding and Replication

NoSQL databases support sharding—a method of distributing data across multiple servers. Conceptually similar to RAID 0 (striping), sharding enables:

  • Enhanced system performance
  • Improved fault tolerance
  • Efficient load distribution
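
As a concrete illustration, MongoDB distributes a collection across shards with a couple of administrative commands (the database name, collection, and shard key below are hypothetical, and a configured sharded cluster is assumed):

// Enable sharding for the database, then shard the collection by a hashed key
sh.enableSharding("shop")
sh.shardCollection("shop.orders", { customerId: "hashed" })

// Inspect how data chunks are distributed across the shards
sh.status()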

High Performance

NoSQL systems offer exceptional performance due to optimized storage mechanisms and avoidance of resource-heavy operations like joins. They perform best in scenarios such as:

  • Basic read/write operations
  • Large-scale data management
  • Concurrent user request handling
  • Unstructured data processing

Handling Unstructured Data

NoSQL excels in working with:

  • Large volumes of unstructured data
  • Heterogeneous data types
  • Rapidly evolving data structures

Support for Modern Technologies

NoSQL databases integrate well with:

  • Cloud platforms
  • Microservice architectures
  • Big Data processing systems
  • Modern development frameworks

Cost Efficiency

NoSQL solutions can be cost-effective due to:

  • Open-source licensing
  • Efficient use of commodity hardware
  • Scalability using standard servers
  • Reduced administrative overhead

Main Types of NoSQL Databases

In modern distributed system development, several core types of NoSQL solutions are distinguished, each with a mature ecosystem and strong community support.

Document-Oriented Databases

Document-based systems are the most mature and widely adopted type of NoSQL databases. MongoDB, the leading technology in this segment, is the benchmark example of document-oriented data storage architecture.

Data Storage Principle

In document-oriented databases, information is stored as documents grouped into collections. Unlike relational databases, where data is distributed across multiple tables, here, all related information about an object is contained within a single document.

Example of a user document with orders:

{
  "_id": ObjectId("507f1f77bcf86cd799439011"),
  "user": {
    "username": "stephanie",
    "email": "steph@example.com",
    "registered": "2024-02-01"
  },
  "orders": [
    {
      "orderId": "ORD-001",
      "date": "2024-02-02",
      "items": [
        {
          "name": "Phone",
          "price": 799.99,
          "quantity": 1
        }
      ],
      "status": "delivered"
    }
  ],
  "preferences": {
    "notifications": true,
    "language": "en"
  }
}

Basic Operations with MongoDB

// Insert a document
db.users.insertOne({
  username: "stephanie",
  email: "steph@example.com"
})

// Find documents
db.users.find({ "preferences.language": "en" })

// Update data
db.users.updateOne(
  { username: "stephanie" },
  { $set: { "preferences.notifications": false }}
)

// Delete a document
db.users.deleteOne({ username: "stephanie" })

Advantages of the Document-Oriented Approach

Flexible Data Schema

  • Each document can have its own structure
  • Easy to add new fields
  • No need to modify the overall database schema

Natural Data Representation

  • Documents resemble programming objects
  • Intuitive structure
  • Developer-friendly

Performance

  • Fast retrieval of complete object data
  • Efficient handling of nested structures
  • Horizontal scalability

Working with Hierarchical Data

  • Naturally stores tree-like structures
  • Convenient nested object representation
  • Effective processing of complex structures

Use Cases

The architecture is particularly effective in:

  • Developing systems with dynamically evolving data structures
  • Processing large volumes of unstandardized data
  • Building high-load distributed platforms

Typical Use Scenarios

  • Digital content management platforms
  • Distributed social media platforms
  • Enterprise content organization systems
  • Event aggregation and analytics services
  • Complex analytical platforms

Key-Value Stores

Among key-value stores, Redis (short for Remote Dictionary Server) holds a leading position in the NoSQL market. A core architectural feature of this technology is that the entire data set is stored in memory, ensuring exceptional performance.

Working Principle

The architecture of key-value stores is based on three fundamental components for each data record:

  • Unique key (record identifier)
  • Associated data (value)
  • Optional TTL (Time To Live) parameter
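
For example, in Redis a value can be written together with a TTL so that it expires automatically (the key names are illustrative):

# Store a session token that disappears after one hour
SET session:abc123 "user:1000" EX 3600

# Check the remaining lifetime in seconds
TTL session:abc123

# Cancel the expiration and keep the key indefinitely
PERSIST session:abc123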

Data Types in Redis

# Strings
SET user:name "Stephanie"
GET user:name

# Lists
LPUSH notifications "New message"
RPUSH notifications "Payment received"

# Sets
SADD user:roles "admin" "editor"
SMEMBERS user:roles

# Hashes
HSET user:1000 name "Steph" email "steph@example.com"
HGET user:1000 email

# Sorted Sets
ZADD leaderboard 100 "player1" 85 "player2"
ZRANGE leaderboard 0 -1

Key Advantages

High Performance

  • In-memory operations
  • Simple data structure
  • Minimal overhead

Storage Flexibility

  • Support for multiple data types
  • Ability to set data expiration
  • Atomic operations

Reliability

  • Data persistence options
  • Master-slave replication
  • Clustering support
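
As a rough sketch, these reliability features map onto a few redis.conf directives (the snapshot thresholds and the primary's address are illustrative):

# redis.conf (excerpt)
appendonly yes             # AOF persistence: log every write operation
save 900 1                 # RDB snapshot if at least 1 change in 900 seconds
replicaof 10.0.0.5 6379    # run this node as a replica of a primary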

Typical Use Scenarios

Caching

# Cache query results
SET "query:users:active" "{json_result}"
EXPIRE "query:users:active" 3600  # Expires in one hour

Counters and Rankings

# Increase view counter
INCR "views:article:1234"

# Update ranking
ZADD "top_articles" 156 "article:1234"

Message Queues

# Add task to queue
LPUSH "task_queue" "process_order:1234"

# Get task from queue
RPOP "task_queue"

Redis achieves peak efficiency when deployed in systems with intensive operational throughput, where rapid data access and instant processing are critical. A common architectural solution is to integrate Redis as a high-performance caching layer alongside the primary data store, significantly boosting the overall application performance.

Graph Databases

Graph databases stand out among NoSQL solutions for their specialization in managing relationships between data entities. In this segment, Neo4j has established a leading position thanks to its efficiency with complex network structures where the relationships between objects are of fundamental importance.

Core Components

Nodes

  • Represent entities
  • Contain properties
  • Have labels

Relationships

  • Connect nodes
  • Are directional
  • Can contain properties
  • Define the type of connection

Example of a Graph Model in Neo4j

// Create nodes
CREATE (anna:Person { name: 'Anna', age: 30 })
CREATE (mary:Person { name: 'Mary', age: 28 })
CREATE (post:Post { title: 'Graph Databases', date: '2024-02-04' })

// Create relationships (run in the same query as above so the node variables stay in scope)
CREATE (anna)-[:FRIENDS_WITH]->(mary)
CREATE (anna)-[:AUTHORED]->(post)
CREATE (mary)-[:LIKED]->(post)

Typical Queries

// Find friends of friends
MATCH (person:Person {name: 'Anna'})-[:FRIENDS_WITH]->(friend)-[:FRIENDS_WITH]->(friendOfFriend)
WHERE friendOfFriend <> person
RETURN friendOfFriend.name

// Find most popular posts
MATCH (post:Post)<-[:LIKED]-(person:Person)
RETURN post.title, count(person) as likes
ORDER BY likes DESC
LIMIT 5

Key Advantages

Natural Representation of Relationships

  • Intuitive data model
  • Efficient relationship storage
  • Easy to understand and work with

Graph Traversal Performance

  • Fast retrieval of connected data
  • Efficient handling of complex queries
  • Optimized for recursive queries

Practical Applications

Social Networks

// Friend recommendations
MATCH (user:Person)-[:FRIENDS_WITH]->(friend)-[:FRIENDS_WITH]->(potentialFriend)
WHERE user.name = 'Anna' AND NOT (user)-[:FRIENDS_WITH]->(potentialFriend)
RETURN potentialFriend.name

Recommendation Systems

// Recommendations based on interests
MATCH (user:Person)-[:LIKES]->(product:Product)<-[:LIKES]-(otherUser)-[:LIKES]->(recommendation:Product)
WHERE user.name = 'Anna' AND NOT (user)-[:LIKES]->(recommendation)
RETURN recommendation.name, count(otherUser) as frequency

Routing

// Find shortest path
MATCH path = shortestPath(
  (start:Location {name: 'A'})-[:CONNECTS_TO*]->(end:Location {name: 'B'})
)
RETURN path

Usage Highlights

  • Essential when working with complex, interrelated data structures
  • Maximum performance in processing cyclic and nested queries
  • Enables flexible design and management of multi-level relationships

Neo4j and similar platforms for graph database management show exceptional efficiency in systems where relationship processing and deep link analysis are critical. These tools offer advanced capabilities for managing complex network architectures and detecting patterns in structured sets of connected data.

Columnar Databases

The architecture of these systems is based on column-oriented storage of data, as opposed to the traditional row-based approach. This enables significant performance gains for specialized queries. Leading solutions in this area include ClickHouse and HBase, both recognized as reliable enterprise-grade technologies.

How It Works

Traditional (row-based) storage:

Row1: [id1, name1, email1, age1]  
Row2: [id2, name2, email2, age2]

Column-based storage:

Column1: [id1, id2]  
Column2: [name1, name2]  
Column3: [email1, email2]  
Column4: [age1, age2]

Key Characteristics

Storage Structure

  • Data is grouped by columns
  • Efficient compression of homogeneous data
  • Fast reading of specific fields

Scalability

  • Horizontal scalability
  • Distributed storage
  • High availability

Example Usage with ClickHouse

-- Create table
CREATE TABLE users (
    user_id UUID,
    name String,
    email String,
    registration_date DateTime
) ENGINE = MergeTree()
ORDER BY (registration_date, user_id);

-- Insert data
INSERT INTO users (user_id, name, email, registration_date)
VALUES (generateUUIDv4(), 'Anna Smith', 'anna@example.com', now());

-- Analytical query
SELECT 
    toDate(registration_date) as date,
    count(*) as users_count
FROM users 
GROUP BY date
ORDER BY date;

Key Advantages

Analytical Efficiency

  • Fast reading of selected columns
  • Optimized aggregation queries
  • Effective with large datasets

Data Compression

  • Superior compression of uniform data
  • Reduced disk space usage
  • I/O optimization

Typical Use Cases

Big Data

-- Log analysis with efficient aggregation
SELECT 
    event_type,
    count() as events_count,
    uniqExact(user_id) as unique_users
FROM system_logs 
WHERE toDate(timestamp) >= '2024-01-01'
GROUP BY event_type
ORDER BY events_count DESC;

Time Series

-- Aggregating metrics by time intervals
SELECT 
    toStartOfInterval(timestamp, INTERVAL 5 MINUTE) as time_bucket,
    avg(cpu_usage) as avg_cpu,
    max(cpu_usage) as max_cpu,
    quantile(0.95)(cpu_usage) as cpu_95th
FROM server_metrics
WHERE server_id = 'srv-001'
    AND timestamp >= now() - INTERVAL 1 DAY
GROUP BY time_bucket
ORDER BY time_bucket;

Analytics Systems

-- Advanced user statistics
SELECT 
    country,
    count() as users_count,
    round(avg(age), 1) as avg_age,
    uniqExact(city) as unique_cities,
    sumIf(purchase_amount, purchase_amount > 0) as total_revenue,
    round(avg(purchase_amount), 2) as avg_purchase
FROM user_statistics
GROUP BY country
HAVING users_count >= 100
ORDER BY total_revenue DESC
LIMIT 10;

Usage Highlights

  • Maximum performance in systems with read-heavy workloads
  • Proven scalability for large-scale data processing
  • Excellent integration in distributed computing environments

Columnar database management systems show exceptional efficiency in projects requiring deep analytical processing of large datasets. This is particularly evident in areas such as enterprise analytics, real-time performance monitoring systems, and platforms for processing timestamped streaming data.

Full-Text Databases (OpenSearch)

The OpenSearch platform, a fork of Elasticsearch, is a comprehensive ecosystem for high-performance full-text search and multidimensional data analysis. Built on distributed-systems principles, it stands out for its data processing, intelligent search, and interactive visualization capabilities over large-scale datasets.

Key Features

Full-Text Search

// Full-text search across multiple fields
GET /products/_search
{
  "query": {
    "multi_match": {
      "query": "wireless headphones",
      "fields": ["title", "description"],
      "type": "most_fields"
    }
  }
}

Data Analytics

// Aggregation by categories
GET /products/_search
{
  "size": 0,
  "aggs": {
    "popular_categories": {
      "terms": {
        "field": "category",
        "size": 10
      }
    }
  }
}

Key Advantages

Efficient Search

  • Fuzzy search support
  • Result ranking
  • Match highlighting
  • Autocomplete functionality
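
For instance, a fuzzy match query tolerates misspellings in the search phrase (the index and field names follow the earlier examples; exact ranking depends on the analyzer configuration):

// Fuzzy search: finds "wireless headphones" despite the typos
GET /products/_search
{
  "query": {
    "match": {
      "title": {
        "query": "wireles hedphones",
        "fuzziness": "AUTO"
      }
    }
  }
}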

Analytical Capabilities

  • Complex aggregations
  • Statistical analysis
  • Data visualization
  • Real-time monitoring

Common Use Cases

E-commerce Search

  • Product search
  • Faceted navigation
  • Product recommendations
  • User behavior analysis

Monitoring and Logging

  • Metrics collection
  • Performance analysis
  • Anomaly detection
  • Error tracking

Analytical Dashboards

  • Data visualization
  • Business metrics
  • Reporting
  • Real-time analytics

OpenSearch is particularly effective in projects that require advanced search and data analytics. At Hostman, OpenSearch is available as a managed service, simplifying integration and maintenance.

When to Choose NoSQL?

The architecture of each database management system was developed with specific use cases in mind, so choosing the right tech stack should be based on a detailed analysis of your application's requirements. In modern software development, a hybrid approach is becoming increasingly common, where multiple types of data storage are integrated into a single project to achieve maximum efficiency and broader functionality.

NoSQL systems do not provide a one-size-fits-all solution. When designing your data storage architecture, consider the specific nature of the project and its long-term development strategy.

Choose NoSQL databases when the following matter:

Large-scale Data Streams

  • Efficient handling of petabyte-scale storage
  • High-throughput read and write operations
  • Need for horizontal scalability

Dynamic Data Structures

  • Evolving data requirements
  • Flexibility under uncertainty

Performance Prioritization

  • High-load systems
  • Real-time applications
  • Services requiring high availability

Unconventional Data Formats

  • Networked relationship structures
  • Time-stamped sequences
  • Spatial positioning

Stick with Relational Databases when you need:

Guaranteed Integrity

  • Banking transactions
  • Electronic health records
  • Mission-critical systems

Complex Relationships

  • Multi-level data joins
  • Complex transactional operations
  • Strict ACID compliance

Immutable Structure

  • Fixed requirement specifications
  • Standardized business processes
  • Formalized reporting systems

Practical Recommendations

Hybrid Approach

// Using Redis for caching
// alongside PostgreSQL for primary data
// (assumes configured node-redis and node-postgres clients)
const cached = await redis.get(`user:${id}`);
if (!cached) {
    const { rows } = await pg.query('SELECT * FROM users WHERE id = $1', [id]);
    const user = rows[0];
    // cache for one hour so stale entries expire on their own
    await redis.set(`user:${id}`, JSON.stringify(user), { EX: 3600 });
    return user;
}
return JSON.parse(cached);

Gradual Transition

  • Start with a pilot project
  • Test performance
  • Evaluate support costs

Decision-Making Factors

Technical Aspects

  • Data volume
  • Query types
  • Scalability requirements
  • Consistency model

Business Requirements

  • Project budget
  • Development timeline
  • Reliability expectations
  • Growth plans

Development Team

  • Technology expertise
  • Availability of specialists
  • Maintenance complexity