
The OSI Model: A Complete Beginner’s Guide

Hostman Team
Technical writer

When studying how computer networks work, sooner or later you will encounter the so-called OSI open network model. The OSI model is crucial for understanding network technologies, and it often presents unexpected challenges for beginners.

In this article, we’ll go over the basic principles of the OSI model and try to provide an “OSI model for dummies” style guide.

The Concept of a Protocol

Communication protocols (or simply protocols) are necessary so that participants in information exchange can understand each other. A wide variety of protocols are involved in the operation of computer networks, relating to different network layers. For example, a computer's network controller follows a protocol that describes how to convert digital data into an analog signal transmitted over wires. A browser connects to a website using the TCP transport protocol, and a server and a browser communicate using the HTTP protocol.

In other words, a protocol is a set of agreements between software and hardware developers. It describes how programs and devices interact with other programs and devices that support the protocol.

OSI

OSI stands for Open Systems Interconnection. It does not refer to Open Source; in this context, "open systems" are systems built on open (publicly available) specifications that conform to established standards.

You will often come across the term "Open Systems Interconnection (OSI) Reference Model." The reference model outlines the layers a network should have and the functions performed at each layer. The OSI model divides all protocols into the following seven layers:

  • Physical
  • Data Link
  • Network
  • Transport
  • Session
  • Presentation
  • Application

The OSI model does not include descriptions of the protocols themselves; these are defined in separate standards. 

Today, the OSI model sees little direct use in practice. In the past, there were literal implementations with exactly seven layers, but over time they were replaced by the less prescriptive TCP/IP protocol suite, which underpins the modern Internet.

Nevertheless, the protocols in use today roughly correspond to the OSI layers, and the model is still used as a common language for describing how networks work.

Physical Layer

All layers are numbered, starting from the one closest to the data transmission medium. In this case, the first layer of the OSI model is the physical layer. This is where bits of information are converted into signals that are then transmitted through the medium. The physical protocol used depends on how the computer is connected to the network.

For example, in a typical local area network (LAN) using twisted-pair cables, the 100BASE-TX specification (IEEE 802.3u standard) is employed. It defines the cables and connectors, wire characteristics, frequencies, voltage, encoding, and much more. Wi-Fi connections are more complex since data is transmitted over shared radio channels. The interaction of Wi-Fi devices is described by the IEEE 802.11 specification, which, like Ethernet, includes parts of both the physical and data link layers.

When accessing the Internet via a cellular network, GSM-family specifications are used, including specialized protocols (such as GPRS) that affect not only the first two layers but also the network layer. There are also relatively simple protocols, such as RS-232, which is used when connecting two computers via a null-modem cable through COM ports.

Data Link Layer

Next is the data link layer of the OSI model. At this layer, entire messages (frames) are transmitted instead of just bits. The data link layer receives a stream of bits from the physical layer, identifies the start and end of the message, and packages the bits into a frame. Error detection and correction also take place here. In multipoint network connections, where multiple computers use the same communication channel, the data link layer additionally provides physical addressing and access control to the shared transmission medium.
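The error-detection idea can be sketched in a few lines. This is only an illustrative model, not the actual Ethernet framing logic (real Ethernet does use a CRC-32 check, but with its own bit ordering and frame layout): the sender appends a checksum to the payload, and the receiver recomputes it to detect corruption.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 checksum so the receiver can detect corruption."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bytes:
    """Recompute the checksum; raise if the frame was damaged in transit."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        raise ValueError("frame corrupted")
    return payload

frame = make_frame(b"hello")
assert check_frame(frame) == b"hello"

# Flip one bit to simulate noise on the wire: the check will now fail.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
```

A real data link layer does much more (framing delimiters, retransmission or discard policy, media access), but the checksum step above is the core of its error detection.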

Some of the tasks that theoretically belong to this layer are addressed in the Ethernet and Wi-Fi specifications, but that is not all. Network interfaces in multipoint connections recognize each other using special six-byte identifiers: MAC addresses. For packets (blocks of data transmitted in packet-switched mode) to reach their destinations correctly, network adapters must know which device answers to which network address (IP address). The ARP (Address Resolution Protocol) is used to automatically build tables that map IP addresses to MAC addresses.
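The table ARP maintains can be pictured as a simple cache of IP-to-MAC entries. This is a toy model with hypothetical addresses, not the protocol itself; real ARP fills the table by broadcasting "who has this IP?" requests on the local segment.

```python
# An illustrative ARP-style cache: IP address -> MAC address.
arp_cache = {}

def learn(ip, mac):
    """Record a mapping (real ARP learns these from replies on the wire)."""
    arp_cache[ip] = mac

def resolve(ip):
    """Return the MAC for an IP, or None if a real stack would first
    have to broadcast an ARP request to discover it."""
    return arp_cache.get(ip)

learn("192.168.1.20", "aa:bb:cc:dd:ee:ff")
print(resolve("192.168.1.20"))   # known entry
print(resolve("192.168.1.99"))   # None: would trigger an ARP request
```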

In point-to-point connections, ARP is not needed. However, the PPP (Point-to-Point Protocol) is often used. In addition to frame structure and integrity checks, PPP includes rules for establishing a connection, checking line status, and authenticating participants.

Network Layer

The next layer is the network layer of the OSI model. It is designed to build large, composite networks based on various networking technologies. At this layer, the differences between data link technologies are reconciled, and global addressing is provided, allowing each computer on the network to be uniquely identified. Routing is also performed here, determining the path for packet forwarding through intermediate nodes.

It’s sometimes said that in the Internet, the IP (Internet Protocol) functions as the network layer. This is true in a sense: IP defines the structure of individual packets transmitted through gateways, the system of network addresses, and some other functions. However, several other protocols can also be attributed to the network layer, even though they operate "on top" of the IP protocol.
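The core routing decision can be illustrated with Python's standard ipaddress module: a router, in essence, checks which known network prefix a destination address falls into and picks the most specific match. The routing table below is a made-up example, and this is a simplified longest-prefix-match sketch, not a real router implementation.

```python
import ipaddress

# A toy routing table: network prefix -> next hop (illustrative values).
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "gateway-a",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-b",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(destination):
    """Pick the most specific (longest) prefix containing the address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # inside 10.1.0.0/16, the longest match
print(next_hop("8.8.8.8"))    # only the default route matches
```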

One of the most important of these is the Internet Control Message Protocol (ICMP). It enables communication between network participants regarding various normal and abnormal conditions, including link failures, the absence of a suitable route, and other delivery issues. Sometimes, ICMP messages contain recommendations for using alternative routes.

Transport Layer

Packets transmitted over a network using network layer protocols are typically limited in size. They may arrive out of order, be lost, or even duplicated. Application programs require a higher level of service that ensures reliable data delivery and ease of use. This is precisely the role of transport layer protocols in the OSI model. They monitor packet delivery by sending and analyzing acknowledgments, numbering packets, and reordering them correctly upon arrival.

As mentioned earlier, network layer protocols do not guarantee packet delivery. A sent packet might be lost, duplicated, or arrive out of sequence. The content of such a packet is usually called a datagram.

One of the simplest transport protocols is the User Datagram Protocol (UDP). Participants in network communication running on the same computer are identified by integers called port numbers (or simply ports). The UDP protocol requires that the data sent over the network be accompanied by the sender’s and receiver’s port numbers, the length of the datagram, and its checksum. All of this is “wrapped” into a packet according to the IP protocol's conventions. However, the responsibility for acknowledgments, retransmissions, splitting information into smaller pieces, and reassembling it in the correct order falls on the software developer. Therefore, UDP does not protect against packet loss, duplication, or disorder — only the integrity of data within a single datagram is ensured.
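Datagram communication is easy to show with the standard socket API. The sketch below sends one UDP datagram between two sockets on the loopback interface; over loopback, loss and reordering will not actually occur, but over a real network the application would have to handle them itself, exactly as described above.

```python
import socket

# Two UDP sockets on localhost stand in for two network participants.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, UDP", ("127.0.0.1", port))

# Each recvfrom() call returns exactly one whole datagram.
data, addr = receiver.recvfrom(4096)
print(data)

sender.close()
receiver.close()
```

Note that there is no connection setup and no acknowledgment: the datagram either arrives intact (its checksum guarantees integrity) or it does not arrive at all.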

There is also a second type of transport interaction — stream-based communication. Here, all issues related to packet loss and data reconstruction from fragments are handled by the transport protocol implementation itself, which makes it significantly more complex than datagram-based protocols. The corresponding transport protocol used on the Internet is TCP (Transmission Control Protocol). Unlike UDP, TCP stream communication requires establishing a connection. It guarantees that all bytes written to the stream will be available for reading on the other end and in the correct order. If this guarantee cannot be upheld, the connection will be terminated, and both parties will be informed.

The TCP protocol includes a number of sophisticated agreements, but fortunately, all of these are handled by the operating system.
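From the programmer's point of view, all of that machinery hides behind the same socket API. The sketch below runs a minimal TCP echo exchange over loopback; the operating system performs the handshake, acknowledgments, retransmission, and ordering behind these calls.

```python
import socket
import threading

# A minimal TCP echo exchange over the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()            # completes the TCP handshake
    with conn:
        conn.sendall(conn.recv(4096))    # echo the received bytes back

t = threading.Thread(target=serve_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"stream of bytes")
reply = client.recv(4096)                # a real app would loop until done
print(reply)

client.close()
t.join()
server.close()
```

For a short message over loopback a single recv() suffices; in general, TCP is a byte stream with no message boundaries, so applications read in a loop until they have everything they expect.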

The Remaining Layers

Identifying which real-world protocols correspond to the remaining three layers is somewhat more difficult. Following the transport layer comes the session layer. According to the creators of the OSI model, its purpose is to establish communication sessions. This includes managing the order of message transmission during dialogues (such as in video conferences), handling concurrent access to critical operations, and providing protection against connection loss (synchronization function).

The problem is that, in practice, all of these functions are either implemented by application-layer protocols or by even higher-level mechanisms that fall outside the scope of the OSI model. As a result, the session layer is not used in real networks.

The next layer is the presentation layer. Its task is to present data in a form that is understandable to both the sender and the receiver. This includes various data formats and interpretation rules, such as character encodings (like ASCII, UTF-8, and KOI8-R), specifications for different versions of HTML/XHTML, image formats (JPEG, GIF, PNG), the MIME specification set, and others. This is also the layer where encryption and decryption are implemented. The most popular examples are TLS (Transport Layer Security) and its predecessor SSL (Secure Sockets Layer).
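The encoding side of this layer is easy to demonstrate: the same text becomes different bytes under different encodings, and both sides must agree on which one is in use. A small Python illustration:

```python
# Presentation-layer agreement in miniature: text vs. its wire encoding.
text = "Привет, OSI"                     # non-ASCII (Cyrillic) text

utf8_bytes = text.encode("utf-8")        # sender serializes to an agreed encoding
restored = utf8_bytes.decode("utf-8")    # receiver interprets the bytes back
assert restored == text

# Decoding with the wrong agreement mangles the data: this is exactly
# why sender and receiver must share the same presentation rules.
wrong = utf8_bytes.decode("latin-1")
print(wrong)                             # mojibake, not the original text
```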

The application layer is the most straightforward. It facilitates the interaction of user-facing applications. This includes email, the World Wide Web, social networks, video and audio communication, and so on.

Pros and Cons 

The OSI model was adopted by the International Organization for Standardization (ISO) in 1983, a time when networking technologies were rapidly developing. While the committee debated standards, the world gradually shifted to the TCP/IP stack, which began to displace other protocols. When the OSI protocol implementations were finally released, they were met with a wave of criticism. Critics pointed out their incompatibility with real technologies, incomplete specifications, and limited capabilities compared to existing protocols.

Additionally, experts considered the division into seven layers to be unjustified. Some layers were rarely used, and the same tasks were often handled at multiple different layers. Specialists joke that the OSI model ended up with seven layers because the committee had seven subcommittees, and each proposed its own addition. Meanwhile, the TCP/IP protocol suite, which underpins the entire modern Internet, was developed by a small group of people in an ad hoc fashion—solving problems as they arose, with no committees involved.

However, not everything is negative. A clear advantage of the OSI model is its strong theoretical foundation for network communication, making it a standard reference for documentation and education. Some believe that all is not lost and that the model may still find a role—for example, in cloud computing.

Infrastructure

Similar

Infrastructure

Top Dock Panels for Linux in 2025: Lightweight, Fast & Flexible

A dock panel, or simply a “dock”, is a toolbar that makes working with frequently used applications easier and extends the capabilities of the standard desktop panel. Unlike the traditional taskbar, dock panels offer significantly more features, not just for working with icons but also with widgets. Additionally, they can be positioned anywhere on the screen, not just across the full width. In this article, we’ll look at the best Linux dock panels that can make working with your favorite programs much more convenient and add useful features. Of course, “best” is a subjective term, so we’ve selected the six most popular docks among Linux users. Docky Docky's popularity is largely due to its lightweight and resource-efficient nature. Its interface resembles macOS, which many users find appealing. Docky is also a stable application that won’t cause lags or crashes. It supports themes, widgets (called "docklets"), a 3D mode, and can stretch like a regular toolbar. Thanks to widget support, you can instantly see the weather, monitor system resource usage (CPU, RAM), or check power status. Key Features: Lightweight Stable Highly customizable Supports docklets How to Install Docky: Debian/Ubuntu:  apt install docky Arch:  pacman -S docky Fedora/CentOS:  dnf install docky Plank Another lightweight dock panel, Plank, is very easy to install and configure, making it a great option for Linux beginners or anyone wanting to conserve system resources. It has a clean interface, flexible placement, and auto-hide options. Fans of customization will appreciate the wide selection of built-in icons and the ability to add their own. Like Docky, Plank supports docklets. A notable one is Clippy, which shows clipboard contents. 
Key Features: Lightweight Easy to configure Customizable Supports docklets How to Install Plank: Debian/Ubuntu:  apt install plank Arch:  pacman -S plank Fedora/CentOS:  dnf install plank Latte Dock While Docky and Plank focus on speed and simplicity, Latte excels in visual customization. It’s perfect for those who dislike minimalism, featuring effects like parabolic zoom. Latte Dock also supports multiple dock panels, detailed visibility settings, widgets, and custom layouts. Originally designed for KDE, it can also run in other desktop environments with the right dependencies installed. It's worth noting that Latte hasn't been actively maintained for some time and hasn't received many updates in the last couple of years. However, many users still run Latte Dock successfully on different Linux distributions—and swear by it. Key Features: Beautiful and customizable Supports multiple dock panels Supports docklets and custom layouts Built for KDE How to Install Latte: Debian/Ubuntu:  apt install latte-dock Arch:  pacman -S latte-dock Fedora/CentOS:  dnf install latte-dock Cairo-Dock A well-known dock featured in most reviews, Cairo-Dock is praised for its high degree of customization and optimization. There’s even a low-resource version. Built-in widgets (weather, email notifications, torrent loading) are not dock-bound, and you can place them anywhere on the desktop. Cairo-Dock also includes system-wide search, eliminating the need to open the start menu. Key Features: Maximum customization Well optimized Freely placeable docklets Built-in system search How to Install Cairo-Dock: Debian/Ubuntu:  apt install cairo-dock Arch:  pacman -S cairo-dock Fedora/CentOS:  dnf install cairo-dock Tint2 Less popular but still worth considering, Tint2 offers minimalistic design and excellent optimization—ideal for low-spec computers. 
It supports nearly all Linux window managers, plug-in docklets, and has a rich configuration file enabling fine-tuned customization, especially for fonts and panel colors. Key Features: Well optimized Compatible with window managers Docklet support Great customization options How to Install Tint2: Debian/Ubuntu:  apt-get install tint2 Arch: pacman -S tint2 Gentoo:  emerge -av tint2 Dash to Dock While Latte is KDE-focused, Dash to Dock is designed for GNOME. It doesn't integrate well with other desktop environments, but GNOME users, especially those on Ubuntu and Fedora, will find it highly capable. Dash to Dock is lightweight, simple to configure, and offers the level of customization most users expect from a modern dock. Key Features: Lightweight Easy to configure Customizable Built for GNOME How to Install Dash to Dock: Arch:  Arch supports direct installation with Yay. Simply run the command: yay -S gnome-shell-extension-dash-to-dock Other distros: For other distributions, you need first to clone the package in GitHub: git clone https://github.com/micheleg/dash-to-dock.git   Then navigate to the directory: cd dash-to-dock And run these two commands consecutively: make sudo make install
30 May 2025 · 4 min to read
Infrastructure

Top Applications of Artificial Intelligence (AI) Across Industries

Today, artificial intelligence has already penetrated all spheres of our lives. Not long ago, it seemed that neural networks and artificial intelligence would not be able to perform most everyday human tasks. However, thanks to computational resources and machine learning algorithms, neural networks have learned not only to compose texts and solve mathematical equations but also to recognize objects in images and videos (for example, for autonomous vehicles), as well as to manage production lines and logistics (for example, to optimize delivery routes).  In today’s article, we will examine what artificial intelligence can do and what people use AI for in various areas of application. We will also explore real practical examples of using neural networks in everyday tasks. Introduction to the Application of Artificial Intelligence Artificial Intelligence (AI) is a branch of computer science that designs and creates systems intended to perform tasks that require human intelligence. Simply put, AI is a computer program that receives and analyzes data and then draws conclusions based on the results. AI is a multifunctional tool that covers a wide range of tasks: processing large volumes of data, learning, forecasting, speech, text, music recognition, and more. Today, the capabilities of artificial intelligence have become practically limitless. Here are some tasks where AI is already successfully applied and even replaces humans: Processing large volumes of data (Big Data). Automating various routine processes (for example, in IT). Recognizing and analyzing text, images, videos, sound, etc. Forecasting and modeling (for example, in finance or medicine). Personalization (for example, recommendation systems on streaming platforms and online stores). Managing complex systems (autonomous vehicles, logistics, robotics). This "explosion" in demand for AI is associated with the following advantages: Efficiency: significant acceleration of processes while reducing costs. 
Accuracy: minimizing human errors. Scalability: processing and analyzing enormous data volumes in real time. Innovation: AI can open new possibilities in fields such as medicine, transport, marketing. Accessibility of technology: with increased computing power and data volume, AI applications have become cheaper and more widespread, allowing penetration into many fields. Main Areas of AI Application Let’s look at what AI is being used for in various societal sectors. Medicine and Healthcare The medical and healthcare sector is one of the most promising areas for implementing neural networks and AI. The adoption and funding of AI in healthcare are continuously growing. For example, an analytical report by CB Insights noted a 108% global funding increase in 2021. Here are real examples of AI in medicine: In March 2025, an international group of scientists from the University of Hong Kong, InnoHK D24H lab, and the London School of Hygiene developed a special AI model for diagnosing thyroid cancer. Experiments showed the model’s accuracy exceeded 90%. One key benefit is nearly halving the time doctors spend preparing for patient appointments by analyzing medical documents using advanced tools like ChatGPT and DeepSeek. AI is also used beyond text data. For example, it can detect prostate cancer using MRI scans as input data. Major tech companies actively use AI in medical services. Google Health has developed an AI for analyzing mammograms to detect breast cancer. IBM, a pioneer in computing, is deploying AI to handle medical information and assist doctors in selecting personalized cancer treatments. IBM is also advancing generative AI chatbots (watsonx Assistant), which are used in healthcare. Finance and Banking The financial and banking sector is no exception. AI is widely used for forecasting (including risk assessment), detecting potential fraud, and offering clients personalized services and offers based on their spending patterns. 
Specially trained algorithms analyze transactions in real time, identifying suspicious and fraudulent activities. AI is well established in credit and mortgage markets, aiding credit scoring, market trend prediction, investment management, and trading. Some practical examples: Goldman Sachs, a major investment bank and financial conglomerate, employs smart assistants to help employees with tasks such as summarizing documents, editing emails, or translating texts. PayPal uses AI extensively to detect fraudulent transactions in real time, processing billions of operations annually. JPMorgan Chase uses the AI-powered Coin service to analyze legal documents, reducing document processing time from 360,000 hours per year to just seconds. Industry and Manufacturing In industry and manufacturing, AI primarily automates technological processes. It also handles equipment diagnostics and various tasks on assembly lines, helping companies reduce production costs, predict equipment failures, and minimize downtime. Siemens, a German conglomerate in electrical engineering, electronics, and energy equipment, uses AI to service its turbines by forecasting equipment failures and optimizing maintenance schedules. Major airlines such as Emirates and Delta Air Lines use the industrial software platform Predix for real-time predictive analytics. This AI usage has cut engine repair costs by 15% and reduced flight delays by 30% due to better failure prediction. French energy engineering company Schneider Electric employs Robotic Process Automation (RPA) to handle labor-intensive tasks related to preparing documents for switchboard operators and managing supply chains. Transport and Logistics In transportation, AI is heavily used in autonomous vehicles. AI processes data from cameras and radars to ensure safe movement. 
In logistics, AI focuses on optimizing delivery routes, performing analytics and forecasting, and managing warehouse inventories, thereby reducing costs and speeding up business processes. City transport authorities use AI to automatically assign drivers to routes or select buses for deployment on routes, taking passenger flow into account. Waymo, a manufacturer of autonomous vehicle technology, actively markets self-driving cars equipped with AI that are already transporting passengers in some U.S. cities. DHL, an international express delivery company, uses AI to optimize delivery routes, cutting time and costs. It also employs robotics extensively in warehouses and sorting centers. AI in Everyday Life AI and neural networks are not limited to large industries and companies. Millions of users worldwide use AI-integrated apps and services every day, including: Smart assistants: Voice assistants like Siri, Alexa, and Google Assistant use AI to process voice commands, answer questions, and control smart devices. They continuously learn to improve speech recognition and personalization. Streaming platforms: AI underpins recommendation systems on major platforms such as Netflix, YouTube, Amazon Prime, and Spotify. Algorithms analyze user preferences to suggest content likely to be enjoyed, increasing audience engagement and improving user experience. Natural language processing: AI is used in translators and chatbots—for example, translating between languages or providing customer support on airline and software manufacturer websites. Promising Directions for AI Development Although AI already handles many human tasks, its potential remains far from fully realized. Future trends in AI include: Quantum computing: Quantum computers promise to accelerate data processing dramatically, potentially leading to breakthroughs in AI. They will enable solving problems currently inaccessible even to the most powerful supercomputers, such as molecular modeling for pharmaceuticals. 
Neuromorphic technologies: Neuromorphic chips that mimic the human brain could make AI more energy-efficient and faster, especially valuable for IoT devices and autonomous systems. Ethical Aspects of AI Application Ethical issues arise with AI, such as algorithmic bias. Protecting data privacy is also crucial. Developing ethical standards for AI will be a key factor in the further use of neural networks and artificial intelligence. The Future of Artificial Intelligence According to some forecasts, by 2030, sectors already actively using AI will grow 3 to 5 times. Digital technology markets where AI is just gaining momentum will grow 6 to 11 times. The main global AI demand will come from retail, medicine, and transport, driven by the development of new solutions that facilitate production processes. Additional future trends include: Mass adoption of robotics: The widespread use of autonomous vehicles, drones, and robots will expand into more areas, including science and education. Mass use of AI in education: New platforms will emerge, offering personalized learning tailored to each student’s abilities and creating individualized study plans. Development of generative AI: This technology creates text, images, music, conversations, stories, and more. It will be especially valuable for companies engaged in multimedia production, product design, and creative industries. Limitations and Potential Risks Rapid AI development and widespread use have introduced many risks, including job losses, data leaks, and AI misuse in criminal and fraudulent activities. To mitigate these threats, some countries are implementing AI regulations. For example, the European Union’s AI Act, effective from February 2, 2025, bans AI systems posing risks to safety, health, or fundamental rights—except for national security cases. It specifically prohibits programs that assess and score human social behavior. 
Other limitations include the high cost of development, processing huge data volumes, and high energy consumption. Conclusion Today, we discussed various fields where neural networks and artificial intelligence are applied. In today’s reality, AI is everywhere—from algorithms in apps to complex production and healthcare systems. Despite widespread adoption, AI’s full potential is still unfolding, and we must prepare for the broader integration of new technologies into our lives.
29 May 2025 · 8 min to read
Infrastructure

Top Kubernetes Interview Questions and Answers

In today's tech landscape, the Kubernetes container orchestration platform is widely used across various projects. With its increasing popularity and widespread adoption, Kubernetes often comes up during interviews for certain IT roles, including DevOps, SRE, system administration, development, and operations. The questions can range from very simple ones about cluster components to more advanced topics like networking within the cluster and network policies. In this article, we’ll go over the top Kubernetes interview questions and provide detailed answers. What is Kubernetes? Kubernetes is an open-source platform for managing containerized applications. It enables the deployment, scaling, and management of containerized workloads and services. List the Main Components of a Kubernetes Cluster At the core of Kubernetes lies the Control Plane, which resides on the master node. The Control Plane includes the following components: kube-api-server – The API server processes REST requests and serves as the "brain" of the cluster. All interactions, including object creation and deletion, go through the API server, which also manages communication between cluster components. etcd – A highly available key-value store that saves configuration data and cluster state. It can be deployed externally for improved fault tolerance. etcd is an independent project maintained by a separate team. kube-scheduler – The component responsible for determining which nodes will run which pods. It monitors available resources on each node to balance workload distribution. kube-controller-manager – Runs controllers that monitor resources and ensure the cluster matches the desired state by making necessary changes. kube-proxy – A network service that acts as a load balancer. It distributes network traffic between pods and runs on every node in the cluster. What is a Pod in Kubernetes? A Pod is the smallest deployable unit in Kubernetes and serves as an abstraction for running containers. 
A pod usually contains one or more containers, its own IP address, and data storage. Kubernetes doesn’t interact directly with containers, but rather through pods. What is the difference between Deployment and StatefulSet? Both Deployment and StatefulSet are Kubernetes objects for managing applications, but they serve different purposes. Deployment: Used for managing stateless applications (e.g., web servers). Supports rolling updates for zero-downtime deployments. Pods are ephemeral with non-persistent names and IPs. No state persistence: when a pod is deleted, its data is lost. StatefulSet: Designed for stateful applications (e.g., databases). Pods have stable, unique names and identifiers that persist across restarts. Supports Persistent Volumes to retain data between restarts. Pods are created and terminated in a specific order, one at a time. In conclusion, data persistence is the main difference between a Deployment and a StatefulSet. Use Deployment if the application does not require state to be preserved. However, if the application needs to retain its state, then a StatefulSet is the appropriate choice. What is a Service in Kubernetes, and What are the Types? A Service in Kubernetes defines how to access a set of pods. It provides a stable IP and DNS name, allowing internal or external communication with pods. Types of Services: ClusterIP – The default type. Exposes the service on an internal IP, accessible only within the cluster. NodePort – Exposes the service on a specific port across all nodes. Allows external access via NodeIP:NodePort. LoadBalancer – Provisions an external load balancer (mainly in cloud environments) and assigns a public IP for external traffic distribution. ExternalName – Maps the service name to an external hostname or IP address using a DNS CNAME record. Works purely at the DNS level. What is Ingress in Kubernetes? 
Ingress is a Kubernetes object that defines rules for routing external HTTP/HTTPS traffic to internal services within the cluster. It enables fine-grained control over how traffic is handled and directed. What is an Ingress Controller? An Ingress Controller is a component that implements the Ingress rules. It typically consists of: A reverse proxy (e.g., Nginx, HAProxy) A controller that interacts with the Kubernetes API server to apply Ingress configuration and routing rules. The controller watches for changes to Ingress objects and configures the reverse proxy accordingly to handle incoming traffic. How to Store Sensitive Data (Secrets), Including Logins, Passwords, Tokens, and Keys? Kubernetes provides the Secret object for storing sensitive information. There are six types of secrets: Opaque – A general-purpose secret type used to store any data. Service Account Token – Used to work with service accounts by generating a JWT token. Typically, the token is automatically created when a service account object is created. Basic Auth – Stores login and password in Base64-encoded format. SSH Auth – Used for SSH authentication. The secret contains a pre-generated private key. TLS Certificates – Involves using certificates and their private keys, provided in the manifest's tls.crt and tls.key fields (Base64-encoded).  Bootstrap Token – A special token type used to add new nodes to the Kubernetes cluster safely. Secrets are usually injected into containers via volumeMount or secretKeyRef. You can also use external secret management tools like HashiCorp Vault. What Are Labels and Selectors, and What Are They Used For? Labels are key-value metadata that can be attached to any Kubernetes object. They help to identify attributes of objects that are not directly related to the running services but can provide useful information to users — for example, the purpose of a deployed application or the environment in which it will run. 
In other words, labels are meant to distinguish between different instances of objects. Selectors are used to filter or query objects based on their labels: a selector is a request to fetch objects that match specific label criteria.

What Are Probes in Kubernetes, What Types Exist, and What Are They Used For?

Probes in Kubernetes check the health and readiness of applications. There are three types:

- Liveness Probe – Checks whether a pod is running correctly. If the check fails, the pod is restarted automatically.
- Readiness Probe – Checks whether a pod is ready to receive network traffic. If it fails, the pod is excluded from load balancing, though it continues running.
- Startup Probe – Used for applications that take a long time to start. It checks the application's initial startup before liveness and readiness checks are activated.

What Is a Pod Disruption Budget (PDB), and What Is It Used For?

A Pod Disruption Budget is a Kubernetes feature that ensures a minimum number of pods remains available during voluntary disruptions (e.g., node maintenance or upgrades).

Example: if an application runs 3 replicas and can tolerate the loss of 1 pod, the PDB should specify that no more than 1 pod may be unavailable at any time. This prevents disruptions that would make the application non-functional.

How to Control Resource Usage in Containers?

Use requests and limits in your pod definitions:

- Requests define the minimum amount of CPU and memory required for a pod to be scheduled. If the cluster does not have enough resources, the pod will not be scheduled.
- Limits define the maximum amount of CPU and memory a pod can consume. The pod will be throttled or terminated if it exceeds these limits.

You can learn more about Kubernetes requests and limits in our article.

How to Expose an Application Running in Kubernetes to the External Network?

To provide external access to an application, you can use:

- Ingress Controller – The preferred method for managing HTTP/HTTPS access.
It routes traffic to services based on defined rules.

- NodePort – Opens a specific port on all nodes for external access.
- LoadBalancer – Provisions an external IP through a cloud load balancer.

What Is the CNI Interface?

CNI (Container Network Interface) is a specification maintained by the Cloud Native Computing Foundation that defines how network interfaces are managed in Linux containers. In Kubernetes, CNI is responsible for connecting pods to the network. CNI features are implemented through plugins; popular ones include:

- Calico
- Weave
- Flannel
- Cilium

What Is CRI?

CRI (Container Runtime Interface) is the primary communication interface between the kubelet component of a Kubernetes cluster and the container runtime. Using CRI, Kubernetes interacts with the container engine responsible for creating and managing containers (Kubernetes itself does not create containers directly). Popular container runtimes that implement CRI include containerd and CRI-O.

What Is a Persistent Volume (PV)?

A Persistent Volume (PV) is a Kubernetes object used to store data persistently across pod lifecycles. Volumes in Kubernetes are implemented via plugins, and the platform supports the following types:

- Container Storage Interface (CSI)
- Fibre Channel (FC)
- hostPath
- iSCSI
- Local Storage
- Network File System (NFS)

What Is a Persistent Volume Claim (PVC)?

A Persistent Volume Claim (PVC) is a user request for storage resources. It allows users to claim a portion of a Persistent Volume based on parameters such as requested size and access mode. PVCs enable dynamic provisioning of storage in Kubernetes: the cluster can automatically create a volume that matches the claim.

How to Assign Access Rights in a Kubernetes Cluster?

Kubernetes manages access control using RBAC (Role-Based Access Control). RBAC lets administrators define who can do what within the cluster using the following entities:

- Role – Defines a set of permissions within a specific namespace.
- RoleBinding – Assigns a Role to a user or group within a namespace.
- ClusterRole – Grants permissions across the entire cluster (not limited to a single namespace).
- ClusterRoleBinding – Binds a ClusterRole to users or groups across all namespaces.
- ServiceAccount – An identity used by Kubernetes workloads (pods) to interact with the API.

Conclusion

In this article, we covered common interview questions that candidates may encounter when applying for IT roles involving Kubernetes. These questions span a range of foundational and advanced topics, including architecture, security, networking, and storage in Kubernetes.
22 May 2025 · 9 min to read
