
Load Balancing in Kubernetes

Hostman Team
Technical writer
Kubernetes
24.11.2023
Reading time: 9 min

Load balancing in Kubernetes covers the various ways of directing incoming traffic to specific servers in the cluster, distributing the load evenly and simplifying scaling tasks.

The main benefit of balancing is avoiding application downtime: it prevents both planned downtime during the rollout of a new software version and unplanned downtime caused by hardware failures.

In this article, we'll look at how load balancing stabilizes a Kube cluster and increases application availability. Since Kubernetes services are the building blocks of load balancing, we will first show how they work and then walk through specific balancing examples.

But first of all, let's talk about how Kube implements pod tracking, which makes the balancing itself much easier.

Tracking pods in Kubernetes with and without a selector

Pods in Kubernetes are temporary objects that get a new IP each time they are started. After a task is completed, they are destroyed and then re-created on a new deployment. Without Kubernetes service tools, we would have to keep track of the IPs of all active pods, which would be a very complicated task, especially as our application scales. The Kube service solves this problem thanks to the selector. Let's take a look at this code (replace the values with your actual ones):

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      name: hostmanapp
      port: 5428
      targetPort: 5428

The selector ensures that services are correctly matched with their associated pods. When the service finds a pod with a matching label, it adds the pod's IP to its Endpoints object. Endpoints track the IP addresses of all matching pods and update automatically; each service creates its own Endpoints object.

This article won't go into too much detail about Endpoints. Just remember that Endpoints keeps the list of pod IP addresses up to date so that the Kube service can route its traffic.
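To see which addresses a service is currently tracking, you can inspect its Endpoints object directly (a quick check, assuming the hostmanapp service above has been created):

kubectl get endpoints hostmanapp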

Defining a service using a selector is the most common method, but we can also do it without one. For example, when migrating an application to Kube, we can evaluate its behavior without migrating the backend server itself. Let's point a service at an existing application hosted on the old server:

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp-without-ep
spec:
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428
And then:
apiVersion: v1
kind: Endpoints
metadata:
  name: hostmanapp-without-ep
subsets:
  - addresses:
      - ip: x.x.x.x # specify the IP
    ports:
      - port: 5428

This links the name hostmanapp-without-ep to the IP address of the external hostmanapp server.

Kubernetes services

By default, Kube always creates a ClusterIP service. In total, there are four types of services, each designed for its own tasks; together, they provide quite flexible load balancing. Let's take a look at all of them, with code examples for customization.

ClusterIP

Designed for intra-cluster communication between applications. It is configured like this (the application values are random; you should replace them with your own):

apiVersion: v1
kind: Service # mandatory line to define any service
metadata:
  name: hostmanapp
spec:
  type: ClusterIP
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428

NodePort

An external service that exposes pods on every host via a persistent port, defined separately by the nodePort field below (all values are also random; replace them with your own):

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  type: NodePort
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428
      nodePort: 32157
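Once applied, the service is reachable from the private network on any node's IP at the static port (the node address below is a placeholder):

curl http://<node-ip>:32157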

LoadBalancer

LoadBalancer relies on the cloud provider's infrastructure to expose a service externally, for example, to route traffic to a website. Here is the code for launching it:

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  type: LoadBalancer
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428

ExternalName

This service type provides access to an out-of-cluster resource by mapping the service name to an external DNS name. The way to do it is simple:

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  type: ExternalName
  externalName: hostmanapp.mydomain.com

Note that any service gets a DNS name created using this pattern: service-name.namespace-name.svc.cluster.local. This record points to the service's cluster IP; for services without one (headless services), DNS returns the IPs of the individual pods instead.
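For example, a pod with DNS utilities installed could resolve the hostmanapp service created earlier (assuming it lives in the default namespace):

nslookup hostmanapp.default.svc.cluster.local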

Types of load balancing through Kube services

As we have seen from the descriptions of all four Kube services, you can organize load balancing in different ways. Let's start by describing how it is done inside the cluster.

Intra-cluster balancing

The ClusterIP service is intended for intra-cluster balancing (you can find the code for configuring this and other Kube services above). It is suitable, for example, for organizing the interaction of separate groups of pods located within one Kube cluster. You can provide access to the service in two ways: through DNS or through environment variables.

Above, we have already described the DNS method. Let's add that it is the most common and recommended way of interaction between microservices. But note that DNS works in Kube only with a DNS-server add-on: for example, CoreDNS.

As for environment variables, kubelet sets them for each active service when it starts a new pod. The variable names are derived from the service name, uppercased and with dashes converted to underscores:

SERVICE_NAME_SERVICE_HOST
SERVICE_NAME_SERVICE_PORT
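For the hostmanapp service above, a pod would see variables roughly like these (the IP is illustrative):

HOSTMANAPP_SERVICE_HOST=10.96.0.15
HOSTMANAPP_SERVICE_PORT=5428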

External balancing

It can be performed using NodePort (hereafter NP) and LoadBalancer (hereafter LB). NP is suitable for balancing a limited number of services and has the advantage of providing connectivity without a dedicated external balancing tool.

The first limitation of NP is that it is suitable only for a private network; you can't expose a service to the Internet via NP. Another disadvantage is that it only works over static ports in a limited range (30000-32767 by default), and the service must open the same port on every host. This becomes problematic when the application scales to multiple microservices.

The LB provides a public IP or DNS name to which external users can connect. Traffic flows from the LB to the matched service on the assigned port and is then redirected to the worker pods. Note, however, that the LB has no direct mapping to the pods.

Let's see how to create an external LB. In the example below, we will start a pod and connect to its GUI from an external network. Note that the LB does not filter incoming or outgoing traffic; it is just a proxy that redirects traffic from the external network to the appropriate pods/services. Create a cluster with your provider CLI's create cluster command and the following parameters (the flags below follow eksctl's syntax; the name and region values are placeholders, and you must enter your own SSH public key in the ssh-public-key field):

--name myHostmanCluster 
--region my-hostman-region 
--with-oidc 
--ssh-access 
--ssh-public-key <xxxxxxxxxx> 
--managed
Now we edit the pod's YAML file (hostmanpod.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: hostmanpod
  labels:
    app: hostmanpod
spec:
  containers:
    - name: hostmanpod
      image: hostmanpod:latest

Then create a pod:

kubectl apply -f hostmanpod.yaml

Let's make sure it works:

kubectl get pods --selector='app=hostmanpod'
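If the pod is up, the output will show it in the Running state (the age value is illustrative):

NAME         READY   STATUS    RESTARTS   AGE
hostmanpod   1/1     Running   0          30s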

Now activate the LB by creating hostmanpod-svc.yaml (again, the values in the code samples are random):

apiVersion: v1
kind: Service
metadata:
  name: hostmanpod-ext-serv
spec:
  type: LoadBalancer
  selector:
    app: hostmanpod
  ports:
    - name: hostmanpod-admin
      protocol: TCP
      port: 14953
      targetPort: 14953

Next, start the service and confirm:

kubectl apply -f hostmanpod-svc.yaml

service/hostmanpod-ext-serv created

kubectl get svc
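The output will look roughly like this (the cluster IP, port mapping, and age are illustrative):

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP                                              PORT(S)           AGE
hostmanpod-ext-serv   LoadBalancer   10.100.47.213   b9f305e6d743a85cb32f48f6a210cb51.my-hostman-region.com   14953:31842/TCP   2m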

Copy the obtained DNS name into the browser and append the port specified in the code above. Let's assume we got this DNS name:

http://b9f305e6d743a85cb32f48f6a210cb51.my-hostman-region.com

Then we should paste the following into the browser:

http://b9f305e6d743a85cb32f48f6a210cb51.my-hostman-region.com:14953

Now you can share this address with anyone who wants to connect to your administrator account. As you can see, creating and configuring an external load balancer for an application is quite easy. 

LoadBalancer indeed has some limitations, but Ingress helps to bypass them.

Balancing via Ingress

We have seen that LoadBalancer provisions a separate balancer instance for each service. This is fine as long as we have a few services, but as their number grows, they become difficult to manage. LB also does not support URL routing, SSL, and more. This is where Ingress comes to the rescue as an extension for NP and LB. Ingress inspects incoming traffic and determines which pods or services to forward it to. The main function of Ingress is load balancing, but it can also perform URL routing, SSL termination, and several other functions. There are quite a few possible Ingress configurations, and they are easy to find online. Here is one for illustrative purposes:

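A minimal sketch of such a manifest (the hostname is illustrative; the backend points to the hostmanapp service defined earlier):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hostmanapp-ingress
spec:
  rules:
    - host: hostmanapp.mydomain.com # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hostmanapp # service defined earlier
                port:
                  number: 5428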

The example above defines the rules by which traffic from end users will flow. We should also add that Ingress is not a Kube service like LB or NP, but a set of rules used by these services. In addition, a cluster using Ingress needs an Ingress Controller. There are a few of these controllers, and you can check out some popular solutions: AWS ALB, NGINX, Istio, Traefik.

Different controllers have different features and capabilities, so you will have to evaluate them based on your requirements. But whichever controller you use, Ingress will greatly simplify the configuration and management of routing rules and help you implement SSL-based traffic. And, of course, like traditional tools, Ingress controllers support a variety of balancing algorithms.
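For instance, with the ingress-nginx controller, the balancing algorithm can be chosen per Ingress through an annotation (this assumes ingress-nginx is installed; round_robin is one of its supported values):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/load-balance: "round_robin"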

Conclusion

We learned about the differences between Kubernetes services, how to access them, how to organize intra-cluster and external balancing, and got acquainted with additional tools. To effectively use Kube services, you need to understand which of them is optimal for your tasks and configure it accordingly. It will save a lot of debugging time and ensure trouble-free operation of your applications and services.
