
Kubernetes Cluster: Installation, Configuration, and Management

Hostman Team
Technical writer
Kubernetes
22.08.2024
Reading time: 7 min

Kubernetes, or K8s, is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation. The core idea is that you install Kubernetes on a server, or more commonly on a cluster of servers, and deploy your workloads onto it. Kubernetes handles container creation, scaling, namespaces, access rights, and more. You interact with the cluster primarily through YAML configuration files.

This tutorial will guide you through creating and deploying a Kubernetes cluster locally.

Creating Virtual Machines

We will set up the Kubernetes cluster on two virtual machines: one acting as the master node and the other as a worker node. While deploying a cluster with only two nodes is not practical for real-world use, it is sufficient for educational purposes. If you wish to create a Kubernetes cluster with more nodes, simply repeat the process for each additional node.

  1. We will use Oracle's VirtualBox to create the virtual machines; you can download it from the official VirtualBox website. After installation, proceed to create the virtual machines.
  2. For the operating system, we will use Ubuntu Server, which you can download from the official Ubuntu website. After downloading, open VirtualBox.
  3. Click "Create" in VirtualBox to create a new virtual machine. The default settings are mostly sufficient, but allocate 3 GB of RAM and 2 CPUs to the master node (Kubernetes requires at least 2 CPUs for the node that manages the cluster) and 2 GB of RAM to the worker node. Create two virtual machines this way.
  4. After creating the virtual machines, attach the Ubuntu Server installation image. Go to "Storage" and click "Choose/Create a Disk Image."
  5. Click "Add" and select the downloaded Ubuntu Server ISO. Then start both machines and install the operating system by selecting "Try or Install Ubuntu." During installation, create a user on each system and accept the default settings.
  6. After installation, shut down both virtual machines and go to their settings. In the "Network" section, change the connection type to "Bridged Adapter" for each system so that the virtual machines can communicate with each other over the network.

System Preparation

Network Configuration

Set the node names for the cluster. On the master node, execute the following command:

sudo hostnamectl set-hostname master.local

On the worker node, execute:

sudo hostnamectl set-hostname worker.local

If there are multiple worker nodes, assign each a unique name: worker1.local, worker2.local, and so on.

To ensure that nodes are accessible by name, modify the hosts file on each node. Add the following lines:

192.168.43.80     master.local master
192.168.43.77     worker.local worker

Here, 192.168.43.80 and 192.168.43.77 are the IP addresses of each node. To find the IP address, use the ip addr command:

ip addr

Locate the IP address next to inet. Open the hosts file and make the necessary edits:

sudo nano /etc/hosts

To verify that the VMs can communicate with each other, ping the nodes:

ping 192.168.43.80

If successful, you will receive a response similar to this:

PING 192.168.43.80 (192.168.43.80) 56(84) bytes of data.
64 bytes from 192.168.43.80: icmp_seq=1 ttl=64 time=0.054 ms

You can enforce pod-to-pod isolation by applying the patterns from the Kubernetes Cluster Network Policies tutorial: defining ingress and egress rules, and using pod and namespace selectors to restrict which workloads may talk to each other.
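For illustration, a minimal deny-all ingress policy of that kind might look like the sketch below. The demo namespace is hypothetical, and note that Flannel, the CNI plugin installed later in this tutorial, does not enforce NetworkPolicy objects; you would need a policy-capable plugin such as Calico for the rules to take effect.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}        # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules are listed, so all inbound pod traffic is denied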

Updating Packages and Installing Additional Utilities

Next, install the necessary utilities and packages on each node. Unless stated otherwise, each step applies to every node. Start by updating the package list and upgrading the system:

sudo apt-get update && sudo apt-get upgrade -y

Then install the following packages:

sudo apt-get install curl apt-transport-https git iptables-persistent -y

Swap File

Kubernetes will not start with an active swap file, so it needs to be disabled:

sudo swapoff -a

To prevent it from reactivating after a reboot, modify the fstab file:

sudo nano /etc/fstab

Comment out the line with #:

# /swap.img      none    swap    sw      0       0
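
Alternatively, you can comment out the swap entry non-interactively. A one-liner sketch, assuming the entry contains the word swap surrounded by whitespace:

sudo sed -ri '/\sswap\s/s/^/#/' /etc/fstab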

Kernel Configuration

Load additional kernel modules:

sudo nano /etc/modules-load.d/k8s.conf

Add the following two lines to k8s.conf:

br_netfilter
overlay

Now, load the modules into the kernel:

sudo modprobe br_netfilter
sudo modprobe overlay

Verify the modules are loaded successfully:

sudo lsmod | egrep "br_netfilter|overlay"

You should see output similar to this:

overlay               147456  0
br_netfilter           28672  0
bridge                299008  1 br_netfilter

Create a configuration file so that netfilter processes traffic passing through the bridge:

sudo nano /etc/sysctl.d/k8s.conf

Add the following two lines:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
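
kubeadm's preflight checks also require IPv4 packet forwarding to be enabled. If the /proc/sys/net/ipv4/ip_forward check fails during cluster initialization, add this line to the same file:

net.ipv4.ip_forward = 1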

Apply the settings:

sudo sysctl --system

Docker Installation

Run the following command to install Docker:

sudo apt-get install docker.io -y

For more details on installing Docker on Ubuntu, refer to the official Docker documentation.

After installation, enable Docker to start on boot and restart the service:

sudo systemctl enable docker
sudo systemctl restart docker
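
Note that Kubernetes 1.24 removed the built-in dockershim, so the kubelet talks to a CRI runtime such as containerd directly; installing docker.io pulls in containerd as a dependency, and kubeadm can use it. If kubeadm later complains about the container runtime, regenerating containerd's default configuration (which enables its CRI plugin) usually resolves it:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd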

Kubernetes Installation

Add the GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Next, create a repository configuration file:

sudo nano /etc/apt/sources.list.d/kubernetes.list

Add the following entry:

deb https://apt.kubernetes.io/ kubernetes-xenial main
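
Note: the apt.kubernetes.io repository and its GPG key have since been deprecated and taken offline. On current systems, use the community-owned pkgs.k8s.io repository instead; a sketch pinning the v1.29 minor release (adjust the version to the one you need):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list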

Update the apt-get package list:

sudo apt-get update

Install the following packages:

sudo apt-get install kubelet kubeadm kubectl
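
To keep an unattended upgrade from breaking the cluster later, you can also pin these packages, a step recommended in the official installation guide:

sudo apt-mark hold kubelet kubeadm kubectl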

Installation is now complete. Verify the Kubernetes client version:

sudo kubectl version --client 

The output should be similar to this:

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2"}

Cluster Configuration

Master Node

Run the following command for the initial setup and preparation of the master node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr flag specifies the subnet for the internal pod network; 10.244.0.0/16 is the default subnet expected by the Flannel CNI plugin installed below.

The process will take a few minutes. Upon completion, you will see the following message:

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.43.80:6443 --token f7sihu.wmgzwxkvbr8500al \
--discovery-token-ca-cert-hash sha256:6746f66b2197ef496192c9e240b31275747734cf74057e04409c33b1ad280321

Save this command to connect the worker nodes to the master node.
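
If you lose this output, you can regenerate a valid join command on the master node at any time:

sudo kubeadm token create --print-join-command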

Point the KUBECONFIG environment variable at the admin config so kubectl can reach the cluster:

export KUBECONFIG=/etc/kubernetes/admin.conf
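
This export lasts only for the current shell session and requires permission to read admin.conf. For a persistent per-user setup, the kubeadm output suggests copying the config instead:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config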

Install the Flannel Container Network Interface (CNI) plugin (the project now lives in the flannel-io GitHub organization):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
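
Before joining workers, you can confirm that the Flannel and CoreDNS pods reach the Running state (the Flannel namespace varies between versions, so listing all namespaces is the safest check):

kubectl get pods --all-namespaces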

Worker Node

On the worker node, run the kubeadm join command obtained during the master node setup. After this, on the master node, enter:

kubectl get nodes

The output should be:

NAME                 STATUS      ROLES                        AGE    VERSION
master.local          Ready      control-plane,master          10m    v1.24.2
worker.local          Ready      <none>                        79s    v1.24.2

The cluster is now deployed and ready for operation.
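
As an optional smoke test, schedule a workload and confirm it lands on the worker node (the deployment name nginx here is just an example):

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide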

Conclusion

Setting up a Kubernetes cluster involves several steps, from creating and configuring virtual machines to installing and configuring the necessary software components. This tutorial provided a step-by-step guide to deploying a basic Kubernetes cluster on a local environment. While this setup is suitable for educational purposes, real-world deployments typically involve more nodes and more complex configurations. Kubernetes provides powerful tools for managing containerized applications, making it a valuable skill for modern IT professionals. By following this guide, you've taken the first steps in mastering Kubernetes and its ecosystem.
