
Restricting Access in Kubernetes
Hostman Team
Technical writer
Kubernetes
19.08.2024
Reading time: 6 min

Deploying a Kubernetes cluster is relatively easy, even for beginners. However, maintaining its functionality is a different story. One of the key tasks here is managing access rights to prevent cluster issues. In this guide, we'll explore the most effective way to restrict access, minimizing the chances of cluster disruptions due to accidental configuration changes by inexperienced users. But first, let's cover some basics.

How Access Control Works in Kubernetes

Kubernetes access control is based on the concept of roles and permissions, known as Role-Based Access Control (RBAC). RBAC allows Kubernetes administrators to define who has access to which resources and operations within the cluster.

The following key entities are used for configuring Role-Based Access Control in Kubernetes:

  • Roles: Define what actions are permitted on specific resources (e.g., read, write, delete).

  • RoleBindings: Link roles to specific users, service accounts, or groups.

  • ServiceAccounts: Used to authenticate applications and services within the cluster.

With RBAC, administrators can control access to various Kubernetes resources, such as pods, services, and storage, based on the needs and roles of users or services. RBAC provides a flexible access management system that helps ensure security and control over the cluster.

RBAC allows you to set access controls at the Kubernetes cluster level (using ClusterRole and ClusterRoleBinding) or limit them within a specific namespace (using Role and RoleBinding).
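
For comparison, here is a minimal sketch of the cluster-wide variant: a ClusterRole granting read-only access to nodes (a cluster-scoped resource) and a ClusterRoleBinding attaching it to a user. The names below are illustrative, not part of the examples that follow.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader-bind
subjects:
- kind: User
  name: username # illustrative user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io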

Creating Roles and RoleBindings to Restrict Access

To create a Role and RoleBinding in Kubernetes, you need to create YAML files defining these objects. Below are specific examples. First, here's a code example for defining a Role (let's call it getlistwatch.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: getlistwatch
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

Now, here's an example for defining a RoleBinding (let's call it getlistwatch-bind.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: getlistwatch-bind
  namespace: default
subjects:
- kind: User
  name: username # Replace 'username' with the actual user's name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: getlistwatch
  apiGroup: rbac.authorization.k8s.io

You can apply these objects using the kubectl apply -f command. For our examples, it would look like this:

kubectl apply -f getlistwatch.yaml
kubectl apply -f getlistwatch-bind.yaml

In these examples, we created a Role named getlistwatch, which allows getting, listing, and watching pod resources in the cluster. We then created a RoleBinding named getlistwatch-bind, which links this role to a specific user. After applying these files, the user will be granted permission to perform the specified operations on pod resources in the cluster.

It's worth noting that this user will not be able to perform any other actions in the cluster unless other roles are assigned to them, which should be checked separately.
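
One quick way to check this is to query the authorization layer directly with kubectl auth can-i, impersonating the user (the name below is the same hypothetical username from the RoleBinding):

kubectl auth can-i list pods --namespace default --as username          # expected: yes
kubectl auth can-i delete pods --namespace default --as username        # expected: no
kubectl auth can-i create deployments --namespace default --as username # expected: no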

Authentication Methods

There are three primary methods:

  1. Basic authentication, with static credentials passed to the API server in its configuration.

  2. Client certificate authentication, with certificates signed by the Kubernetes certificate authority (CA).

  3. Authentication with a bearer token or JWT (for example, issued by an OIDC provider).

The first method is rarely used today, so let's move on to the second.

Kubernetes Certificate Authentication

In Kubernetes certificate authentication, each user or service receives its own certificate, which is used for authentication when attempting to access the Kubernetes cluster. The process usually involves the following steps:

  1. Generating Certificates: The cluster administrator generates certificates for each user/service using a certificate authority (CA) or certificate creation tools. For example, an RSA private key is created for the user, and a certificate signing request (CSR) is then submitted to the CA for signing.

  2. Configuring Authentication: The administrator adds the generated certificates to the Kubernetes configuration, specifying which users/services have access to which resources in the cluster. This is done through kubeconfig, generated for each user/service, with the signed certificate added.

  3. Creating Roles: At this stage, a Role is created and then linked to the user/service through a RoleBinding (as shown in the code above).

Now, when attempting to access the cluster, the user/service must provide their certificate for verification. Kubernetes uses this certificate to authenticate and determine access permissions. If the certificate is successfully verified, the user/service is granted the appropriate permissions to perform operations in the cluster. There are also automation tools for this process, such as bash scripts or Ansible.
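
As an illustration of steps 1–3, here is a minimal sketch using openssl and the built-in CertificateSigningRequest API (available in current Kubernetes versions). The user name, group, and cluster name are placeholders; adapt them to your environment.

# Step 1: generate a private key and a CSR for the user
openssl genrsa -out username.key 2048
openssl req -new -key username.key -out username.csr -subj "/CN=username/O=developers"

# Step 2: submit the CSR to the cluster, approve it, and download the signed certificate
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: username-csr
spec:
  request: $(base64 -w0 < username.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve username-csr
kubectl get csr username-csr -o jsonpath='{.status.certificate}' | base64 -d > username.crt

# Add the user to a kubeconfig (the cluster entry is assumed to exist already)
kubectl config set-credentials username --client-key=username.key --client-certificate=username.crt --embed-certs=true
kubectl config set-context username-context --cluster=my-cluster --namespace=default --user=username

# Step 3: bind the getlistwatch Role created earlier to this user
kubectl apply -f getlistwatch.yaml
kubectl apply -f getlistwatch-bind.yaml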

This method allows you to create a set of standard roles but introduces the challenge of managing access for numerous users/services and the complexity of certificate revocation. Therefore, in many cases, it's better and safer to use third-party authentication services like DEX and Keycloak, which provide secure authentication via OIDC (OpenID Connect).

To ensure your RBAC policies are solid, leverage the health-check techniques from the Kubernetes Cluster Health Checks tutorial—verify API-server readiness and component statuses, then scan recent events for “Unauthorized” entries with a simple kubectl get events filter. These quick checks help you catch misconfigurations before they become incidents.
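
A couple of quick commands along those lines (the event filter assumes such entries are actually emitted in your cluster; RBAC denials often appear only in API server audit logs):

# Check API server readiness and core component health
kubectl get --raw='/readyz?verbose'
kubectl get componentstatuses

# Look for authorization-related entries in recent events
kubectl get events --all-namespaces | grep -i unauthorized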

Authentication via DEX

One of the key advantages of DEX is its ease of use. However, setting up DEX requires the creation of certificates for both DEX and Gangway, which work in tandem as they communicate through TLS. When deployed within a Kubernetes cluster, entities such as dex.example.com and gangway.example.com will be created, for which certificates are needed. Don't forget to monitor certificate expiration dates programmatically or through cert-manager, as they are time-limited. Cert-manager can even automatically renew them. DEX is installed via Helm Chart; all its settings are contained in a ConfigMap.
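
As a rough sketch of what that ConfigMap-driven configuration looks like, here is an illustrative values.yaml fragment for the Dex Helm chart. The issuer, redirect URI, client secret, and connector settings are placeholders, and the exact key layout depends on the chart version you install.

# values.yaml (illustrative fragment for the Dex Helm chart)
config:
  issuer: https://dex.example.com
  storage:
    type: kubernetes
    config:
      inCluster: true
  staticClients:
    - id: gangway
      name: Gangway
      redirectURIs:
        - https://gangway.example.com/callback
      secret: REPLACE_WITH_SHARED_SECRET
  connectors:
    - type: ldap            # or github, oidc, saml, etc.
      id: corporate-ldap
      name: Corporate LDAP
      config:
        host: ldap.example.com:636

# Install or upgrade with:
#   helm repo add dex https://charts.dexidp.io
#   helm upgrade --install dex dex/dex -n auth --create-namespace -f values.yaml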

Authentication via Keycloak

One of Keycloak's advantages is that it has its own web interface, unlike DEX. It also supports a larger number of backends and can work with more than just Kubernetes. Additionally, Keycloak is ideal for managing user access to multiple applications, as it's designed to work as an SSO (Single Sign-On) server. However, this comes at the cost of a higher learning curve, as even experienced developers unfamiliar with Keycloak will need to study its extensive documentation first.

How Third-Party Authentication Services Work

The general flow looks like this:

  1. The user opens the Gangway form and is redirected to the DEX/Keycloak authorization page, where they enter their credentials.

  2. The application checks the correctness of the entered data.

  3. If the data is correct, DEX/Keycloak returns authentication tokens to Gangway. This process is automated and invisible to the user.

  4. The user can then download the generated kubeconfig with access settings based on the received data.

  5. This kubeconfig is used to send requests directly to the Kubernetes API server, which validates the tokens issued by DEX/Keycloak.
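
For the API server to accept those tokens, it must be started with OIDC flags pointing at the DEX/Keycloak issuer. A minimal sketch with placeholder values:

# kube-apiserver flags (placeholder values for a DEX/Keycloak installation)
--oidc-issuer-url=https://dex.example.com
--oidc-client-id=gangway
--oidc-ca-file=/etc/kubernetes/pki/oidc-ca.crt
--oidc-username-claim=email
--oidc-groups-claim=groups

The user name taken from the email claim (and the groups claim, if configured) is what you then reference in RoleBindings and ClusterRoleBindings.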

Which Method to Choose

To conclude, here are some factors to help you choose between Kubernetes certificates, DEX, and Keycloak:

  • Choose certificate authentication if the project is small and has few users; with many users, tracking and revoking individual certificates quickly becomes inconvenient.

  • Choose DEX if you need access settings only for the cluster, without additional backends.

  • Choose Keycloak if you need to configure access for multiple unrelated applications for individual users.


