To ensure the stable operation of applications and evenly distribute traffic among pods in Kubernetes, a load balancer is used. It helps avoid overloading individual pods, maintaining high availability and stability of services.
To create a load balancer in Kubernetes, we define a Service resource with the type LoadBalancer. Below is an example of a basic manifest:
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80          # External port for accessing the application
      targetPort: 80    # Pod port to which traffic is redirected
  type: LoadBalancer
In this example, the load balancer redirects traffic from port 80 to port 80 inside the pods matching the selector app.kubernetes.io/name: nginx.
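For example, assuming the manifest above is saved as example-balancer.yaml (the filename is arbitrary), you can apply it and watch until an external IP address is assigned:
kubectl apply -f example-balancer.yaml
kubectl get service example-balancer -n kubernetes-dashboard --watch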
If you need to add multiple rules for balancing, ensure that you specify the name attribute for each port:
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  type: LoadBalancer
The value of the name attribute can be arbitrary.
For each port, you can specify the traffic protocol using the appProtocol attribute. This explicitly defines how the load balancer will handle traffic. By default, the value is proto-tcp.
The following values are supported:
- k8s.hostman.com/proto-http: standard HTTP traffic
- k8s.hostman.com/proto-https: HTTPS traffic
- k8s.hostman.com/proto-tcp: TCP traffic
- k8s.hostman.com/proto-tcp-ssl: TCP traffic with TLS support
- k8s.hostman.com/proto-http2: HTTP/2 traffic
After creation, the load balancer will be visible in the management panel under the Load Balancers section, with a k8s label.
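For example, the multi-port manifest above could declare the protocol for each port explicitly (a sketch; choose the values that match your application):
ports:
  - port: 80
    targetPort: 80
    name: http
    appProtocol: k8s.hostman.com/proto-http
  - port: 443
    targetPort: 443
    name: https
    appProtocol: k8s.hostman.com/proto-https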
A load balancer created through Kubernetes can only be modified via kubectl, not through the control panel or API.
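For instance, to change such a service you would edit its manifest and re-apply it, or edit the live object directly:
kubectl edit service example-balancer -n kubernetes-dashboard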
For more flexible load balancer configuration in Kubernetes, you can use additional parameters. These are specified as labels or annotations in the Service manifest.
Here’s an example manifest with parameters set via labels:
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
  labels:
    k8s.hostman.com/attached-loadbalancer-algo: "leastconn"
    k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip: "true"
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      appProtocol: k8s.hostman.com/proto-http
      targetPort: 80
  type: LoadBalancer
Example manifest with parameters set via annotations:
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
  labels:
    app: nginx
  annotations:
    k8s.hostman.com/attached-loadbalancer-algo: "leastconn"
    k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip: "true"
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      appProtocol: k8s.hostman.com/proto-http
      targetPort: 80
  type: LoadBalancer
In these examples, two additional parameters are defined:
- k8s.hostman.com/attached-loadbalancer-algo is the load balancing algorithm. Here: leastconn (selects the server with the fewest active connections).
- k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip assigns an external IP address to the load balancer with DDoS protection.
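After applying either manifest, you can check that the labels or annotations were recorded on the service:
kubectl describe service example-balancer -n kubernetes-dashboard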
Below is a table summarizing the available parameters for configuring a load balancer. Each parameter is specified as a label in the Service manifest:
| Parameter | Purpose |
| --- | --- |
|  | Specifies the load balancer configuration. The default is the minimum configuration for the zone. Get preset IDs via API. |
| k8s.hostman.com/attached-loadbalancer-algo | Balancing algorithm: roundrobin (the default) or leastconn. |
|  | Interval between health checks (in seconds). |
|  | Timeout for health checks (in seconds). |
|  | Number of failed checks before marking an upstream as unavailable. |
|  | Number of successful checks needed to recover an upstream. |
|  | Disables a public external IP for the load balancer. |
|  | Excludes the service from Hostman load balancing. Useful when operating with a different LoadBalancer. |
| k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip | Assigns an external IP with DDoS protection. |
|  | Enables proxy mode for the load balancer. |
|  | Timeout for establishing a TCP connection with the upstream (in milliseconds). |
|  | Timeout for receiving new TCP segments from the client (in milliseconds). |
|  | Timeout for waiting for a response from the backend (in milliseconds). |
|  | Timeout for executing an HTTP request (in milliseconds). |
|  | Maximum number of connections the load balancer can handle on the frontend. |
|  | Enables automatic SSL certificate issuance. If set to false, the certificate will be deleted. |
|  | The domain for which the SSL certificate should be issued. |
To demonstrate the functionality of a load balancer, we will create two Nginx deployments, each displaying its own HTML page. The load balancer will distribute requests between the pods, showing one of the pages depending on which pod handles the request.
To simplify management and allow quick removal of all resources associated with the load balancer, we will create a separate namespace. This makes testing and resource cleanup easier while keeping the main cluster clean.
Run the following command to create the namespace:
kubectl create namespace test-namespace
After creation, use this namespace for all subsequent resources, including the load balancer, deployments, and ConfigMap. To do this, add the line namespace: test-namespace to every manifest related to the example.
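Optionally, you can also make test-namespace the default for your current kubectl context, so you do not have to pass -n test-namespace with every command:
kubectl config set-context --current --namespace=test-namespace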
We will start by creating a ConfigMap to store two HTML pages. Pod 1 will display a page with the heading "Pod 1," and Pod 2 will display a page with the heading "Pod 2." These pages will be mounted into the Nginx containers inside the pods.
nginx-pages-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-pages
  namespace: test-namespace
data:
  index-page1.html: |
    <html>
    <body>
    <h1>Pod 1</h1>
    <p>This is the page served by Pod 1.</p>
    </body>
    </html>
  index-page2.html: |
    <html>
    <body>
    <h1>Pod 2</h1>
    <p>This is the page served by Pod 2.</p>
    </body>
    </html>
Here we create a ConfigMap with two HTML files: index-page1.html and index-page2.html. These files will be mounted into the Nginx pods, allowing each pod to display its specific page.
Apply the ConfigMap:
kubectl apply -f nginx-pages-configmap.yaml
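You can verify that the ConfigMap exists and contains both pages:
kubectl describe configmap nginx-pages -n test-namespace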
Next, we create two deployments, each using a different HTML page from the ConfigMap. The deployments use the selector app: nginx, which the load balancer will use to identify pods participating in traffic distribution.
nginx-deployment-pod1.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod1
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nginx-pages
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-page1.html
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-pages
          configMap:
            name: nginx-pages
This deployment creates a single pod (replicas: 1) with the Nginx image, which mounts the index-page1.html file from the ConfigMap at the path /usr/share/nginx/html/index.html. Port 80 is open for accessing the page.
Apply the deployment:
kubectl apply -f nginx-deployment-pod1.yaml
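To confirm the pod has started, you can list the pods carrying the app: nginx label:
kubectl get pods -n test-namespace -l app=nginx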
nginx-deployment-pod2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod2
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nginx-pages
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-page2.html
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-pages
          configMap:
            name: nginx-pages
This deployment also creates an Nginx pod but mounts the index-page2.html file, which has different content.
Apply the second deployment:
kubectl apply -f nginx-deployment-pod2.yaml
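At this point, both deployments should report one ready replica each:
kubectl get deployments -n test-namespace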
Now, create a load balancer that will direct requests to pods with the label app: nginx.
nginx-loadbalancer.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  namespace: test-namespace
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      appProtocol: k8s.hostman.com/proto-http
  type: LoadBalancer
In this Service, we specify type: LoadBalancer, which creates a load balancer, and the selector app: nginx, which directs requests to the Nginx pods from our deployments. Requests to the load balancer are distributed among the pods using the roundrobin algorithm, which is the default.
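If you wanted the load balancer to use the leastconn algorithm instead, you could add the balancing label described earlier to the service metadata, for example (a sketch based on the label shown above):
metadata:
  name: nginx-loadbalancer
  namespace: test-namespace
  labels:
    k8s.hostman.com/attached-loadbalancer-algo: "leastconn"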
Apply the load balancer:
kubectl apply -f nginx-loadbalancer.yaml
After creating the load balancer, you can find its public IP address in the control panel or by running the command:
kubectl get services -n test-namespace
Accessing this IP address will display a page served by one of the pods. Each time you refresh the page, the request may be routed to a different pod, so the displayed page can change between "Pod 1" and "Pod 2."
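For example, you can observe the switching from the command line by requesting the page several times (replace <EXTERNAL-IP> with the address obtained in the previous step):
for i in 1 2 3 4 5 6; do curl -s http://<EXTERNAL-IP>/ | grep '<h1>'; done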
Once you have verified the load balancer’s functionality, you can delete all the created pods and resources. Use the following commands:
kubectl delete service nginx-loadbalancer -n test-namespace
kubectl delete deployment nginx-pod1 -n test-namespace
kubectl delete deployment nginx-pod2 -n test-namespace
kubectl delete configmap nginx-pages -n test-namespace
These commands will remove the load balancer, pod deployments, and the ConfigMap created earlier.
Alternatively, delete the entire namespace:
kubectl delete namespace test-namespace
This method will automatically remove all resources associated with the test environment.
If a load balancer fails to obtain an external IP address during creation, the following annotation will appear in the service:
k8s.hostman.com/attached-loadbalancer-ensuring-error: true
This means something went wrong while binding the external IP. In this case, we recommend recreating the load balancer or contacting technical support.
To find load balancers that failed to obtain an address, list all services of type LoadBalancer and then inspect the affected service; the k8s.hostman.com/attached-loadbalancer-ensuring-error: true annotation will appear in its description:
kubectl get svc --all-namespaces --field-selector spec.type=LoadBalancer
kubectl describe svc <service-name> -n <namespace>