Load Balancing

Updated on 28 November 2024

To ensure the stable operation of applications and evenly distribute traffic among pods in Kubernetes, a load balancer is used. It helps avoid overloading individual pods, maintaining high availability and stability of services.

Basic Load Balancer Configuration

To create a load balancer in Kubernetes, we define a Service resource with the type LoadBalancer. Below is an example of a basic manifest:

apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80            # External port for accessing the application
      targetPort: 80      # Pod port to which traffic is redirected
  type: LoadBalancer

In this example, the load balancer redirects traffic from port 80 to port 80 inside the pods matching the selector app.kubernetes.io/name: nginx.
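If you save this manifest as, for example, example-balancer.yaml (the filename is arbitrary), you can apply it and check the assigned address:

kubectl apply -f example-balancer.yaml
kubectl get service example-balancer -n kubernetes-dashboard

The EXTERNAL-IP column shows the address of the load balancer once it has been provisioned.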

If you need to balance traffic on multiple ports, make sure to specify the name attribute for each port:

apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  type: LoadBalancer

The value of the name attribute can be arbitrary.

For each port, you can specify the traffic protocol using the appProtocol attribute. This explicitly defines how the load balancer handles traffic on that port. By default, TCP (k8s.hostman.com/proto-tcp) is used.

The following values are supported:

  • k8s.hostman.com/proto-http: standard HTTP traffic
  • k8s.hostman.com/proto-https: HTTPS traffic
  • k8s.hostman.com/proto-tcp: TCP traffic
  • k8s.hostman.com/proto-tcp-ssl: TCP traffic with TLS support
  • k8s.hostman.com/proto-http2: HTTP/2 traffic
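For example, the two-port Service above could declare its protocols explicitly (a sketch; choose the values that match your application):

  ports:
    - port: 80
      targetPort: 80
      name: http
      appProtocol: k8s.hostman.com/proto-http
    - port: 443
      targetPort: 443
      name: https
      appProtocol: k8s.hostman.com/proto-https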

After creation, the load balancer will be visible in the management panel under the Load Balancers section, with a k8s label.

A load balancer created through Kubernetes can only be modified via kubectl, not through the control panel or API.
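For example, to change such a load balancer you can edit the Service in place or reapply an updated manifest (the filename matches the earlier example and is only an assumption):

kubectl edit service example-balancer -n kubernetes-dashboard
kubectl apply -f example-balancer.yaml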

Additional Configuration Parameters

For more flexible load balancer configuration in Kubernetes, you can use additional parameters. These are specified as labels or annotations in the Service manifest.

Here’s an example manifest with parameters set via labels:

apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
  labels:
    k8s.hostman.com/attached-loadbalancer-algo: "leastconn"    
    k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip: "true" 
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      appProtocol: k8s.hostman.com/proto-http
      targetPort: 80
  type: LoadBalancer

Example manifest with parameters set via annotations:

apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
  labels:
    app: nginx
  annotations:
    k8s.hostman.com/attached-loadbalancer-algo: "leastconn"    
    k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip: "true" 
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      appProtocol: k8s.hostman.com/proto-http
      targetPort: 80
  type: LoadBalancer

In these examples, two additional parameters are defined:

  • k8s.hostman.com/attached-loadbalancer-algo is the load balancing algorithm. Here: leastconn (selects the server with the fewest active connections).

  • k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip assigns an external IP address to the load balancer with DDoS protection.
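These parameters can also be attached to an existing Service without editing the manifest, for example with kubectl label (a sketch using the service from the examples above):

kubectl label service example-balancer -n kubernetes-dashboard \
  k8s.hostman.com/attached-loadbalancer-algo=leastconn --overwrite

kubectl annotate works the same way if you prefer to set the parameters as annotations.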

Available Load Balancer Parameters

Below is a summary of the available parameters for configuring a load balancer. Each parameter is specified as a label (or annotation) in the Service manifest:

  • k8s.hostman.com/attached-loadbalancer-preset-id: "391"
    Specifies the load balancer configuration. The default is the minimum configuration for the zone. Get preset IDs via the API.

  • k8s.hostman.com/attached-loadbalancer-algo: "roundrobin"
    Balancing algorithm: roundrobin or leastconn.

  • k8s.hostman.com/attached-loadbalancer-healthcheck-check-interval: "10"
    Interval between health checks (in seconds).

  • k8s.hostman.com/attached-loadbalancer-healthcheck-timeout: "5"
    Timeout for health checks (in seconds).

  • k8s.hostman.com/attached-loadbalancer-healthcheck-error-count: "3"
    Number of failed checks before marking an upstream as unavailable.

  • k8s.hostman.com/attached-loadbalancer-healthcheck-recover-count: "2"
    Number of successful checks needed to recover an upstream.

  • k8s.hostman.com/attached-loadbalancer-no-external-ip: "true"
    Disables the public external IP for the load balancer.

  • k8s.hostman.com/ignore-hostman-loadbalancer: "true"
    Excludes the service from Hostman load balancing. Useful when operating with a different load balancer, such as kube-vip or MetalLB.

  • k8s.hostman.com/attached-loadbalancer-ddos-guard-external-ip: "true"
    Assigns an external IP address with DDoS protection.

  • k8s.hostman.com/attached-loadbalancer-proxy-enable: "true"
    Enables proxy mode for the load balancer.

  • k8s.hostman.com/attached-loadbalancer-connect-timeout: "5000"
    Timeout for establishing a TCP connection with the upstream (in milliseconds).

  • k8s.hostman.com/attached-loadbalancer-client-timeout: "50000"
    Timeout for receiving new TCP segments from the client (in milliseconds).

  • k8s.hostman.com/attached-loadbalancer-server-timeout: "50000"
    Timeout for waiting for a response from the backend (in milliseconds).

  • k8s.hostman.com/attached-loadbalancer-http-request-timeout: "10000"
    Timeout for executing an HTTP request (in milliseconds).

  • k8s.hostman.com/attached-loadbalancer-maxconn: "10000"
    Maximum number of connections the load balancer can handle on the frontend.

  • k8s.hostman.com/attached-loadbalancer-ssl: "true"
    Enables automatic SSL certificate issuance. If set to false, the certificate will be deleted.

  • k8s.hostman.com/attached-loadbalancer-ssl-fqdn: "example.com"
    The domain for which the SSL certificate should be issued.
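For example, to request an automatically issued SSL certificate for a specific domain, you could combine the two SSL parameters as labels (a sketch; example.com is a placeholder):

  labels:
    k8s.hostman.com/attached-loadbalancer-ssl: "true"
    k8s.hostman.com/attached-loadbalancer-ssl-fqdn: "example.com"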

Practical Example of Using a Load Balancer

To demonstrate how the load balancer works, we will create two Nginx deployments, each serving its own HTML page. The load balancer will distribute requests between the pods, and the page you see depends on which pod handles the request.

Environment Setup

To simplify management and allow quick removal of all resources associated with the load balancer, we will create a separate namespace. This makes testing and resource cleanup easier while keeping the main cluster clean.

Run the following command to create the namespace:

kubectl create namespace test-namespace

After creation, use this namespace for all subsequent resources, including the load balancer, deployments, and ConfigMap. To do this, add the line namespace: test-namespace to every manifest related to the example.
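If you prefer not to repeat the namespace in every command, you can also make it the default for your current kubectl context (optional; this changes your local kubeconfig):

kubectl config set-context --current --namespace=test-namespace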

Creating a ConfigMap for HTML Pages

We will start by creating a ConfigMap to store two HTML pages. Pod 1 will display a page with the heading "Pod 1," and Pod 2 will display a page with the heading "Pod 2." These pages will be connected to Nginx within the pods.

nginx-pages-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-pages
  namespace: test-namespace
data:
  index-page1.html: |
    <html>
    <body>
      <h1>Pod 1</h1>
      <p>This is a page served by Pod 1.</p>
    </body>
    </html>
  index-page2.html: |
    <html>
    <body>
      <h1>Pod 2</h1>
      <p>This is a page served by Pod 2.</p>
    </body>
    </html>

Here we create a ConfigMap with two HTML files: index-page1.html and index-page2.html. These files will be mounted into Nginx pods, allowing each pod to display its specific page.

Apply the ConfigMap:

kubectl apply -f nginx-pages-configmap.yaml
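To confirm that both pages were stored, you can inspect the ConfigMap (an optional check):

kubectl get configmap nginx-pages -n test-namespace -o yaml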

Creating Nginx Deployments

Next, we create two deployments, each using a different HTML page from the ConfigMap. Both deployments label their pods app: nginx, which the load balancer will use to identify the pods participating in traffic distribution.

nginx-deployment-pod1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod1
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        volumeMounts:
          - name: nginx-pages
            mountPath: /usr/share/nginx/html/index.html
            subPath: index-page1.html
        ports:
          - containerPort: 80
      volumes:
      - name: nginx-pages
        configMap:
          name: nginx-pages

This deployment creates a single pod (replicas: 1) with the Nginx image and mounts the index-page1.html file from the ConfigMap as /usr/share/nginx/html/index.html. Container port 80 is exposed for accessing the page.

Apply the deployment:

kubectl apply -f nginx-deployment-pod1.yaml

nginx-deployment-pod2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod2
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nginx-pages
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-page2.html
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-pages
          configMap:
            name: nginx-pages

This deployment also creates an Nginx pod but mounts the index-page2.html file, which has different content.

Apply the second deployment:

kubectl apply -f nginx-deployment-pod2.yaml
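Both pods should now be running. You can verify them by their shared label (an optional check):

kubectl get pods -n test-namespace -l app=nginx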

Configuring the Load Balancer

Now, create a load balancer that will direct requests to pods with the label app: nginx.

nginx-loadbalancer.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  namespace: test-namespace
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      appProtocol: k8s.hostman.com/proto-http
  type: LoadBalancer

In this Service, we specify type: LoadBalancer, which creates a load balancer, and the selector app: nginx, which directs requests to the Nginx pods from our deployments. Requests to the load balancer are distributed among the pods using the roundrobin algorithm, which is the default.

Apply the load balancer:

kubectl apply -f nginx-loadbalancer.yaml
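Provisioning the external IP address can take a short while. If you want, watch the service until the EXTERNAL-IP field is populated:

kubectl get service nginx-loadbalancer -n test-namespace --watch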

Verifying the Load Balancer

After creating the load balancer, you can find its public IP address in the control panel or by running the command:

kubectl get services -n test-namespace

Accessing this IP address will display the page served by one of the pods. Each time you refresh the page, the request may be handled by a different pod, so the displayed page alternates between them.
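To observe the alternation from the command line, you can send several requests in a row (replace <EXTERNAL-IP> with the address of your load balancer):

for i in $(seq 1 6); do curl -s http://<EXTERNAL-IP>/; done

The responses should alternate between the "Pod 1" and "Pod 2" pages.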

Deleting Resources After Testing

Once you have verified the load balancer’s functionality, you can delete all the created pods and resources. Use the following commands:

kubectl delete service nginx-loadbalancer -n test-namespace
kubectl delete deployment nginx-pod1 -n test-namespace
kubectl delete deployment nginx-pod2 -n test-namespace
kubectl delete configmap nginx-pages -n test-namespace

These commands will remove the load balancer, pod deployments, and the ConfigMap created earlier.

Alternatively, delete the entire namespace:

kubectl delete namespace test-namespace

This method will automatically remove all resources associated with the test environment.

Troubleshooting

Failed to Obtain IP

If a load balancer fails to obtain an external IP address during creation, the following annotation will appear in the service:

k8s.hostman.com/attached-loadbalancer-ensuring-error: true

This means something went wrong while binding the external IP. In this case, we recommend recreating the load balancer or contacting technical support.

How to Check

  1. List all services of type LoadBalancer in the cluster:
kubectl get svc --all-namespaces --field-selector spec.type=LoadBalancer
  2. View the annotations of the desired service (see also the JSONPath query below):
kubectl describe svc <service-name> -n <namespace>
  3. If the output contains the following annotation, recreate the load balancer or contact support and provide your cluster ID:
k8s.hostman.com/attached-loadbalancer-ensuring-error: true
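Alternatively, you can read the annotation value directly with a JSONPath query (replace the placeholders with your service name and namespace; the dots in the annotation key are escaped with backslashes):

kubectl get svc <service-name> -n <namespace> -o jsonpath='{.metadata.annotations.k8s\.hostman\.com/attached-loadbalancer-ensuring-error}'

If the command prints true, the load balancer hit the provisioning error described above.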