In Kubernetes, a load balancer is used to distribute traffic evenly among pods. It prevents individual pods from being overloaded, keeping services highly available and stable.
To create a load balancer in Kubernetes, we define a `Service` resource with the type `LoadBalancer`. Below is an example of a basic manifest:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80        # External port for accessing the application
      targetPort: 80  # Pod port to which traffic is redirected
  type: LoadBalancer
```
In this example, the load balancer redirects traffic from port 80 to port 80 inside the pods matching the selector `app.kubernetes.io/name: nginx`.
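Assuming the manifest is saved as `example-balancer.yaml` (a filename chosen for this example), you can apply it and watch for the external IP to be assigned:

```shell
# Create the Service
kubectl apply -f example-balancer.yaml

# Wait until EXTERNAL-IP changes from <pending> to a real address
kubectl get service example-balancer -n kubernetes-dashboard --watch
```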
If you need to add multiple rules for balancing, make sure to specify the `name` attribute for each port:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  type: LoadBalancer
```
The value of the `name` attribute can be arbitrary.
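Once the multi-port balancer has received an external IP, a quick way to check both rules is to request each port directly (the address below is a placeholder for the assigned IP):

```shell
# HTTP rule (port 80)
curl http://<EXTERNAL-IP>/

# HTTPS rule (port 443); -k skips certificate verification for a test endpoint
curl -k https://<EXTERNAL-IP>/
```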
After creation, the load balancer will be visible in the management panel under the Load Balancers section.
Once a load balancer is created, you should update its configuration via `kubectl`, not through the Hostman control panel, to avoid conflicts.
In the load balancer rules, you might notice that the port used for balancing differs from the one specified in the `Service`. This is because the rules include a `NodePort`.
A `NodePort` is a special port on each Kubernetes node that allows external traffic to be forwarded to a specific service inside the cluster. It acts as an intermediary between the load balancer and the internal pods, linking external traffic to the service. You can view it by running the following command:

```shell
kubectl describe service <service_name> -n <namespace>
```
When creating a `LoadBalancer`-type service, Kubernetes automatically assigns a `NodePort` to allow incoming traffic through cluster nodes. Traffic directed to this port on any node (e.g., 31154) is forwarded to the service and then to the target pods.
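If you only need the assigned `NodePort` value rather than the full description, a `jsonpath` query can extract it; substitute your own service name and namespace:

```shell
kubectl get service <service_name> -n <namespace> \
  -o jsonpath='{.spec.ports[0].nodePort}'
```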
For more flexible configuration of a load balancer in Kubernetes, additional parameters can be specified as labels in the `Service` manifest.
Here’s an example manifest with additional parameters:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-balancer
  namespace: kubernetes-dashboard
  labels:
    k8s.hostman.com/attached-loadbalancer-algo: "leastconn"
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
Here, `k8s.hostman.com/attached-loadbalancer-algo` specifies the load balancing algorithm. In this example, `leastconn` is used, which selects the server with the fewest active connections.
Below is a table summarizing the available parameters for configuring a load balancer. Each parameter is specified as a label in the `Service` manifest:

| Parameter | Purpose |
| --- | --- |
| | Specifies the load balancer configuration. The default is the minimum configuration for the zone. Get preset IDs via API. |
| `k8s.hostman.com/attached-loadbalancer-algo` | Balancing algorithm, e.g., `roundrobin` (the default) or `leastconn`. |
| | Interval between health checks (in seconds). |
| | Timeout for health checks (in seconds). |
| | Number of failed checks before marking an upstream as unavailable. |
| | Number of successful checks needed to recover an upstream. |
| | Disables the public external IP for the load balancer. |
To demonstrate the functionality of a load balancer, we will create two Nginx deployments, each serving its own HTML page. The load balancer will distribute requests between the pods, so the page displayed depends on which pod handles the request.
To simplify management and allow quick removal of all resources associated with the load balancer, we will create a separate namespace. This makes testing and resource cleanup easier while keeping the main cluster clean.
Run the following command to create the namespace:

```shell
kubectl create namespace test-namespace
```
After creation, use this namespace for all subsequent resources, including the load balancer, deployments, and `ConfigMap`. To do this, add the line `namespace: test-namespace` to every manifest related to the example.
We will start by creating a `ConfigMap` to store two HTML pages. Pod 1 will display a page with the heading "Pod 1," and Pod 2 will display a page with the heading "Pod 2." These pages will be connected to Nginx within the pods.
`nginx-pages-configmap.yaml`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-pages
  namespace: test-namespace
data:
  index-page1.html: |
    <html>
      <body>
        <h1>Pod 1</h1>
        <p>This is the page served by Pod 1.</p>
      </body>
    </html>
  index-page2.html: |
    <html>
      <body>
        <h1>Pod 2</h1>
        <p>This is the page served by Pod 2.</p>
      </body>
    </html>
```
Here we create a `ConfigMap` with two HTML files: `index-page1.html` and `index-page2.html`. These files will be mounted into the Nginx pods, allowing each pod to display its specific page.
Apply the `ConfigMap`:

```shell
kubectl apply -f nginx-pages-configmap.yaml
```
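To confirm that both pages were stored, you can print the `ConfigMap` contents back:

```shell
kubectl get configmap nginx-pages -n test-namespace -o yaml
```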
Next, we create two deployments, each using a different HTML page from the `ConfigMap`. Both deployments label their pods `app: nginx`; the load balancer's selector will match this label to identify the pods participating in traffic distribution.
`nginx-deployment-pod1.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod1
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nginx-pages
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-page1.html
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-pages
          configMap:
            name: nginx-pages
```
This deployment creates a single pod (`replicas: 1`) with the Nginx image, which mounts the `index-page1.html` file from the `ConfigMap` as `/usr/share/nginx/html/index.html`. Port 80 is open for accessing the page.
Apply the deployment:

```shell
kubectl apply -f nginx-deployment-pod1.yaml
```
`nginx-deployment-pod2.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod2
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nginx-pages
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-page2.html
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-pages
          configMap:
            name: nginx-pages
```
This deployment also creates an Nginx pod but mounts the `index-page2.html` file, which has different content.
Apply the second deployment:

```shell
kubectl apply -f nginx-deployment-pod2.yaml
```
Now, create a load balancer that will direct requests to pods with the label `app: nginx`.
`nginx-loadbalancer.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  namespace: test-namespace
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
In this `Service`, we specify `type: LoadBalancer`, which creates a load balancer, and the selector `app: nginx`, which directs requests to the Nginx pods from our deployments. Requests to the load balancer are distributed among the pods using the `roundrobin` algorithm, which is the default.
Apply the load balancer:

```shell
kubectl apply -f nginx-loadbalancer.yaml
```
After creating the load balancer, you can find its public IP address in the control panel or by running the command:

```shell
kubectl get services -n test-namespace
```
Accessing this IP address will display a page served by one of the pods. Each time you refresh the page, the request may be routed to a different pod, so the displayed page can switch between "Pod 1" and "Pod 2."
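To observe the distribution without a browser, you can request the page repeatedly and print only the heading; which pod answers may vary between requests (replace `<EXTERNAL-IP>` with the address from the previous step):

```shell
for i in 1 2 3 4 5; do
  curl -s http://<EXTERNAL-IP>/ | grep '<h1>'
done
```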
Once you have verified the load balancer's functionality, you can delete all the created resources. Use the following commands:

```shell
kubectl delete service nginx-loadbalancer -n test-namespace
kubectl delete deployment nginx-pod1 -n test-namespace
kubectl delete deployment nginx-pod2 -n test-namespace
kubectl delete configmap nginx-pages -n test-namespace
```
These commands remove the load balancer, the pod deployments, and the `ConfigMap` created earlier.
Alternatively, delete the entire namespace:

```shell
kubectl delete namespace test-namespace
```
This method will automatically remove all resources associated with the test environment.