Load Balancing in Kubernetes

Hostman Team
Technical writer
Kubernetes
24.11.2023
Reading time: 9 min

Load balancing in Kubernetes covers the various ways of directing incoming traffic to specific servers in the cluster, distributing the load evenly and making scaling easier.

The main benefit of balancing is avoiding application downtime: both planned downtime during the rollout of a new software version and unplanned downtime caused by hardware issues.

In this article, we'll look at how load balancing helps stabilize a Kube cluster and increase application availability. Since Kubernetes services are what enable load balancing, we'll explain how they work and then walk through specific balancing examples.

But first of all, let's talk about how Kube implements pod tracking, which makes the balancing itself much easier.

Tracking pods in Kubernetes with and without a selector

Pods in Kubernetes are temporary objects that get a new IP each time they are started. After a task is completed, they are destroyed and then re-created on a new deployment. Without Kubernetes service tools, we would have to keep track of the IPs of all active pods, which would be a very complicated task, especially as our application scales. The Kube service solves this problem thanks to the selector. Let's take a look at this code (replace the values with your actual ones):

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      name: hostmanapp
      port: 5428
      targetPort: 5428

The selector ensures that services are correctly matched with their associated pods. When a service finds a pod with a matching label, it adds the pod's IP to its Endpoints object. Endpoints objects keep track of the IP addresses of all matching pods and update automatically; each service creates its own Endpoints object.

This article won't go into too much detail about Endpoints. Just remember that Endpoints maintain the up-to-date list of IP addresses so that the Kube service can redirect its traffic.
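
You can inspect a service's Endpoints object directly with kubectl. For the service above, the check would look like this (the IP and age in the output are illustrative):

kubectl get endpoints hostmanapp

NAME         ENDPOINTS          AGE
hostmanapp   10.244.1.17:5428   3m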

Defining a service with a selector is the most common method, but we can also do without one. For example, when migrating an application to Kube, we can evaluate its behavior without migrating the backing server. Let's point a service at an existing application hosted on the old server:

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp-without-ep
spec:
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428

Then manually create an Endpoints object pointing at the old server:

apiVersion: v1
kind: Endpoints
metadata:
  name: hostmanapp-without-ep
subsets:
  - addresses:
      - ip: x.x.x.x # specify the IP of the old server
    ports:
      - port: 5428

This maps the name hostmanapp-without-ep to the IP of the old server, so in-cluster clients can reach the hostmanapp application by the usual service name.


Kubernetes services

Unless you specify another type, Kube creates a ClusterIP service by default. In total, there are four types of services, each designed for its own tasks, and together they provide quite flexible load balancing. Let's take a look at all of them and give code examples for configuration.

ClusterIP

Designed for intra-cluster communication between applications. It is configured like this (the application values are placeholders; replace them with your own):

apiVersion: v1
kind: Service # mandatory line to define any service
metadata:
  name: hostmanapp
spec:
  type: ClusterIP
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428
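
A ClusterIP service is reachable only from inside the cluster. A minimal way to test it is to run a temporary pod and query the service by name (a sketch; the busybox image is an assumption, and the port is taken from the example above):

kubectl run test --rm -it --image=busybox -- wget -qO- http://hostmanapp:5428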

NodePort

An externally reachable service that exposes pods on every host through a static port, defined separately below as nodePort (all values are placeholders; replace them with your own):

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  type: NodePort
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428
      nodePort: 32157
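
Once this manifest is applied, the application answers on port 32157 of every cluster node. From any machine with network access to the nodes, a quick check could look like this (the node IP is a placeholder):

curl http://<node-ip>:32157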

LoadBalancer

LoadBalancer is a service that provisions an external balancer from the cloud provider's infrastructure, allowing you to route public traffic, for example from a website, to your pods. Here is the code for launching it:

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  type: LoadBalancer
  selector:
    app: hostmanapp
  ports:
    - protocol: TCP
      port: 5428
      targetPort: 5428

ExternalName

This service type provides access to a resource outside the cluster by mapping the service to an external DNS name. The way to do it is simple:

apiVersion: v1
kind: Service
metadata:
  name: hostmanapp
spec:
  type: ExternalName
  externalName: hostmanapp.mydomain.com

Note that any service gets a DNS name created using this pattern: service-name.namespace-name.svc.cluster.local. This record points to the cluster IP; for headless services (those without a cluster IP), DNS returns the IPs of the individual pods instead.
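
For example, the hostmanapp service in the default namespace gets the name hostmanapp.default.svc.cluster.local, which you can resolve from any pod (a sketch using a temporary busybox pod):

kubectl run dns-test --rm -it --image=busybox -- nslookup hostmanapp.default.svc.cluster.local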

Varieties of balancing through Kube services

As we have seen from the descriptions of all four Kube services, you can organize load balancing in different ways. Let's start by describing how it is done inside the cluster.

Intra-cluster balancing

The ClusterIP service is intended for intra-cluster balancing (you can find the code for configuring this and the other Kube services above). It is suitable, for example, for organizing interaction between separate groups of pods within one Kube cluster. You can provide access to the service in two ways: through DNS or through environment variables.

Above, we have already described the DNS method. Let's add that it is the most common and recommended way of interaction between microservices. But note that DNS works in Kube only with a DNS-server add-on: for example, CoreDNS.

As for environment variables, they are set automatically in every pod started after the service is created. The variable names are derived from the service name: it is uppercased, and dashes are converted to underscores. The two variables you will most often need are the service host and port:

<SERVICE_NAME>_SERVICE_HOST
<SERVICE_NAME>_SERVICE_PORT
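
For example, for a service named hostmanapp, a pod started after the service was created can inspect the injected variables like this (the pod name is hypothetical, and the output values are illustrative):

kubectl exec my-pod -- env | grep HOSTMANAPP

HOSTMANAPP_SERVICE_HOST=10.96.114.25
HOSTMANAPP_SERVICE_PORT=5428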

External balancing

It can be performed using NodePort (hereafter NP) and LoadBalancer (hereafter LB). NP is suitable for balancing a limited number of services and has the advantage of providing connectivity without a dedicated external balancing tool.

The first limitation of NP is that it is only suitable for a private network; you can't properly expose an application to the Internet via NP alone. Another disadvantage is that it only works with static ports from a limited range (30000-32767 by default), and the service must allocate the same port on every node. This becomes problematic when the application scales to many microservices.

The LB provides a public IP or DNS name to which external users can connect. Traffic flows from the LB to the matched service on the assigned port, and the service then redirects it to the worker pods; the LB itself has no direct mapping to the pods.

Let's see how to create an external LB. In the example below, we will start a pod and connect to its GUI from an external network. Note that the LB does not filter incoming or outgoing traffic; it is just a proxy that connects the service to the external network, redirecting traffic to the appropriate pods/services. Create a cluster with your provider's cluster-creation command and the parameters below (the flags follow the eksctl syntax; the name and region values are placeholders, and you must supply your own SSH public key in the --ssh-public-key field):

eksctl create cluster \
  --name myHostmanCluster \
  --region my-hostman-region \
  --with-oidc \
  --ssh-access \
  --ssh-public-key <xxxxxxxxxx> \
  --managed
Now edit the pod's YAML file (hostmanpod.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: hostmanpod
  labels:
    app: hostmanpod
spec:
  containers:
    - name: hostmanpod
      image: hostmanpod:latest

Then create a pod:

kubectl apply -f hostmanpod.yaml

Let's make sure it works:

kubectl get pods --selector='app=hostmanpod'
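
If the pod is up, the output will look similar to this (the values are illustrative):

NAME         READY   STATUS    RESTARTS   AGE
hostmanpod   1/1     Running   0          30s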

Now activate the LB (again, the values in the code samples are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: hostmanpod-ext-serv
spec:
  type: LoadBalancer
  selector:
    app: hostmanpod
  ports:
    - name: hostmanpod-admin
      protocol: TCP
      port: 14953
      targetPort: 14953

Next, create the service and confirm it started:

kubectl apply -f hostmanpod-svc.yaml
service/hostmanpod-ext-serv created

kubectl get svc

Copy the DNS name returned in the EXTERNAL-IP column into your browser and append the port specified in the code above. Let's assume we got this DNS name:

http://b9f305e6d743a85cb32f48f6a210cb51.my-hostman-region.com

Then we should paste the following into the browser:

http://b9f305e6d743a85cb32f48f6a210cb51.my-hostman-region.com:14953

Now you can share this address with anyone who wants to connect to your administrator account. As you can see, creating and configuring an external load balancer for an application is quite easy. 

LoadBalancer indeed has some limitations, but Ingress helps to bypass them.

Balancing via Ingress

We have seen that LoadBalancer creates a separate balancer instance for each service. This is fine while there are only a few services, but as their number grows, they become difficult to manage. LB also does not support URL routing, SSL termination, and similar features. This is where Ingress comes to the rescue: an extension that works on top of NP and LB. Ingress processes incoming traffic and determines which pods or services to forward it to. The main function of Ingress is load balancing, but it can also perform URL routing, SSL termination, and several other functions. There are quite a few possible Ingress configurations, and they are easy to find online; here is one for illustrative purposes.
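
The manifest below is a minimal sketch rather than a production configuration: it assumes an NGINX Ingress controller, and the hostname is hypothetical (the service names and ports reuse the examples above). It routes requests for two URL paths to two different services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hostman-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: hostmanpod-ext-serv
                port:
                  number: 14953
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hostmanapp
                port:
                  number: 5428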


The example above defines the rules by which traffic from end users will flow. We should also add that Ingress is not a Kube service like LB or NP, but a set of rules used by these services. In addition, a cluster using Ingress needs an Ingress Controller. There are a few of these controllers, and you can check out some popular solutions: AWS ALB, NGINX, Istio, Traefik.

Different controllers have different features and capabilities, so you should evaluate them against your requirements. But whichever controller you use, Ingress will greatly simplify the configuration and management of routing rules and help you serve traffic over SSL. And, of course, like traditional tools, Ingress controllers support a variety of balancing algorithms.
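
For instance, with the community NGINX Ingress controller (ingress-nginx), the algorithm can be switched by adding an annotation to the Ingress metadata (a sketch; ewma is one supported value alongside the default round_robin):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/load-balance: "ewma"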

Conclusion

We learned about the differences between Kubernetes services, how to access them, and how to organize intra-cluster and external balancing, and we got acquainted with additional tools. To use Kube services effectively, you need to understand which of them is optimal for your tasks and configure it accordingly. This will save a lot of debugging time and ensure trouble-free operation of your applications and services.
