Running Kubernetes Clusters in the Cloud with VMware
Hostman Team
Technical writer
Kubernetes
22.08.2024
Reading time: 7 min

Containerization is an effective way to deliver applications to customers. If your cloud IT infrastructure is deployed on VMware, you can use CSE, or Container Service Extension, to work with Kubernetes (K8s). This solution significantly shortens the path from receiving code to deploying it in a production cloud system by automating the management (orchestration) of the containers that run the software.

What is CSE?

CSE is an extension to the VMware vCloud Director (VCD) platform that adds functionality for interacting with Kubernetes clusters—from creation to lifecycle management. Its installation allows for a comprehensive approach, integrating the management of both legacy and containerized applications within a single VMware infrastructure, while maintaining uniformity and a systematic management approach.

Key features

  • The CSE client facilitates cluster deployment, adds worker nodes, and configures NFS storage.

  • A vCloud Director-based cloud offers high-security, multi-tenant (user-isolated) computing resources.

  • The CSE server is set up through a configuration file and virtual machine templates.

Creating and managing Kubernetes clusters in VMware is relatively complex, especially compared to tools like Docker Swarm, another cluster manager for remote hosts. Kubernetes is also often compared with vSphere, but it offers far more extensive functionality for managing a containerized IT infrastructure, which compensates for its complex architecture and the high cost of the product.

CSE Features

The first thing the developers highlight about CSE is that it lets you get more out of an already implemented VMware vCloud Director platform. All previously installed applications continue to function as before (the change is virtually invisible to the end client), while the ability to run containerized workloads is added on top. System resilience remains high regardless of how uniform the traffic is or how the load on the platform changes.

Benefits of implementing the extension:

  • A tool for managing clusters, node pools, and other resources.
  • Significantly reduced time-to-market for any new developments.
  • Increased availability of web resources, including cloud applications.
  • Automatic server load distribution.
  • Improved reliability and performance of CI/CD processes.

The number of containers is unlimited as long as the physical server's resources (memory, CPU, etc.) are sufficient. This allows for parallel development of different projects that are initially isolated from each other. There are also no restrictions on the installed operating systems or programming languages. This is convenient when operating in an international market, even with just one physical server.

Installing the CSE Extension in vcd-cli

The vcd-cli (Command Line Interface) tool manages the infrastructure from the command line. By default, it does not support working with CSE. To enable it, you need to install the container-service-extension add-on:

python3 -m pip install container-service-extension

Next, you need to add the extension to the vcd-cli configuration file, located at ~/.vcd-cli/profiles.yaml. Open this file in a text editor, find the active line, and add the following section after it:

extensions:
- container_service_extension.client.cse

After saving the changes to the configuration file, log in:

vcd login <host> <organization_name> <login>
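For example, if the director is reachable at vcd.example.com and the organization is named myorg (both values here are hypothetical), the call looks like this, and the account password is requested interactively:

vcd login vcd.example.com myorg admin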

Now, verify that the extension is indeed installed and actively interacting with the host:

vcd cse version

CSE, Container Service Extension for VMware vCloud Director, version 3.0.1
vcd cse system info

property     value
-----------  ------------------------------------------------------
description  Container Service Extension for VMware vCloud Director
product      CSE
version      2.6.1

Creating a Kubernetes Cluster

Next, let's look at activating a Kubernetes cluster within VMware. Integration with the vCloud Director platform allows managing the process from a single point in a familiar interface. Data center resources are typically pooled, and deployment is done through VM templates with pre-installed and pre-configured Kubernetes.

You can create a cluster manually with the command:

vcd cse cluster create <cluster_name> \
    --network <network_name> \
    --ssh-key ~/.ssh/id_rsa.pub \
    --nodes <number_of_nodes> \
    --template <template_name>

The cluster and network names are mandatory. The rest are optional and will default if omitted. You can check the full list of active templates with the command:

vcd cse template list

The selected network must be of type Routed and connected to the internet. If either of these conditions is not met, the cluster initialization process will stall during the master node generation. You can use a "grey" network with NAT or Direct Connect technology. The result of the cluster creation will be visible in the vCloud Director platform's web interface, in the vApps section.
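For reference, a fully specified call might look like the following; the cluster name, network name, and template name are hypothetical and should be taken from your own environment (see vcd cse template list above):

vcd cse cluster create my-k8s-cluster \
    --network routed-net \
    --ssh-key ~/.ssh/id_rsa.pub \
    --nodes 2 \
    --template ubuntu-16.04_k8-1.18_weave-2.6.4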

After monitoring the status, the final step is to create a configuration file for Kubernetes. Generate it with the command:

vcd cse cluster config <cluster_name> > config

Then move the file to an appropriate location with the commands:

mkdir -p ~/.kube
cp config ~/.kube/config
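
With the kubeconfig in place, you can run a quick sanity check (this assumes kubectl is installed on the same machine); the master and worker nodes should be listed in the Ready state:

kubectl get nodes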

The cluster is now fully ready for use, from setting user parameters to deploying virtual machines, applications, and more. However, keep in mind that this setup does have some limitations.

Implementation Features

For instance, the CSE extension does not support the LoadBalancer service type. Therefore, Kubernetes manifests that rely on it (as well as Ingress) will not work correctly out of the box. There are solutions to this drawback, and we'll discuss two of the most popular: MetalLB and Project Contour.
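For illustration, a minimal Service of type LoadBalancer, such as the hypothetical one below (the name my-app and the ports are invented), would keep waiting for its external IP in the pending state until a load-balancer implementation is installed:

apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical application name
spec:
  type: LoadBalancer    # stays pending without a load-balancer implementation
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080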

MetalLB

MetalLB is a load-balancer implementation for clusters that run outside the big cloud providers: it answers LoadBalancer requests using standard network protocols instead of a cloud provider's balancer. Here's an example of how to use it.

1) Create a namespace and add MetalLB using manifests:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml

2) Next, configure node connection security. Without this, the MetalLB pods will go into a CreateContainerConfigError status, and error messages such as secret memberlist not found will appear in the logs:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

3) Check the current status of the utility. If configured correctly, the controller and speaker will be displayed as running:

kubectl get pod --namespace=metallb-system 
NAME                          READY   STATUS    RESTARTS   AGE
controller-57f648cb96-2lzm4   1/1     Running   0          5h52m
speaker-ccstt                 1/1     Running   0          5h52m
speaker-kbkps                 1/1     Running   0          5h52m
speaker-sqfqz                 1/1     Running   0          5h52m

4) Finally, manually create a configuration file (for example, metallb-config.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - X.X.X.101-X.X.X.102

Fill in the addresses parameter with free IP addresses from your network; MetalLB will hand them out to LoadBalancer services. Apply the configuration file:

kubectl apply -f metallb-config.yaml
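
If a Service of type LoadBalancer already exists in the cluster (for example, the hypothetical my-app service sketched earlier), you can check that it has been assigned an address from the configured range:

kubectl get svc my-app

The EXTERNAL-IP column should now show an address from the pool instead of pending.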

The procedure for setting up a LoadBalancer for Kubernetes using MetalLB is complete; next is Ingress support, which is easier to implement with another tool.

Project Contour

Create a manifest with Project Contour using the command:

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command automatically deploys the Envoy proxy server, which listens on the standard ports 80 and 443.
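
Once Envoy is running, HTTP routing is described with ordinary Ingress resources. Below is a minimal sketch, assuming Kubernetes 1.19 or later and a backend Service named my-app on port 80; both the resource name and the host are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.com        # hypothetical host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app         # hypothetical backend Service
            port:
              number: 80

Apply the manifest with kubectl apply -f, and Contour will program Envoy to route requests for that host to the service.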

Conclusion

Integrating Kubernetes into VMware with the Container Service Extension (CSE) unifies the management of legacy and containerized applications within VMware vCloud Director. While the setup may be complex, CSE enhances application deployment, scaling, and management, offering a resilient and scalable infrastructure. Despite some limitations, such as native LoadBalancer support, tools like MetalLB and Project Contour provide effective solutions. Overall, CSE empowers organizations to modernize their IT infrastructure, accelerating development and optimizing resources within a secure, multi-tenant cloud environment.
