How to Install a Kubernetes Cluster on Ubuntu 22.04
Hostman Team
Technical writer
Kubernetes
15.08.2024
Reading time: 5 min

Kubernetes has been a leading orchestrator for containerized applications for almost a decade. Launched by Google in mid-2014, it quickly gained widespread popularity and support. Kubernetes supports the entire lifecycle of a containerized application, from pulling images from a registry to running the containers themselves.

In this tutorial, we will install a Kubernetes cluster of three nodes: one master node and two worker nodes.

Prerequisites

To install the Kubernetes cluster following this guide:

  • You will need three cloud servers or virtual machines running Ubuntu 22.04.

  • Each server should have at least 2 GB of RAM and at least 2 CPU cores. If you don't meet these requirements, initializing the cluster will fail.
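You can quickly confirm that a server meets these requirements with two standard commands (a quick optional check, not part of the installation itself):

nproc    # number of CPU cores; should report 2 or more
free -h  # total memory; should show at least 2 GB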

Configuring the Operating System

Some OS-level settings need to be configured before installing and initializing the Kubernetes cluster. The commands provided in this section should be run on all three servers as the root user.

  1. Update repository lists, upgrade all packages, and install necessary packages:

apt update && apt -y upgrade && apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
  2. Disable SWAP:

First, check if SWAP is in use with the command:

free -h

If SWAP is active, disable it permanently by editing the fstab file:

nano /etc/fstab

Find the line containing swap.img and comment it out by adding a # at the beginning.

Then save the file and reboot the server:

reboot
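If you prefer not to reboot, you can also disable swap for the current session and comment out the fstab entry in one step. This is a minimal sketch; review /etc/fstab afterwards, since the exact entry name can differ between systems:

swapoff -a                            # turn off all active swap immediately
sed -i '/swap/ s/^/#/' /etc/fstab     # comment out every fstab line that mentions swap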
  3. Load additional network modules:

Create a configuration file named k8s.conf to load necessary network modules:

cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Load the modules:

modprobe overlay
modprobe br_netfilter
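To confirm that both modules are loaded, you can check the kernel module list:

lsmod | grep -E 'overlay|br_netfilter'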
  4. Enable network bridge parameters:

Configure kernel parameters for network traffic routing:

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
  5. Apply the new kernel parameters:

sysctl --system
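You can verify that the parameters took effect by querying them directly; each of the three values should be 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward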
  6. Configure the firewall:

If using UFW or another firewall, open the following ports: 6443, 2379, 2380, 10250, 10259, 10257. Alternatively, disable UFW:

systemctl stop ufw && systemctl disable ufw
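If you would rather keep UFW enabled, a minimal sketch of opening the control-plane ports looks like this (on worker nodes, the kubelet port 10250 and the NodePort range 30000-32767 are the ones typically needed):

ufw allow 6443/tcp       # Kubernetes API server
ufw allow 2379:2380/tcp  # etcd server client API
ufw allow 10250/tcp      # kubelet API
ufw allow 10259/tcp      # kube-scheduler
ufw allow 10257/tcp      # kube-controller-manager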

With the operating system now configured, we can proceed to install the CRI-O container runtime and Kubernetes.

Installing CRI-O

Kubernetes does not run containers by itself; it requires a container runtime. We'll install CRI-O, a lightweight runtime that implements the Kubernetes Container Runtime Interface (CRI).

Execute these commands on all three servers.

  1. Set variables for downloading the appropriate CRI-O version:

export OS=xUbuntu_22.04
export CRIO_VERSION=1.25

  2. Add the repositories:

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
  3. Import the GPG keys:

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
  4. Install CRI-O and additional utilities:

apt update && apt -y install cri-o cri-o-runc cri-tools
  5. Start CRI-O and enable it to start at boot:

systemctl start crio && systemctl enable crio
  6. Verify CRI-O status:

systemctl status crio
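Since cri-tools was installed alongside CRI-O, you can also query the runtime directly with crictl as an optional extra check (the socket path below is CRI-O's usual default):

crictl --runtime-endpoint unix:///var/run/crio/crio.sock version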

Installing Kubernetes

Now, let's install Kubernetes. Perform these steps on all three servers.

  1. Add the Kubernetes GPG key:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
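On a minimal system the /etc/apt/keyrings directory may not exist yet; if the command above fails for that reason, create the directory first and rerun it:

mkdir -p /etc/apt/keyrings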
  2. Add the Kubernetes repository:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
  3. Install kubelet, kubeadm, and kubectl:

apt update && apt -y install kubelet kubeadm kubectl && apt-mark hold kubelet kubeadm kubectl
  4. Initialize the master node on the designated master server (the 10.244.0.0/16 pod network CIDR matches the default used by the Flannel plugin installed later):

kubeadm init --pod-network-cidr=10.244.0.0/16

Follow the post-initialization steps, including setting up your kubeconfig file for cluster administration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. Join worker nodes to the cluster:

Execute the kubeadm join command printed at the end of the kubeadm init output on the two worker servers as the root user.
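If that output is no longer at hand, the join command can be regenerated on the master node at any time:

kubeadm token create --print-join-command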

  6. Verify the nodes in the cluster:

kubectl get nodes

You can also view all the pods in the cluster:

kubectl get po -A

  7. Install the Flannel network plugin:

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
  8. Check the pod status:

kubectl get po -n kube-flannel

  9. Deploy a simple Nginx web server to verify the cluster:

Create a deployment file:

nano nginx_deployment.yaml

Add the following content to it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80

And apply it:

kubectl apply -f nginx_deployment.yaml

Check the deployment status:

kubectl get po -n default
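If the pods are still being created, you can optionally wait for the rollout to complete before testing:

kubectl rollout status deployment/nginx-deployment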

  10. Test the Nginx deployment:

Get the IP address of one of the pods. To do this, use the kubectl describe command with the name of one of the running pods as a parameter (your pod name will differ from the example below):

kubectl describe pod nginx-deployment-848dd6cfb5-rn5bv

Retrieve the IP address from the IP field and send a request to it using curl (the address below is an example; use the one from your output):

curl -i 10.244.1.2

If successful, you'll receive a 200 response code from Nginx.
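As a quicker alternative to kubectl describe, you can also list the pod IP addresses directly with the wide output format:

kubectl get pods -o wide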

Conclusion

Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu 22.04 with a master node and two worker nodes, configured the CRI-O container runtime, and deployed a simple Nginx web server to verify that the cluster is functioning correctly. This setup provides a solid foundation for further exploration and deployment of more complex applications in your Kubernetes environment.
