Kubernetes has been the leading orchestrator for containerized applications for almost a decade. Open-sourced by Google in mid-2014, it quickly gained widespread popularity and support. Kubernetes manages the entire lifecycle of containerized applications, from pulling images out of a registry to running and supervising the resulting containers.
In this tutorial, we will install a Kubernetes cluster of three nodes: one master node and two worker nodes.
To install the Kubernetes cluster following this guide, you will need three cloud servers or virtual machines running Ubuntu 22.04.
Each server should have at least 2 GB of RAM and at least 2 CPU cores. On machines below these minimums, kubeadm's preflight checks will fail during cluster initialization.
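You can check the requirements above with a small shell sketch. The check_resources helper and its thresholds are illustrative, not standard tooling; note that the memory floor is set to 1700 MB rather than 2048 MB because a nominal 2 GB VM reports slightly less usable memory, which is also roughly the floor kubeadm's own preflight check uses.

```shell
# Preflight sketch: compare a server's resources against the stated minimums.
# check_resources is a hypothetical helper, not a standard tool.
check_resources() {
  local cpus="$1" mem_mb="$2"
  # 2 CPU cores required; ~1700 MB memory floor (a nominal 2 GB VM reports less than 2048 MB).
  if [ "$cpus" -ge 2 ] && [ "$mem_mb" -ge 1700 ]; then
    echo "OK"
  else
    echo "INSUFFICIENT"
  fi
}

# On each server, feed in the real values from nproc and free:
check_resources "$(nproc)" "$(free -m | awk '/^Mem:/{print $2}')"
```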
Some OS-level settings need to be configured before installing and initializing the Kubernetes cluster. The commands provided in this section should be run on all three servers as the root user.
Update repository lists, upgrade all packages, and install necessary packages:
apt update && apt -y upgrade && apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
Disable swap:
Kubernetes requires swap to be disabled on every node; by default, the kubelet refuses to start while swap is active. First, check whether swap is in use with the command:
free -h
If swap is active, disable it permanently by editing the /etc/fstab file:
nano /etc/fstab
Find the line containing swap.img and comment it out by adding a # at the beginning. Then save the file and reboot the server:
reboot
Alternatively, run swapoff -a to deactivate swap immediately without a reboot; the fstab edit still keeps it disabled across future reboots.
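For scripted or repeatable setups, the manual fstab edit can also be done non-interactively with sed. The sketch below demonstrates the substitution on a sample file rather than on the real /etc/fstab; the swap.img entry name matches a default Ubuntu 22.04 install but may differ on yours.

```shell
# Demonstrate the edit on a sample line first (safer than touching /etc/fstab directly).
printf '/swap.img none swap sw 0 0\n' > /tmp/fstab.sample

# Comment out any uncommented line mentioning swap.img:
sed -i '/^[^#].*swap\.img/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample

# On a real server, back up first, then apply the same edit:
#   cp /etc/fstab /etc/fstab.bak
#   sed -i '/^[^#].*swap\.img/ s/^/#/' /etc/fstab
```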
Load additional network modules:
Create a configuration file named k8s.conf to load the necessary kernel modules at boot:
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
Load the modules:
modprobe overlay
modprobe br_netfilter
Enable network bridge parameters:
Configure kernel parameters for network traffic routing:
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
Apply the kernel parameters without rebooting:
sysctl --system
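To confirm the three parameters took effect, you can read them back with sysctl -n. The show_k8s_sysctls wrapper below is just an illustrative helper name; each key should report 1 on a correctly configured node, and the bridge keys report "not set" if the br_netfilter module is not loaded.

```shell
# Read back the parameters set above; each should report 1 on a configured node.
# show_k8s_sysctls is an illustrative helper, not a standard command.
show_k8s_sysctls() {
  local key
  for key in net.bridge.bridge-nf-call-iptables \
             net.bridge.bridge-nf-call-ip6tables \
             net.ipv4.ip_forward; do
    echo "$key = $(sysctl -n "$key" 2>/dev/null || echo 'not set')"
  done
}
show_k8s_sysctls
```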
Configure the firewall:
If using UFW or another firewall, open the following control-plane ports: 6443 (Kubernetes API server), 2379-2380 (etcd), 10250 (kubelet API), 10259 (kube-scheduler), and 10257 (kube-controller-manager). On worker nodes, also open 10250 and the NodePort range 30000-32767. Alternatively, disable UFW:
systemctl stop ufw && systemctl disable ufw
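If you prefer to keep UFW enabled, the ports can be opened in one loop. The sketch below only prints the ufw commands as a dry run; remove the echo to apply the rules for real as root.

```shell
# Control-plane ports from the list above.
K8S_PORTS="6443 2379 2380 10250 10259 10257"

# Dry run: print the rules that would be added. Remove 'echo' to apply them.
for port in $K8S_PORTS; do
  echo ufw allow "${port}/tcp"
done
```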
With the operating system now configured, we can proceed to install the CRI-O container runtime and Kubernetes.
Since Kubernetes doesn't run containers by itself, it requires a container runtime. We'll install CRI-O, a lightweight container runtime.
Execute these commands on all three servers.
Set variables for downloading the appropriate CRI-O version. Note that CRI-O's minor version should generally match the Kubernetes minor version you plan to install; adjust CRIO_VERSION if the repository below does not provide a matching build:
export OS=xUbuntu_22.04
export CRIO_VERSION=1.25
Add the repositories:
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
Import the GPG keys (apt-key is deprecated on Ubuntu 22.04 and prints a warning, but still works):
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
Install CRI-O and additional utilities:
apt update && apt -y install cri-o cri-o-runc cri-tools
Start CRI-O and enable it to start at boot:
systemctl start crio && systemctl enable crio
Verify CRI-O status:
systemctl status crio
Now, let's install Kubernetes. Perform these steps on all three servers.
Add the Kubernetes GPG key, creating the keyrings directory first in case it does not yet exist:
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the Kubernetes repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
Install kubelet, kubeadm, and kubectl:
apt update && apt -y install kubelet kubeadm kubectl && apt-mark hold kubelet kubeadm kubectl
Initialize the master node on the designated master server. The 10.244.0.0/16 pod network CIDR matches the default used by the Flannel plugin installed later:
kubeadm init --pod-network-cidr=10.244.0.0/16
Follow the post-initialization steps printed by kubeadm, including setting up your kubeconfig file for cluster administration:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join worker nodes to the cluster:
Execute the kubeadm join command provided in the kubeadm init output on the two worker servers as the root user. If you no longer have that output, regenerate the join command on the master node:
kubeadm token create --print-join-command
Verify the nodes in the cluster from the master node:
kubectl get nodes
Note that the nodes remain in the NotReady state until a pod network plugin is installed below.
You can also view all the pods in the cluster:
kubectl get po -A
Install the Flannel network plugin (the project has moved from coreos to the flannel-io organization on GitHub; the old path redirects):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Check that the Flannel pods reach the Running state:
kubectl get po -n kube-flannel
Verify the cluster by deploying a simple Nginx web server:
Create a deployment file:
nano nginx_deployment.yaml
Add the following content to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
And apply it:
kubectl apply -f nginx_deployment.yaml
Check the deployment status:
kubectl get po -n default
Test the Nginx deployment:
Get the IP address of the pod. To do this, use the kubectl describe command with the name of one of the running pods as a parameter (the hash suffix in your pod name will differ):
kubectl describe pod nginx-deployment-848dd6cfb5-rn5bv
You can also list pod IPs directly with kubectl get pods -o wide.
Retrieve the IP address from the IP field and send a request using curl:
curl -i 10.244.1.2
If successful, you'll receive a 200 response code from Nginx.
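The 200 check can also be scripted. The http_ok helper below is an illustrative function, not a standard tool: it reads an HTTP response from stdin and succeeds only when the status line contains a 200 code.

```shell
# Illustrative helper: succeed only if the response's status line reports 200.
http_ok() {
  head -n 1 | grep -q ' 200 '
}

# Against the pod from the step above (10.244.1.2 is the example IP):
#   curl -si 10.244.1.2 | http_ok && echo "Nginx is serving"

# Self-contained demonstration with a canned response:
printf 'HTTP/1.1 200 OK\r\nServer: nginx\r\n' | http_ok && echo "status OK"
```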
Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu 22.04 with a master node and two worker nodes, configured the CRI-O container runtime, and deployed a simple Nginx web server to verify that the cluster is functioning correctly. This setup provides a solid foundation for further exploration and deployment of more complex applications in your Kubernetes environment.