Deploying a Kubernetes Cluster
Beginning DevOps engineers quickly find themselves needing to deploy a Kubernetes cluster, which is typically used to run and manage Docker containers. Let's look at a solid way to deploy a Kubernetes cluster on Ubuntu, and then summarize other possible options.
Kubernetes for DevOps: deploying, running, and scaling up
Let's start with a bit of important terminology. By cluster, we mean a pool of resources under Kubernetes management. A cluster includes at least one master (control-plane) node and one worker node. Worker nodes run the containers, while Kubernetes monitors the nodes and automatically manages and scales the cluster. The easiest way to deploy a Kubernetes cluster is as follows.
Deploying a cluster on Ubuntu: step-by-step instructions
For deployment, we will need external IPs for each node, and each node needs to have 2 GB RAM and 2 CPU cores. For Ubuntu, it is desirable to increase the amount of RAM to 4 GB and provide 30-35 GB of disk space. This configuration is enough to start, but you may need to add extra cloud resources later when the number of running containers increases. With Hostman, you can do this "on the fly".
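You can quickly check whether a node meets these minimums with a few standard Linux commands (the thresholds in the comments are the ones from the text above):

```shell
# Check CPU cores, RAM, and disk space against the minimum requirements.
nproc                                                                # should print 2 or more
awk '/MemTotal/ {printf "%d MB total RAM\n", $2/1024}' /proc/meminfo # 2048+ MB (4096+ recommended)
df -h /                                                              # aim for 30-35 GB of disk
```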
We assume that you have already installed the OS and have two servers (nodes), one of which will be used as a master and the other as a worker.
Step 1: Generate SSH keys
You will need to generate SSH keys for each node so that you can manage the cluster remotely. Start with this command:
ssh-keygen
You can use the -t flag to specify the type of key to generate. For example, to create an RSA key, execute:
ssh-keygen -t rsa
You can also use the -b flag to specify the bit size:
ssh-keygen -b 2048 -t rsa
Now, you can specify the path to the file in which to store the key. The default path and file name are usually offered in this format: /home/user_name/.ssh/id_rsa. Press Enter if you want to use the default path and file name. Otherwise, enter the desired path and file name, and then press Enter. Next, you will be prompted to enter a passphrase. We recommend setting one to protect the key from unauthorized use.
After confirming the passphrase, the program will generate a pair of SSH keys, public and private, and save them to the specified path. The default key file names are id_rsa for the private key and id_rsa.pub for the public key.
Note the path and file names of the private and public key files. You will need to copy the SSH public key to the remote device. To log in, you must specify the path to the corresponding SSH private key and enter the passphrase when prompted.
And one more important point regarding security: never share the SSH private key, otherwise anyone can get access to the server.
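The key-generation step can also be scripted non-interactively; a sketch (the key path, the comment, and the node address 203.0.113.10 are placeholders, not values from this setup):

```shell
# Generate a 2048-bit RSA key pair without prompts; -N "" sets an empty
# passphrase for scripting - use a real passphrase in production.
ssh-keygen -t rsa -b 2048 -f ./k8s_node_key -N "" -C "k8s-admin"

# Install the public key on a node so you can log in with the private key.
# 203.0.113.10 is an example address - substitute your node's external IP.
# ssh-copy-id -i ./k8s_node_key.pub root@203.0.113.10
```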
Step 2: Install packages
Now, let's connect to the worker node.
First, update the package list:
sudo apt-get update
Next, install the required packages via sudo, separating the package names with spaces:
sudo apt-get install apt-transport-https ca-certificates curl -y
The -y flag at the end automatically answers "yes" to all system prompts.
Step 3: Obtain the GPG key
To do this, create the keyring directory and download Docker's GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Step 4: Install Docker
Finally, let's install Docker. Add the repository:
sudo add-apt-repository 'deb [arch=amd64] your_URL_here'
Instead of your_URL_here, specify the address of the real repository, depending on your OS version.
For example, for Ubuntu 22.04 'Jammy' the command will look like this:
sudo add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable'
Next, update the packages:
sudo apt update
Then install Docker Engine:
sudo apt install docker-ce -y
Check that Docker is successfully installed:
sudo docker run hello-world
Step 5: Install Kubernetes modules
Now, we need to install the following Kubernetes modules:
Kubelet. We will need it on each node, as it manages the state of the node's containers;
Kubeadm. It automates the installation and configuration of the other Kubernetes modules. It should also be installed on all nodes;
Kubectl. It's the command-line tool used in all Kubernetes projects, as it is what runs your commands against the cluster.
To install the modules, enter:
sudo apt-get install -y kubelet kubeadm kubectl
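Note that these packages are not in Ubuntu's default repositories, so the install may fail until you add the Kubernetes apt repository first. A sketch, assuming the community-owned pkgs.k8s.io repository and the v1.30 version stream (substitute the minor version you want to track):

```shell
# Add the Kubernetes apt repository key and source (run before the install).
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
```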
And then restart the container runtime:
sudo systemctl restart containerd
Step 6: Create a cluster
After configuring one node, you can easily create and deploy as many copies of it as you need using cloning. To do this, go to your server's page in the Hostman control panel and click Clone to create an exact copy of your node.
Next, we need to turn one of the nodes into the master node from which we will manage the cluster. On it, enter the command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
In the output, we will get a long message starting with the line Your Kubernetes control-plane has initialized successfully!. This means that the cluster is created.
Now, go to the last lines of the output, which contain the kubeadm join command with the token. Copy and save it in any text editor because you will need it later for further configuration.
Step 7: Start the cluster
Use the following commands to make kubectl work for your user (kubeadm itself prints them at the end of its init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Next, allow containers to be scheduled on the master node with the following (on newer Kubernetes versions the taint is named node-role.kubernetes.io/control-plane- instead of master-):
kubectl taint nodes --all node-role.kubernetes.io/master-
Step 8: Provide intranet communication
For this purpose, install the Flannel network add-on, the latest version of which can be found in the Flannel GitHub repository. For example:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Next, to test it, enter:
kubectl -n kube-system get pods
Step 9: Create a token
Now, we need a token to authorize the worker nodes. Use the previously saved one or, if you forgot to save it, list the existing tokens with:
kubeadm token list
Once the token is created, let's start deploying the cluster. Note that the token is only valid for 24 hours, but you can always generate a new one using the following command:
kubeadm token create --print-join-command
Step 10: Connect worker nodes
So, our cluster is up and running. Let's start connecting worker nodes to it using the token (IP and token values below are given just as an example):
kubeadm join 172.31.43.204:6443 --token fg691w.pu5qexz1n654vznt --discovery-token-ca-cert-hash sha256:[insert the generated hash here and remove the square brackets]
If an error occurs (this sometimes happens), simply restart the cluster and re-enter the kubeadm join command above.
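If you saved the token but not the CA-certificate hash, you can recompute the hash on the master node. A sketch, assuming the default kubeadm certificate path /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the value for --discovery-token-ca-cert-hash from a CA cert:
# it is the SHA-256 digest of the CA's public key in DER form.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master: ca_cert_hash /etc/kubernetes/pki/ca.crt
# Pass the result to kubeadm join prefixed with "sha256:".
```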
Step 11: Check if it works
That's all. Now, we need to see if the nodes are responding.
kubectl get pods --all-namespaces
kubectl get nodes
If the output shows Running and Ready, everything is done correctly.
Now, let's briefly look at other ways to deploy a cluster, particularly with VMware and Azure applications.
Other ways to deploy
VMware
To deploy a cluster with VMware, you will need vCloud Director with CSE installed and, of course, Kubernetes itself with the kubectl tool we discussed above.
CSE, or Container service extension, is an extension for VMware products that provides full support for Kubernetes clusters in a virtualized infrastructure. The system requirements for the cluster and its nodes are the same as in the example above. The process of installing and deploying a Kubernetes cluster via vCloud Director is described in the documentation.
Azure
We will need the Azure CLI or PowerShell. In the Azure CLI, the cluster is created via the az aks create command. A typical minimal invocation looks like this (the resource group and cluster names are placeholders; substitute your own values):
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys
If you are using PowerShell, the equivalent command is:
New-AzAksCluster -ResourceGroupName myResourceGroup_name_here -Name myAKSCluster_name_here -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName_here>
Of course, Ubuntu is not the only OS where you can deploy a cluster. Almost all Linux-based systems are suitable for this, but keep in mind that the commands you enter may slightly differ. On Ubuntu, Docker can be installed as follows:
apt-get install -y docker.io
But in CentOS, for example, the command will look a little different:
yum install -y docker