
How to Track Pod-to-Pod Traffic in Kubernetes

Hostman Team
Technical writer
Kubernetes
16.08.2024
Reading time: 7 min

This article is for anyone who wants a simple and clear introduction to organizing and managing Kubernetes clusters, or who is looking to optimize an existing project. We will discuss the principles and mechanisms that govern pod-to-pod interactions and examine the Kubernetes network model using a VirtualBox-based setup. By studying the default settings of this setup, you'll understand how communication between pods is established in a Kubernetes environment and be able to apply that knowledge in practice. But first, a bit of theory.

Two Kubernetes Standards

Instead of a rigid, built-in network implementation, Kubernetes relies on two standards, CRI (Container Runtime Interface) and CNI (Container Network Interface), each implemented by interchangeable plugins with many configuration options for cluster networking. So instead of a single infrastructural solution, you get a wide range of alternatives, and each one produces its own pod network structure. This lets you choose the approach that best meets your requirements for scalability, performance, fault tolerance, and compatibility with your infrastructure. Such variability naturally brings challenges specific to each network solution, but with proper configuration you'll get optimal network performance, reliable connectivity, and convenient cluster management.

It's worth noting that both standards have several popular implementations. For CRI, the main options are containerd and CRI-O, while widely used CNI plugins include Calico and Flannel. All of them provide the essential functions of a container runtime or pod network, so you can pick whichever best suits your needs and preferences. CRI and CNI plugins complement rather than exclude each other: a cluster needs one of each, and making a particular combination work takes some configuration effort. This task is greatly simplified by distributions where the CRI and CNI plugins come preconfigured, so you can start deploying Kubernetes applications right away.
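If you're not sure which CRI implementation a running cluster uses, you can check the node list: kubectl reports the runtime and its version in the CONTAINER-RUNTIME column (for example, containerd://1.7.x):

[vagrant@controlplane ~]$ kubectl get nodes -o wide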

Among paid distributions, Red Hat OpenShift is noteworthy, but there are also quite functional free products, primarily OKD, minikube, and areguera-k8s-lab, which we will use to learn how to track pod-to-pod traffic.

Cluster Architecture

In this lab (vagrant-k8-lab), containerd and Flannel come preinstalled on each node, giving the cluster the following structure:

[Diagram: lab cluster architecture; each node runs containerd and Flannel. Image source: medium.com]

Note that each node has four network interfaces: eth0, eth1, cni0, and lo. Each of them has its own purpose, which we will discuss in detail below. You may also notice that several pods are already deployed in the cluster; we'll use them for the tests that follow.
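You can confirm this from a shell on any node. The brief listing should include lo, eth0, eth1, and cni0 (the cni0 bridge appears once Flannel has set up pod networking on that node):

[vagrant@node-1 ~]$ ip -br addr show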

Node-to-Internet Traffic, eth0

The CRI plugin needs outgoing traffic to download container images from external registries, and the OS package manager also needs access to remote repositories. When Kubernetes nodes are deployed using VirtualBox, eth0 is attached in NAT mode by default: the guest OS gets the IP 10.0.2.15 and sends its outbound traffic through VirtualBox's built-in NAT service (gateway 10.0.2.2, DNS proxy 10.0.2.3). These defaults ensure that the virtual machines can reach the "outside world" as long as the host running VirtualBox has the necessary connection. Here's how it looks schematically:

[Diagram: node-to-Internet traffic through the NAT interface eth0. Image source: medium.com]

The default VirtualBox network scheme may seem somewhat illogical until you realize that the virtual machines are attached to different networks, even though they all use the same address space. This design choice lets every virtual machine reach external resources while keeping an identical IP addressing scheme across all of them.

In the default scheme, each virtual machine is attached to its own isolated 10.0.2.0/24 network inside VirtualBox, over its own virtual channel. As a result, the virtual machines can interact with the outside world, but not with each other.
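You can verify this isolation from inside any of the VMs: every one of them reports the same eth0 address and the same default gateway, because each sits on its own copy of the NAT network (with VirtualBox defaults you should see 10.0.2.15 on eth0 and a default route via 10.0.2.2):

[vagrant@node-1 ~]$ ip -4 addr show eth0
[vagrant@node-1 ~]$ ip route show default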

Now, let's create a pod using an image:

[vagrant@node-1 ~]$ kubectl run nginx --image=nginx:latest

Then check its status:

[vagrant@node-1 ~]$ kubectl get pods/nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          10m

If the status is Running, everything is fine and the pod is active on node-1, provided the Kubernetes scheduler selected that node. To verify this, run the following commands one after another:

[vagrant@node-1 ~]$ kubectl describe pods/nginx
[vagrant@node-1 ~]$ journalctl -f -u containerd
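If you only want to know which node the scheduler chose, you can also read it straight from the pod spec; the nodeName field is filled in once the pod is bound to a node:

[vagrant@node-1 ~]$ kubectl get pod nginx -o jsonpath='{.spec.nodeName}'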

Node-to-Node Traffic, eth1

For nodes to communicate with each other, they need a direct connection on a shared network. As we just saw, eth0 works in NAT mode and isolates each VM, so it is unsuitable for this purpose; another type of network connection is needed. Here eth1 comes to the rescue, attaching every node to a common private network in a simple and effective scheme:

[Diagram: node-to-node traffic over the shared private network on eth1. Image source: medium.com]

As we can see, each node gets an address on the same 192.168.56.0/24 network: 192.168.56.10 for the control plane node, 192.168.56.11 for the first worker node, and 192.168.56.12 for the second.

Now let's check the connection parameters for both worker nodes by entering the following commands:

[vagrant@controlplane ~]$ traceroute -n 192.168.56.11
[vagrant@controlplane ~]$ traceroute -n 192.168.56.12

If the connection is successful, traceroute lists each hop on the way to the target node along with the round-trip time of every probe in milliseconds.
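The output of the first command should look roughly like this (the times will differ in your environment); a single hop is expected because both nodes sit directly on the 192.168.56.0/24 network:

traceroute to 192.168.56.11 (192.168.56.11), 30 hops max, 60 byte packets
 1  192.168.56.11  0.545 ms  0.412 ms  0.389 ms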

Pod-to-Pod Traffic Within a Node, cni0

Pods also need to communicate when they run on the same node. For this, every pod gets its own unique IP address, even though pods have a limited lifespan and are destroyed as easily as they are created. Assigning those addresses and connecting the pods to the node's cni0 bridge is the job of the CNI plugin, and schematically it looks like this:

[Diagram: pod-to-pod traffic within a node through the cni0 bridge. Image source: medium.com]

Now let's create a pod from an image:

[vagrant@node-1 ~]$ kubectl run httpd --image=httpd

Next, we need to find out the IP of the created pod, which is done using:

[vagrant@controlplane ~]$ kubectl get pods -o wide
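If you prefer to extract just the address, the pod's IP is also exposed as a field in its status (shown here for the httpd pod we just created):

[vagrant@controlplane ~]$ kubectl get pod httpd -o jsonpath='{.status.podIP}'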

Now, using the busybox image, we create a new pod, substituting the obtained IP value:

[vagrant@node-1 ~]$ kubectl run traceroute --image=busybox -- traceroute IP

Finally, check the connection parameters by entering the following command to display the log:

[vagrant@node-1 ~]$ kubectl logs traceroute

If the log shows only one hop, the traffic went through cni0 alone, which means the pod-to-pod communication stayed within a single node.
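For reference, a single-hop log looks roughly like this; the address is whatever pod IP you substituted above (10.244.1.3 here is purely illustrative):

traceroute to 10.244.1.3 (10.244.1.3), 30 hops max, 46 byte packets
 1  10.244.1.3  0.005 ms  0.003 ms  0.002 ms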

Inter-Node Pod-to-Pod Traffic

Single-node Kubernetes clusters are good for testing but don't provide enough fault tolerance for real projects: if the only node goes down, the entire application goes down with it. That's why developers build multi-node clusters, where the load is distributed across nodes. Routing pod traffic between nodes is handled by the Flannel plugin, which provides "smart" routing without any manual management of Kubernetes traffic paths. Here's what a multi-node cluster looks like:

[Diagram: inter-node pod-to-pod traffic routed by Flannel. Image source: medium.com]
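Before running the test, it can be useful to see which pod subnet each node was given: every node receives its own podCIDR, and Flannel routes inter-node pod traffic between these subnets. A quick way to check (assuming node-level CIDR allocation is enabled, as it is in typical kubeadm/Flannel setups):

[vagrant@controlplane ~]$ kubectl get nodes -o custom-columns=NODE:.metadata.name,CIDR:.spec.podCIDR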

Let's create a pod from an image, ensuring that the pod will be placed on the appropriate node (in our case, the first worker node):

[vagrant@controlplane ~]$ kubectl run httpd --image=httpd \
--overrides='{"spec": {"nodeSelector": {"kubernetes.io/hostname": "node-1"}}}'

Then enter another familiar command:

[vagrant@controlplane ~]$ kubectl get pods -o wide

Now, as in the previous example, create a new pod using the busybox image, but this time on the second worker node, substituting the obtained IP value:

[vagrant@controlplane ~]$ kubectl run traceroute --image=busybox \
--overrides='{"spec": { "nodeSelector": {"kubernetes.io/hostname": "node-2"}}}' \
-- traceroute IP

Finally, execute the command to track the routing:

[vagrant@controlplane ~]$ kubectl logs traceroute

If everything works correctly, you'll see three hops corresponding to the cni0-eth1-cni0 scheme.
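The three-hop log might look roughly like this; the addresses are illustrative and depend on your pod subnets and node IPs, but the pattern is the source node's cni0 gateway, then the destination node's eth1 address, then the destination pod:

traceroute to 10.244.1.3 (10.244.1.3), 30 hops max, 46 byte packets
 1  10.244.2.1  0.008 ms  0.005 ms  0.004 ms
 2  192.168.56.11  0.512 ms  0.430 ms  0.402 ms
 3  10.244.1.3  0.611 ms  0.498 ms  0.455 ms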

Conclusion

In this tutorial, we reviewed how easily you can manage Kubernetes network configuration using standard Kubernetes tools and a preassembled distribution, building various schemes and organizing and tracing pod traffic both within a single node and between nodes in a cluster.

