In a Kubernetes cluster, every pod can reach every other pod. This is accomplished by deploying a pod networking solution to the cluster. A pod network is an internal virtual network that spans all the nodes in the cluster and to which all the pods connect. Through this network, pods are able to communicate with each other. There are many solutions available for deploying such a network. For example:
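Each pod gets its own IP on this network, which can be seen with `kubectl get pods -o wide`. The pod names, IPs, and node names below are purely illustrative:

```
$ kubectl get pods -o wide
NAME      READY   STATUS    IP           NODE
webapp    1/1     Running   10.244.1.2   node-1
db        1/1     Running   10.244.2.3   node-2
```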
- node 1 has a web application pod
- node 2 has a database pod
The web app can reach the database simply by using the IP of the database pod. However, there is no guarantee that the IP of the database pod will always remain the same. A better way for the web app to access the database is through a service. The service also gets an IP address assigned to it, and whenever a pod tries to reach the service using its IP or name, the service forwards the traffic to the pod.
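A minimal sketch of such a service, assuming the database pod carries a label like `app: db` (both the label and the name `db-service` are hypothetical):

```yaml
# Hypothetical ClusterIP service exposing a database pod on port 3306.
apiVersion: v1
kind: Service
metadata:
  name: db-service        # the name pods use to reach the database
spec:
  selector:
    app: db               # assumed label on the database pod
  ports:
    - port: 3306          # port the service listens on
      targetPort: 3306    # container port on the pod
```

The web app can then connect to `db-service:3306`; the cluster DNS resolves that name to the service's cluster IP, which stays stable even when the database pod is recreated with a new pod IP.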
The service cannot join the pod network because the service is not an actual object. It is not a container like a pod, so it does not have any interfaces or an actively listening process. It is a virtual component that lives only in Kubernetes memory. But we also said that the service should be accessible across the cluster, from any node, so how is that achieved? That's where kube-proxy comes in.
Kube-proxy is a process that runs on each node in the Kubernetes cluster. Its job is to look for new services, and every time a new service is created, it creates the appropriate rules on each node to forward traffic destined for those services to the backing pods. One way it does this is by using iptables rules. In this case, it creates an iptables rule on each node in the cluster to forward traffic heading to the IP of the service to the IP of the actual pod.
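These rules can be inspected on any node with the iptables command. For a hypothetical service named db-service with cluster IP 10.96.0.12, the NAT table output contains entries along these lines (the chain name, IPs, and comment format here are only indicative):

```
$ sudo iptables -t nat -L KUBE-SERVICES -n | grep db-service
KUBE-SVC-XXXXXXXXXXXXXXXX  tcp  --  0.0.0.0/0  10.96.0.12  /* default/db-service cluster IP */ tcp dpt:3306
```

Traffic matching the service IP is handed to a per-service chain, which DNATs it to one of the backing pod IPs.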
Kube-Proxy with Kubeadm
```
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-wtk5k              1/1     Running   1          7d
coredns-f9fd979d6-x5zxv              1/1     Running   1          7d
etcd-kubemaster                      1/1     Running   1          7d
kube-apiserver-kubemaster            1/1     Running   1          7d
kube-controller-manager-kubemaster   1/1     Running   1          7d
kube-proxy-jnf5q                     1/1     Running   1          6d23h
kube-proxy-m9krm                     1/1     Running   1          6d23h
kube-proxy-zfbsh                     1/1     Running   1          7d
kube-scheduler-kubemaster            1/1     Running   1          7d
weave-net-g4l7r                      2/2     Running   3          7d
weave-net-skdlq                      2/2     Running   4          6d23h
weave-net-xg67h                      2/2     Running   4          6d23h
```
- Note the three kube-proxy pods, one per node: kube-proxy-jnf5q, kube-proxy-m9krm, kube-proxy-zfbsh
```
$ kubectl get daemonset -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   7d6h
```
In fact, kube-proxy is deployed as a DaemonSet, so a single kube-proxy pod is always running on each node in the cluster.
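A DaemonSet has no replica count; its controller simply places one pod on every matching node. A trimmed-down sketch of what the kube-proxy DaemonSet looks like (simplified, not the full manifest kubeadm generates; the image tag is illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      nodeSelector:
        kubernetes.io/os: linux   # matches the NODE SELECTOR column above
      containers:
        - name: kube-proxy
          image: registry.k8s.io/kube-proxy:v1.28.0   # version is illustrative
```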