Service Networking in Kubernetes

You would rarely configure pods to communicate with each other directly. If you want a pod to access an application hosted on another pod, you would always use a service.

When a service is created, it is accessible from all pods in the cluster, irrespective of which nodes those pods are on. While a pod is hosted on a node, a service is hosted across the cluster; it is not bound to a specific node.
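
For example, once the ClusterIP service defined later in this post exists, any pod can reach it by name through the cluster DNS, no matter which node the pod runs on. A minimal sketch using a throwaway curl pod (the curlimages/curl image and the abridged output are illustrative):

$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl nginx-clusterip-service
<title>Welcome to nginx!</title>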

ClusterIP & NodePort

ClusterIP

A ClusterIP service is only accessible from within the cluster. If pod 10.244.0.3 runs a database application that should only be accessed from within the cluster, then this type of service is fine.

apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-service

spec:
  type: ClusterIP
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          protocol: TCP
$ kubectl create -f nginx-clusterip-service.yaml 
service/nginx-clusterip-service created

$ kubectl get service
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-clusterip-service   ClusterIP   10.108.251.120   <none>        80/TCP         4s

$ minikube ssh
      
$ curl 10.108.251.120
<title>Welcome to nginx!</title>

NodePort

However, suppose the pod 10.244.2.2 is a web application that needs to be accessible from outside the cluster. For that we have another service type, NodePort. This service also has an IP assigned to it and works just like a ClusterIP service, but in addition it exposes the application on a port on all nodes in the cluster. That way, external users or applications have access to the service.

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 31110
  selector:
    app: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          protocol: TCP
$ kubectl create -f nginx-nodeport-service.yaml 
service/nginx-nodeport-service created

$ kubectl get service
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-nodeport-service    NodePort    10.107.136.235   <none>        80:31110/TCP   24s

$ minikube ip
192.168.64.3

$ curl 192.168.64.3:31110
<title>Welcome to nginx!</title>

$ minikube ssh

$ curl 10.107.136.235
<title>Welcome to nginx!</title>

Services in Kubernetes
  • How are the Services getting the IP addresses?
  • How are they made available across all the nodes in the cluster?
  • How is the NodePort service made available to external users through a port on each node?
  • Who is doing all that, and how and where do we see it?

Every Kubernetes node runs a kubelet process, which is responsible for creating pods. Each kubelet watches for changes in the cluster through the kube-apiserver, and whenever a new pod is to be created, it creates the pod on its node and then invokes the CNI plugin to configure networking for that pod.

Similarly, each node runs another component known as kube-proxy. Kube-proxy also watches for changes in the cluster through the kube-apiserver, and every time a new service is created, kube-proxy gets into action. Unlike pods, services are not created on each node or assigned to a particular node; a service is a cluster-wide concept that exists across all the nodes in the cluster. As a matter of fact, it does not really exist at all. There is no server or process actually listening on the IP of the service, and there are no namespaces or interfaces for it. It is just a virtual object.
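
You can see kube-proxy on your own cluster; in a minikube or kubeadm setup it typically runs as a pod in the kube-system namespace, one per node (the pod name below is illustrative):

$ kubectl get pods -n kube-system | grep kube-proxy
kube-proxy-7vdrx   1/1   Running   1   2d5h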

So, how can we access the application on the pod through the service?

When we create a service object in Kubernetes, it is assigned an IP address from a pre-defined range. The kube-proxy component running on each node gets that IP address and creates forwarding rules on every node in the cluster, saying: any traffic coming to this IP, the IP of the service, should go to the IP of the pod. Once this is in place, whenever a pod tries to reach the IP of the service, the traffic is forwarded to the pod's IP address, which is accessible from any node in the cluster. And remember, it is not just the IP that is matched; it is the IP and port combination.

Whenever services are created or deleted, the kube-proxy component creates or deletes these rules.

How are the rules created?

Kube-proxy supports different modes for creating these rules: userspace, where kube-proxy listens on a port for each service and proxies connections to the pods; ipvs, where it creates IPVS rules; and the third and default option, iptables. The mode can be set using the --proxy-mode option when configuring the kube-proxy service; if it is not set, it defaults to iptables. So we will look at how the iptables rules are configured by kube-proxy.
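
As a sketch, the mode is passed as a flag when kube-proxy starts, and the chosen proxier shows up in the kube-proxy logs (the pod name below is illustrative):

$ kube-proxy --proxy-mode [userspace | iptables | ipvs]

$ kubectl logs -n kube-system kube-proxy-7vdrx | grep -i proxier
Using iptables Proxier.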

Let us see how the iptables rules are configured by kube-proxy and how you can view them on the nodes.

$ kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE    IP           NODE       
nginx-pod   1/1     Running   1          2d5h   172.17.0.2   worknode1

$ kubectl get service 
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-clusterip-service   ClusterIP   10.108.251.120   <none>        80/TCP         61m  

We have a pod named nginx-pod on node worknode1 with IP address 172.17.0.2. We also created a ClusterIP service to make this pod available within the cluster. When the service is created, Kubernetes assigns an IP address to it; here it is set to 10.108.251.120. This range is specified in the kube-apiserver option called service-cluster-ip-range, which is by default set to 10.0.0.0/24.

$ kube-apiserver --service-cluster-ip-range <CIDR>
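
On a running cluster you can check which range was actually configured by inspecting the kube-apiserver pod; a sketch assuming a minikube cluster (kubeadm-based clusters commonly use 10.96.0.0/12, which matches the service IPs we saw above):

$ kubectl describe pod kube-apiserver-minikube -n kube-system | grep service-cluster-ip-range
      --service-cluster-ip-range=10.96.0.0/12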

The pod network range and the service cluster IP range should not overlap, so there can never be a case where a pod and a service are assigned the same IP address.
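
To see the rules kube-proxy created for our ClusterIP service, search the NAT table on the node for the service name. A simplified sketch of the output (the generated KUBE-SVC chain name will differ on your cluster):

$ iptables -L -t nat | grep nginx-clusterip-service
KUBE-SVC-XXXXXXXXXXXXXXXX  tcp  --  anywhere  10.108.251.120  /* default/nginx-clusterip-service: cluster IP */  tcp dpt:80
DNAT                       tcp  --  anywhere  anywhere        /* default/nginx-clusterip-service: */  tcp to:172.17.0.2:80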

When you create a NodePort service, kube-proxy creates iptables rules to forward all traffic coming in on the assigned port on every node to the respective backend pods.
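
The same applies to the NodePort service from earlier; grepping the NAT table shows an extra rule matching port 31110 on the node, alongside the cluster IP and DNAT rules (again a sketch with placeholder chain names):

$ iptables -L -t nat | grep nginx-nodeport-service
KUBE-SVC-YYYYYYYYYYYYYYYY  tcp  --  anywhere  anywhere         /* default/nginx-nodeport-service: */  tcp dpt:31110
KUBE-SVC-YYYYYYYYYYYYYYYY  tcp  --  anywhere  10.107.136.235   /* default/nginx-nodeport-service: cluster IP */  tcp dpt:80
DNAT                       tcp  --  anywhere  anywhere         /* default/nginx-nodeport-service: */  tcp to:172.17.0.2:80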
