- Check that all the nodes are healthy.
- Check the status of the pods running on the cluster.
- Check the logs of the control plane components.
Check that all the nodes are healthy.
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
kubemaster   Ready    control-plane,master   33h   v1.20.2
kubenode01   Ready    <none>                 32h   v1.20.2
kubenode02   Ready    <none>                 32h   v1.20.2
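If a node reports a status other than Ready, describing it shows the node conditions and recent events, which usually point to the cause (the node name here comes from the output above):

$ kubectl describe node kubenode01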
Check the status of the pods running on the cluster.
$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          26h
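If a pod is stuck in a state such as Pending or CrashLoopBackOff, describing it reveals the scheduling events and container status (the pod name is taken from the listing above):

$ kubectl describe pod nginx-pod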
If the control plane components are deployed as pods, as in a cluster set up with the kubeadm tool, we can check that the pods in the kube-system namespace are Running.
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-ht5xb              1/1     Running   2          33h
coredns-74ff55c5b-j4bfc              1/1     Running   2          33h
etcd-kubemaster                      1/1     Running   2          33h
kube-apiserver-kubemaster            1/1     Running   2          33h
kube-controller-manager-kubemaster   1/1     Running   2          33h
kube-proxy-g68k4                     1/1     Running   2          33h
kube-proxy-vcdpn                     1/1     Running   2          32h
kube-proxy-xvclq                     1/1     Running   2          32h
kube-scheduler-kubemaster            1/1     Running   2          33h
weave-net-7cgwj                      2/2     Running   5          32h
weave-net-x7dq7                      2/2     Running   5          32h
weave-net-zgtg7                      2/2     Running   5          32h
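Should any of these control plane pods not be Running, the same describe approach applies in the kube-system namespace, for example for the API server pod listed above:

$ kubectl describe pod kube-apiserver-kubemaster -n kube-system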
If the control plane components are deployed as native services instead, check the status of those services. The kubelet runs as a service on every node in either setup, so we can inspect it here:
$ service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2021-01-29 08:07:10 UTC; 1 day 2h ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 843 (kubelet)
    Tasks: 19 (limit: 2360)
   CGroup: /system.slice/kubelet.service
           └─843 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kube
Jan 29 08:07:33 kubemaster kubelet[843]: E0129 08:07:33.194207 843 kuberuntime_manager.go:702] killPodWithSyncResult failed: failed to "KillPodSandbox" for
Jan 29 08:07:33 kubemaster kubelet[843]: E0129 08:07:33.194223 843 pod_workers.go:191] Error syncing pod 5565c15b-7846-487b-9c84-a893a5323401 ("coredns-74ff
Jan 29 08:07:43 kubemaster kubelet[843]: W0129 08:07:43.568211 843 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace
Jan 29 08:07:43 kubemaster kubelet[843]: W0129 08:07:43.596589 843 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace
Jan 29 08:07:43 kubemaster kubelet[843]: weave-cni: Delete: no addresses for 5ac1a45cdc7c3e422f38d7f6d204e20cfb59adaa1909402ecf6ba8123478a06c
Jan 29 08:07:44 kubemaster kubelet[843]: W0129 08:07:44.442165 843 pod_container_deletor.go:79] Container "bc39bf89a0963d50544983d440b17370afd3c533be101530c
Jan 29 08:07:45 kubemaster kubelet[843]: W0129 08:07:45.554149 843 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace
Jan 29 08:07:45 kubemaster kubelet[843]: W0129 08:07:45.600079 843 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace
Jan 29 08:07:45 kubemaster kubelet[843]: weave-cni: Delete: no addresses for ef251470c24ceb4871a8dafdabb48cfcbfbf2c2ca1e90671421f14a91a827bdd
Jan 29 08:07:46 kubemaster kubelet[843]: W0129 08:07:46.610662 843 pod_container_deletor.go:79] Container "5d12f2234deb628b9102a1678345050f12c407e7a30691592
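On clusters where the other control plane components were also installed as native services (for example, a cluster built from scratch rather than with kubeadm), their status can be checked the same way; the unit names below assume a typical manual installation:

$ service kube-apiserver status
$ service kube-controller-manager status
$ service kube-scheduler status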
Check the logs of the control plane components.

$ kubectl logs -f kube-apiserver-kubemaster -n kube-system
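If a control plane pod has restarted (as the RESTARTS column in the kube-system listing above shows), the logs of the previous container instance can be fetched with the --previous flag:

$ kubectl logs kube-apiserver-kubemaster -n kube-system --previous

For components running as native services, the logs go to the system journal instead and can be followed with journalctl, where -u selects the service unit:

$ sudo journalctl -u kubelet -f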