The Kube API Server is the primary management component in Kubernetes. When you run a kubectl command, the kubectl utility is in fact communicating with the Kube API Server. The Kube API Server first authenticates and validates the request, then retrieves the data from the ETCD cluster and responds with the requested information.
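Because kubectl is just an API client, you can also reach the Kube API Server's REST API directly. A minimal sketch, assuming a kubeadm cluster whose API server advertises 192.168.56.2:6443 (as in the manifest shown later in this section); the default RBAC rules permit anonymous access to the /version endpoint:

$ curl -k https://192.168.56.2:6443/version

$ kubectl get --raw /version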
Simulation: Creating a pod with no node assigned.
- Kube API Server – authenticates the user and validates the request.
- ETCD – the Kube API Server retrieves the current cluster state from the ETCD datastore.
- ETCD – the Kube API Server writes the new pod object to ETCD and responds that the pod has been created. At this point the pod exists, but no node has been assigned to it.
- Scheduler – the scheduler continuously monitors the Kube API Server and notices the new pod with no node assigned. It identifies the right node to place the pod on and communicates that back to the Kube API Server.
- Kube API Server – updates the node assignment in ETCD.
- Kube API Server – passes the information to the kubelet on the appropriate worker node.
- Kubelet – creates the pod on the node and instructs the container runtime engine to deploy the application image. Once done, the kubelet updates the status back to the Kube API Server, which updates the data in ETCD. (This whole flow can be traced on a live cluster, as shown below.)
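A minimal way to trace this flow on a live cluster, assuming a default kubeadm setup (the pod name nginx is arbitrary; node names, ages, and IPs will differ, and the output is abbreviated):

$ kubectl run nginx --image=nginx
pod/nginx created

$ kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP          NODE
nginx   1/1     Running   0          20s   10.44.0.1   kubenode01

$ kubectl describe pod nginx
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/nginx to kubenode01
  Normal  Pulling    29s   kubelet            Pulling image "nginx"
  Normal  Created    25s   kubelet            Created container nginx
  Normal  Started    25s   kubelet            Started container nginx

The Scheduled event comes from the scheduler and the remaining events from the kubelet, matching the steps above.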
The Kube API Server is at the center of all the different tasks that need to be performed to make a change in the cluster.
It is responsible for authenticating and validating requests, and for retrieving and updating data in the ETCD datastore; in fact, the Kube API Server is the only component that interacts directly with ETCD.
The other components, such as the scheduler, the Kube Controller Manager, and the kubelet, use the Kube API Server to perform updates in the cluster.
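On a kubeadm cluster this separation is visible on disk: each of these components is given its own kubeconfig file pointing at the Kube API Server, while the ETCD client certificates are held only by the API server (the listing below shows the kubeadm defaults on the master node):

$ ls /etc/kubernetes
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf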
Kube API Server with Kubeadm
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-wtk5k              1/1     Running   1          6d21h
coredns-f9fd979d6-x5zxv              1/1     Running   1          6d21h
etcd-kubemaster                      1/1     Running   1          6d21h
kube-apiserver-kubemaster            1/1     Running   1          6d21h
kube-controller-manager-kubemaster   1/1     Running   1          6d21h
kube-proxy-jnf5q                     1/1     Running   1          6d21h
kube-proxy-m9krm                     1/1     Running   1          6d21h
kube-proxy-zfbsh                     1/1     Running   1          6d21h
kube-scheduler-kubemaster            1/1     Running   1          6d21h
weave-net-g4l7r                      2/2     Running   3          6d21h
weave-net-skdlq                      2/2     Running   4          6d21h
weave-net-xg67h                      2/2     Running   4          6d21h
- kube-apiserver-kubemaster is the Kube API Server pod deployed by kubeadm.
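If the cluster was set up from scratch rather than with kubeadm, the kube-apiserver typically runs as a systemd service instead of a static pod. In that case its options live in the service unit file, and on either kind of setup the effective options can be read from the running process (the unit file path below is an assumption about a typical from-scratch install):

$ cat /etc/systemd/system/kube-apiserver.service

$ ps aux | grep kube-apiserver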
kube-apiserver pod definition file with kubeadm
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.56.2:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.56.2
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.19.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.56.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.56.2
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.56.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
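Note that this is a static pod: the kubelet on the master node watches the manifests directory and recreates the kube-apiserver pod whenever this file changes, so edits take effect without any kubectl command. The watched directory comes from the kubelet's staticPodPath setting (the config path below is the kubeadm default):

$ grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests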