ETCD in Kubernetes

ETCD is a distributed, reliable key-value store that is simple, secure, and fast.

https://etcd.io/

A key-value store stores information as key-value pairs: put a key and a value and it saves them in the database; query the key and it returns the value. Keys cannot be duplicated. As such, it is not a replacement for a relational database. Instead, it is used to store and retrieve small chunks of data, such as configuration data, that require fast reads and writes.
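The semantics described above can be sketched with a plain Python dictionary. This is only an illustration of key-value behavior (one value per key, the old value replaced on a duplicate put), not etcd itself:

```python
# Minimal sketch of key-value store semantics (a plain Python dict,
# not etcd): one value per key, overwritten when the key is put again.
store = {}

def put(key, value):
    store[key] = value  # a duplicate key replaces the old value

def get(key):
    return store.get(key)  # returns None for a missing key

put("app/config/log-level", "debug")
put("app/config/log-level", "info")   # same key: the value is replaced
print(get("app/config/log-level"))    # prints "info"
```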

Build the latest version

https://etcd.io/docs/v3.2.17/dl_build/

The ETCD datastore stores information about the cluster, such as the nodes, pods, configs, secrets, accounts, roles, bindings, and others. All the information you see when you run a kubectl get command comes from the ETCD server. Every change you make to your cluster, such as adding nodes or deploying pods or ReplicaSets, is recorded in the ETCD server. If you set up your cluster from scratch, you deploy ETCD yourself by downloading the ETCD binaries, installing them, and configuring ETCD as a service on your master node.

If you set up your cluster using kubeadm, then kubeadm deploys the ETCD server for you as a Pod in the kube-system namespace. You can explore the ETCD database using the etcdctl utility within this pod.

@kubemaster:~$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-wtk5k              1/1     Running   1          6d20h
coredns-f9fd979d6-x5zxv              1/1     Running   1          6d20h
etcd-kubemaster                      1/1     Running   1          6d20h
kube-apiserver-kubemaster            1/1     Running   1          6d20h
kube-controller-manager-kubemaster   1/1     Running   1          6d20h
kube-proxy-jnf5q                     1/1     Running   1          6d20h
kube-proxy-m9krm                     1/1     Running   1          6d20h
kube-proxy-zfbsh                     1/1     Running   1          6d20h
kube-scheduler-kubemaster            1/1     Running   1          6d20h
weave-net-g4l7r                      2/2     Running   3          6d20h
weave-net-skdlq                      2/2     Running   4          6d20h
weave-net-xg67h                      2/2     Running   4          6d20h

Inside the etcd container:

$ kubectl exec -it -n kube-system etcd-kubemaster -- sh
  • namespace: -n kube-system
  • pod name: etcd-kubemaster
sh-5.0# etcdctl version
etcdctl version: 3.4.13
API version: 3.4

To get the keys and values stored by the Kubernetes etcd, run the etcdctl get command:

https://etcd.io/docs/v3.4.0/dev-guide/interacting_v3/
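With etcd 3.4 the v3 API is the default, but etcdctl still needs the client certificates to talk to the server. The sketch below assembles the etcdctl get invocation as a Python argument list; the certificate paths are the kubeadm defaults taken from the pod manifest shown further down, so adjust them for your cluster:

```python
# Sketch: assemble the etcdctl command used to read Kubernetes keys.
# The certificate paths are kubeadm defaults (see the etcd.yaml
# manifest); they are an assumption, adjust for your cluster.
def etcdctl_get(key, prefix=False):
    cmd = [
        "etcdctl",
        "--cacert=/etc/kubernetes/pki/etcd/ca.crt",
        "--cert=/etc/kubernetes/pki/etcd/server.crt",
        "--key=/etc/kubernetes/pki/etcd/server.key",
        "get", key,
    ]
    if prefix:
        cmd.append("--prefix")  # fetch every key under the given prefix
    return cmd

# e.g. to list every key under the Kubernetes registry, you would run
# this command inside the etcd pod (with subprocess.run, or in a shell):
print(" ".join(etcdctl_get("/registry/", prefix=True)))
```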

Kubernetes stores data in a specific directory structure: the root directory is /registry, and under that you have the various Kubernetes constructs, such as minions (nodes), pods, ReplicaSets, deployments, and so on. In a high-availability environment you will have multiple master nodes in the cluster, and therefore multiple ETCD instances spread across the master nodes.
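The layout can be pictured as keys sharing common prefixes. The keys below are illustrative examples of the /registry naming scheme (object type, then namespace, then name), not a dump from a live cluster:

```python
# Illustrative /registry key layout (example keys, not a live dump):
# /registry/<object-type>/<namespace>/<name>
keys = [
    "/registry/minions/kubemaster",
    "/registry/pods/kube-system/etcd-kubemaster",
    "/registry/replicasets/default/frontend-5c9fd",
    "/registry/deployments/default/frontend",
]

def list_prefix(prefix):
    """Return all keys under a prefix, like `etcdctl get --prefix`."""
    return [k for k in keys if k.startswith(prefix)]

print(list_prefix("/registry/pods/"))
```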

ETCD pod definition file created by kubeadm:

$ cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.56.2:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.56.2:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.56.2:2380
    - --initial-cluster=kubemaster=https://192.168.56.2:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.56.2:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.56.2:2380
    - --name=kubemaster
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.4.13-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    resources: {}
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

ANOTE.DEV