Cluster Upgrade in Kubernetes

Core control plane components

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • kubelet
  • kube-proxy

And kubectl, the command-line utility

Components can be at different release versions. The kube-apiserver is the primary component in the control plane, and it is the component that all the other components talk to, so none of the other components should ever be at a version higher than the kube-apiserver. The controller-manager and scheduler can be up to one minor version lower, and the kubelet and kube-proxy can be up to two minor versions lower. The kubectl utility, however, can be one minor version higher (or lower) than the API server.
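
For instance, a quick way to inspect the versions currently in your cluster (the exact output depends on your setup):

$ kubectl version --short        # client (kubectl) and server (kube-apiserver) versions
$ kubectl get nodes              # the VERSION column shows each node's kubelet version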

At any time, Kubernetes supports only the three most recent minor versions. So, a good time to upgrade is when a new version is released. Also, when upgrading your Kubernetes cluster, the recommended approach is to upgrade one minor version at a time:

ex) v1.10 → v1.11 → v1.12 → v1.13 → v1.14 → v1.15

The upgrade process depends on how the cluster is set up.

  • If your cluster is a managed Kubernetes cluster deployed on a cloud service provider, for instance Google Kubernetes Engine (GKE), the provider lets you upgrade the cluster easily (see the sketch after this list).
  • If you deployed the cluster using tools like kubeadm then the tool can help you plan and upgrade the cluster.
  • If you deployed the cluster from scratch then you manually upgrade the different components of the cluster yourself.
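
As a minimal sketch of the managed case, this is roughly how an upgrade looks on GKE (the cluster name and target version here are hypothetical):

$ # Upgrade the control plane first, then the node pool
$ gcloud container clusters upgrade my-cluster --master --cluster-version 1.19.6
$ gcloud container clusters upgrade my-cluster --node-pool default-pool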

Upgrade process

You have a cluster with master and worker nodes running in production, hosting pods. The nodes and components are at version 1.0. Upgrading a cluster involves two major steps:

  1. Upgrade your Master Nodes
    • While the master is being upgraded, control plane components such as the apiserver, scheduler, and controller-manager go down briefly. The master node going down does not mean your worker nodes and the pods on the cluster are impacted; workloads that are already running continue to serve users. However, since the master is down, all management functions are down: you cannot use kubectl to manage the cluster, and no new pods are scheduled. Once the upgrade is complete and the cluster is back up, it should function normally.
  2. Upgrade your Worker Nodes
    • One way: upgrade all of them at once, but then your pods are down and users are no longer able to access the applications. Once the upgrade is complete, the nodes are back up, new pods are scheduled, and users can resume access.
    • Second way: upgrade one node at a time; the pods on the node being upgraded move to the remaining nodes, so the applications stay available.
    • Third way: add new nodes with the newer software version to the cluster, move the workloads over, and remove the old nodes. This is especially convenient if you are on a cloud environment where you can easily provision new nodes and decommission old ones (see the sketch after this list).
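
A minimal sketch of the third approach, assuming a new node at the target version has already joined the cluster (node names are hypothetical):

$ kubectl drain old-node --ignore-daemonsets   # evict pods; they are rescheduled onto the new node
$ kubectl delete node old-node                 # decommission the old node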

Kubeadm upgrade

$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.4
[upgrade/versions] kubeadm version: v1.19.4
I1222 12:06:21.539144   30407 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.19
[upgrade/versions] Latest stable version: v1.19.6
[upgrade/versions] Latest stable version: v1.19.6
[upgrade/versions] Latest version in the v1.19 series: v1.19.6
[upgrade/versions] Latest version in the v1.19 series: v1.19.6

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.4   v1.19.6

Upgrade to the latest version in the v1.19 series:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.4    v1.19.6
kube-controller-manager   v1.19.4    v1.19.6
kube-scheduler            v1.19.4    v1.19.6
kube-proxy                v1.19.4    v1.19.6
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.19.6

Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.6.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
  • Lists all the control plane components, their current versions, and the versions they can be upgraded to.
  • The kubelet is not upgraded by kubeadm; it must be upgraded manually on each node afterwards.

First, upgrade the kubeadm tool itself on the master node, then apply the control plane upgrade using the target version reported by the plan:

$ apt-get install -y kubeadm=1.19.6-00
$ kubeadm upgrade apply v1.19.6
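
If the kubeadm package is held on your system (as the official installation docs recommend), unhold it for the upgrade and re-hold it afterwards:

$ apt-mark unhold kubeadm
$ apt-get update && apt-get install -y kubeadm=1.19.6-00
$ apt-mark hold kubeadm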

Depending on the setup, you may or may not have a kubelet running on your master node.

A cluster deployed with kubeadm has a kubelet on the master node, which is used to run the control plane components as pods on the master node. In that case, upgrade and restart the kubelet on the master node as well:

$ apt-get install -y kubelet=1.19.6-00
$ systemctl restart kubelet
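
After the kubelet restarts, the master node should report the new version while the workers are still on the old one. A quick check (node names and ages here are illustrative and will vary):

$ kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
kubemaster   Ready    master   100d   v1.19.6
kubenode01   Ready    <none>   100d   v1.19.4
kubenode02   Ready    <none>   100d   v1.19.4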

Worker nodes upgrade

We need to move the workloads from the first worker node to the other nodes. Running kubectl drain lets you safely evict all the pods from a node so they are recreated on the other nodes; it also cordons the node, marking it unschedulable so that no new pods are placed on it until it is uncordoned.

$ kubectl drain kubenode01 --ignore-daemonsets
node/kubenode01 cordoned

Then, on the worker node itself, upgrade kubeadm, the node configuration, and the kubelet:

$ apt-get install -y kubeadm=1.19.6-00
$ kubeadm upgrade node
$ apt-get install -y kubelet=1.19.6-00
$ systemctl restart kubelet

Back on the master, mark the node as schedulable again:

$ kubectl uncordon kubenode01
node/kubenode01 uncordoned

Repeat the same steps for each of the other worker nodes.
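
Once every node has been upgraded, all nodes should report the target version:

$ kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
kubemaster   Ready    master   100d   v1.19.6
kubenode01   Ready    <none>   100d   v1.19.6
kubenode02   Ready    <none>   100d   v1.19.6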
