Sunday, 14 March 2021

How to create a 3 node Cluster with Kubeadm in Kubernetes

In my example, I am going to create a 3 node cluster in Kubernetes. As a pre-requisite, I have created 3 servers on Google Cloud with the following names:

1) Ubuntu1 - Master Node/Control Plane Node

2) Ubuntu2 - Worker Node

3) Ubuntu3 - Worker Node


1) I am able to access all the servers via their public IPs. We have 3 servers here, and I have logged into the master/control plane server (Ubuntu1).

2) Configure the kernel modules on all 3 nodes

Create the configuration file for containerd by running the command below:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

This configures the overlay and br_netfilter modules to be loaded automatically at boot.

3) Now we will load the modules immediately by running the commands:

sudo modprobe overlay
sudo modprobe br_netfilter
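As an optional sanity check (assuming lsmod is available, which it is on standard Ubuntu images), the loop below reports whether each required module is loaded:

```shell
# Report whether each required kernel module is currently loaded
for m in overlay br_netfilter; do
  if lsmod | grep -q "^${m} "; then
    echo "${m}: loaded"
  else
    echo "${m}: not loaded"
  fi
done
```

Both modules should report "loaded" before continuing.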

4) Set system configurations for Kubernetes networking:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

5) Now we will apply the settings by running the command:

sudo sysctl --system
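Optionally, the applied values can be spot-checked (sysctl from procps is assumed; note the net.bridge.* keys only exist while br_netfilter is loaded):

```shell
# Each key should report a value of 1 after `sudo sysctl --system`
for key in net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables; do
  sysctl "$key" 2>/dev/null || echo "$key: not available (is br_netfilter loaded?)"
done
```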

6) Now we will update the package repository and install containerd

sudo apt-get update && sudo apt-get install -y containerd

7) Now we will create the directory for the containerd configuration file:
sudo mkdir -p /etc/containerd

8) Now we will generate the default containerd configuration and save it to the newly created file:

sudo containerd config default | sudo tee /etc/containerd/config.toml

9) Restart containerd so that it picks up the new configuration file:

sudo systemctl restart containerd

10) Now we will disable swap by running the command:

sudo swapoff -a

11) Now we will disable swap on startup in /etc/fstab by running the command:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
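To see what this sed expression does without touching the real /etc/fstab, it can be tried on a throwaway copy (the file contents below are made up for illustration):

```shell
# Create a sample fstab with a swap entry (hypothetical contents)
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same sed expression as above: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# The swap line is now prefixed with '#'; the root filesystem line is untouched
cat /tmp/fstab.demo
```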

12) Now we will install the dependency packages

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

13) Now we will add the GPG key

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

14) Now we will add Kubernetes to the repository list

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

15) Now we will update all the packages

sudo apt-get update

16) Now we will install the Kubernetes packages, pinned to version 1.20.1

sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00

17) Now we will hold the packages to prevent automatic updates

sudo apt-mark hold kubelet kubeadm kubectl
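To confirm the hold took effect (an optional check), apt-mark can list the held packages; kubelet, kubeadm and kubectl should all appear in the output:

```shell
# List packages currently held back from automatic upgrades
apt-mark showhold

# Later, when a deliberate upgrade is wanted, the hold can be released first:
# sudo apt-mark unhold kubelet kubeadm kubectl
```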

18) Now we will log in to the 2 worker nodes and perform all of the above steps

In my example, the worker nodes are Ubuntu2 and Ubuntu3.


Step 1 - Initialize the cluster

Initialize the Kubernetes cluster on the control plane node using kubeadm (Note: this is only performed on the Control Plane Node). In my example, my control plane node is the Ubuntu1 server.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

(192.168.0.0/16 is Calico's default pod CIDR; adjust it if it clashes with your network.)

Step 2 - Set up kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 3 - Test access to the cluster

kubectl version

Step 4 - Install the Calico Network Add-On

On the control plane node (Ubuntu1), we will install the Calico network add-on.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Step 5 - Check the status of the Calico components

kubectl get pods -n kube-system

Step 6 - Join the worker nodes to the cluster

On the Control Plane Node, create the token and copy the kubeadm join command. (NOTE: The join command is also printed in the output of kubeadm init.)

kubeadm token create --print-join-command

Step 7 - Now run the output of the above command on the 2 worker nodes, Ubuntu2 and Ubuntu3

sudo kubeadm join <control-plane-ip>:6443 --token 4i4md9.oziwagybigu5418f --discovery-token-ca-cert-hash sha256:8c7260dbb1444f043d8f9dc08c7e9a88c824f2daa257297782b6658411f92dc1

(Replace <control-plane-ip> with the IP address of your control plane node; the command printed by kubeadm includes it, along with your own token and hash.)

Once the nodes have joined the cluster, a confirmation message is printed on each worker node.

Step 8 - On the Control Plane Node, view the cluster status. It might take a while for all nodes to become Ready.

We can check the status by running the following command on the control plane node (Ubuntu1):

kubectl get nodes
