How to Create a 3-Node Kubernetes Cluster Using kubeadm and containerd
In this example, I am going to create a 3-node Kubernetes cluster. As a prerequisite, I have created 3 servers on Google Cloud with the names:
1) Ubuntu1 - Master Node/Control Plane Node
2) Ubuntu2 - Worker Node
3) Ubuntu3 - Worker Node
2) Install the kernel modules on all 3 nodes.
Create the kernel module configuration file for containerd by running the command below:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
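The numbering jumps from step 2 to step 8, so the intermediate steps are not shown in the original post. On a stock Ubuntu install, the usual sequence between these two points is to load the modules, set the networking sysctl parameters Kubernetes requires, and install containerd itself. A sketch of those typical steps, assuming Ubuntu's default package repositories:

```shell
# Load the modules now (the containerd.conf file created above makes
# them load automatically on reboot)
sudo modprobe overlay
sudo modprobe br_netfilter

# Networking prerequisites for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Install containerd and create its configuration directory
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
```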
8) Now we will generate the default containerd configuration and save it to the config file at /etc/containerd/config.toml:
sudo containerd config default | sudo tee /etc/containerd/config.toml
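One common tweak not shown in the original post: on nodes where systemd is the init system, most guides also set SystemdCgroup = true in the generated config.toml, so that containerd and the kubelet agree on the cgroup driver. A hedged one-liner for that edit:

```shell
# Enable the systemd cgroup driver in containerd's generated config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```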
9) Restart containerd so that it picks up the new configuration file:
sudo systemctl restart containerd
10) Now we will disable swap by running the command below (the kubelet does not support running with swap enabled):
sudo swapoff -a
11) Now we will keep swap disabled across reboots by commenting out the swap entry in /etc/fstab:
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
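To see what this sed expression does, here it is applied to a sample fstab line (the UUID is made up for illustration): any line containing " swap " gets a # prepended, which comments it out.

```shell
# A sample swap entry, as it might appear in /etc/fstab
line='UUID=abcd-1234 none swap sw 0 0'
# The same substitution the guide runs against /etc/fstab
echo "$line" | sed '/ swap / s/^\(.*\)$/#\1/g'
# Output: #UUID=abcd-1234 none swap sw 0 0
```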
12) Now we will install the dependency packages
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
13) Now we will add the Kubernetes GPG key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
14) Now we will add Kubernetes to the repository list
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
15) Now we will update the package list:
sudo apt-get update
16) Now we will install the Kubernetes packages, pinned to version 1.20.1:
sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
17) Now we will hold the packages so they are not upgraded automatically:
sudo apt-mark hold kubelet kubeadm kubectl
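To confirm the install and the hold took effect, the following checks can be run (flags as they exist in the 1.20-era tooling installed above; exact output will vary by environment):

```shell
# Show the held packages; kubelet, kubeadm and kubectl should all be listed
apt-mark showhold
# Confirm the installed versions match 1.20.1
kubeadm version -o short
kubectl version --client --short
```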
18) Now we will log in to the 2 worker nodes and perform all the above steps.
In my example, the worker nodes are Ubuntu2 and Ubuntu3.
Step 1 - Initialize the cluster
Initialize the Kubernetes cluster on the control plane node using kubeadm (note: this is performed only on the Control Plane Node). In my example, the control plane node is the Ubuntu1 server.
sudo kubeadm init --pod-network-cidr 192.168.0.0/16
Step 2 - Set kubectl access
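The original post does not show the commands for this step. These are the standard commands printed at the end of the kubeadm init output, run as a regular user on the control plane node:

```shell
# Copy the admin kubeconfig generated by kubeadm into the user's home
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```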
Step 3 - Test access to the cluster
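The numbering then jumps to Step 8, so the intervening steps are not shown. Those steps normally cover installing a pod network add-on and joining the workers. With a pod CIDR of 192.168.0.0/16, the usual companion CNI is Calico; the manifest URL below and the join-command placeholders are illustrative, not taken from the original post:

```shell
# On the control plane node: install a pod network add-on (Calico shown,
# since it matches the 192.168.0.0/16 pod CIDR passed to kubeadm init)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# On the control plane node: print a fresh join command if the one
# shown by kubeadm init has been lost
kubeadm token create --print-join-command

# On EACH worker node: run the join command printed above, e.g.
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```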
Step 8 - On the Control Plane Node, view the cluster status. It might take a while for the cluster to come up.
We can check the status by running the following command on the control plane node (Ubuntu1):
kubectl get nodes
Note: Please open the required ports in the GCP firewall.
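The port list itself did not survive in the original post. The standard port requirements documented for kubeadm clusters of this era are: 6443 (API server), 2379-2380 (etcd), and 10250-10252 (kubelet, scheduler, controller manager) on the control plane; 10250 (kubelet) and 30000-32767 (NodePort services) on the workers. The gcloud commands below are an illustrative sketch; the rule names are assumptions, and in practice you would also scope the rules with --network and --target-tags:

```shell
# Control plane node (Ubuntu1)
gcloud compute firewall-rules create k8s-control-plane \
    --allow tcp:6443,tcp:2379-2380,tcp:10250-10252

# Worker nodes (Ubuntu2, Ubuntu3)
gcloud compute firewall-rules create k8s-workers \
    --allow tcp:10250,tcp:30000-32767
```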