Kubernetes (k8s) is a free and open-source container orchestration tool. It is used for deploying, scaling and managing containerized applications.
In this guide, we will cover how to install a Kubernetes cluster on Ubuntu 20.04 LTS Server (Focal Fossa) using the kubeadm utility. In my lab setup, I have used three Ubuntu 20.04 machines.
Following are the system requirements on each Ubuntu system.
- Minimum of 2 GB RAM
- 2 Core (2 vCPUs)
- 15 GB Free Space on /var
- Privileged user with sudo rights
- Stable Internet Connection
Following are the details of my lab setup:
- Machine 1 (Ubuntu 20.04 LTS Server) – K8s-master – 192.168.1.40
- Machine 2 (Ubuntu 20.04 LTS Server) – K8s-node-0 – 192.168.1.41
- Machine 3 (Ubuntu 20.04 LTS Server) – K8s-node-1 – 192.168.1.42
Now let’s jump into the Kubernetes installation steps.
Step 1) Set hostname and add entries in /etc/hosts file
Use the hostnamectl command to set the hostname on each node; examples are shown below:
$ sudo hostnamectl set-hostname "k8s-master"    // Run this command on master node
$ sudo hostnamectl set-hostname "k8s-node-0"    // Run this command on node-0
$ sudo hostnamectl set-hostname "k8s-node-1"    // Run this command on node-1
Add the following entries to the /etc/hosts file on each node:
192.168.1.40    k8s-master
192.168.1.41    k8s-node-0
192.168.1.42    k8s-node-1
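Once the entries are in place, you can optionally verify name resolution by pinging each node by hostname, for example from the master:
$ ping -c 2 k8s-node-0
$ ping -c 2 k8s-node-1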
Step 2) Install Docker (Container Runtime) on all 3 nodes
Login to each node and run the following commands to install docker:
$ sudo apt update
$ sudo apt install -y docker.io
Create the file ‘/etc/docker/daemon.json’ to set systemd as the cgroup driver (this avoids a cgroup driver warning from kubeadm later) and add the following content to it:
{ "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" }
Now start and enable the docker service on each node, then restart it once so that the settings from daemon.json take effect:
$ sudo systemctl enable docker.service --now
$ sudo systemctl restart docker
Run the following commands to verify the status of the docker service and its version:
$ systemctl status docker
$ docker --version
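Since we set "native.cgroupdriver=systemd" in daemon.json, it is also worth confirming that Docker picked it up; the command below should print "Cgroup Driver: systemd":
$ docker info | grep -i 'cgroup driver'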
Step 3) Disable swap and enable IP forwarding on all nodes
To disable swap, edit the /etc/fstab file and comment out the line that contains the swap partition or swap file entry.
$ sudo vi /etc/fstab
Save & exit the file
Run the swapoff command to disable swap on the fly:
$ sudo swapoff -a
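You can confirm that swap is now disabled using the free command; the Swap line should show all zeros:
$ free -h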
To enable IP forwarding permanently, edit the file “/etc/sysctl.conf”, look for the line “net.ipv4.ip_forward=1” and uncomment it. After saving the changes, execute the following command:
$ sudo sysctl -p
net.ipv4.ip_forward = 1
$
Step 4) Install kubectl, kubelet and kubeadm on all nodes
Run the following commands on all 3 nodes to install the kubectl, kubelet and kubeadm utilities:
$ sudo apt install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
Note: At the time of writing this article, only the Ubuntu 16.04 (Xenial Xerus) Kubernetes repository was available. Once a Kubernetes repository becomes available for Ubuntu 20.04, replace the word xenial with focal in the above ‘apt-add-repository’ command.
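Optional but recommended: put the three packages on hold so that a routine ‘apt upgrade’ does not move them to a newer version unintentionally (kubeadm-managed clusters should be upgraded deliberately):
$ sudo apt-mark hold kubelet kubeadm kubectl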
Step 5) Initialize Kubernetes Cluster using kubeadm
Login to your master node (k8s-master) and run the ‘kubeadm init‘ command below to initialize the Kubernetes cluster:
$ sudo kubeadm init
Once the cluster is initialized successfully, kubeadm prints the steps for configuring kubectl access for a regular user, along with a ‘kubeadm join’ command for the worker nodes.
To start using the cluster as a regular user, execute the following commands; they are part of the kubeadm init output, so you can simply copy and paste them.
pkumar@k8s-master:~$ mkdir -p $HOME/.kube
pkumar@k8s-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
pkumar@k8s-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now join the worker nodes (k8s-node-0/1) to the cluster. The command to join the cluster is included in the kubeadm init output; copy the “kubeadm join” command and run it on both worker nodes.
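In case the join command has scrolled out of your terminal, you can regenerate it at any time from the master node:
$ sudo kubeadm token create --print-join-command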
Login to k8s-node-0 and run the following command:
pkumar@k8s-node-0:~$ sudo kubeadm join 192.168.1.40:6443 --token b4sfnc.53ifyuncy017cnqq --discovery-token-ca-cert-hash sha256:5078c5b151bf776c7d2395cdae08080faa6f82973b989d29caaa4d58c28d0e4e
Login to k8s-node-1 and run the following command to join the cluster:
pkumar@k8s-node-1:~$ sudo kubeadm join 192.168.1.40:6443 --token b4sfnc.53ifyuncy017cnqq --discovery-token-ca-cert-hash sha256:5078c5b151bf776c7d2395cdae08080faa6f82973b989d29caaa4d58c28d0e4e
From the master node, run the “kubectl get nodes” command to verify the node status:
pkumar@k8s-master:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   27m     v1.18.3
k8s-node-0   NotReady   <none>   8m3s    v1.18.3
k8s-node-1   NotReady   <none>   7m19s   v1.18.3
pkumar@k8s-master:~$
As we can see, both worker nodes and the master node have joined the cluster, but the status of each node is “NotReady”. To make the status “Ready”, we must deploy a Container Network Interface (CNI) based pod network add-on such as Calico, kube-router or weave-net. As the name suggests, a pod network add-on allows pods to communicate with each other.
Step 6) Deploy Calico Pod Network Add-on
From the master node, run the following command to install the Calico pod network add-on:
pkumar@k8s-master:~$ kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Once it has been deployed successfully, the node status will become Ready. Let’s re-run the kubectl command to verify the node status:
pkumar@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   39m   v1.18.3
k8s-node-0   Ready    <none>   19m   v1.18.3
k8s-node-1   Ready    <none>   19m   v1.18.3
pkumar@k8s-master:~$
Run the following command to verify the status of pods across all namespaces:
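pkumar@k8s-master:~$ kubectl get pods --all-namespaces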
Perfect, if all pods across the namespaces show a Running status, it confirms that the cluster is in a healthy state. Let’s try to deploy pods, a service and a deployment to see whether our Kubernetes cluster is working fine or not.
Note: To enable the bash completion feature on your master node, execute the following:
pkumar@k8s-master:~$ echo 'source <(kubectl completion bash)' >>~/.bashrc
pkumar@k8s-master:~$ source ~/.bashrc
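Optionally, you can also define a short alias for kubectl and hook it into the same completion function, as described in the kubectl completion documentation:
pkumar@k8s-master:~$ echo 'alias k=kubectl' >>~/.bashrc
pkumar@k8s-master:~$ echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
pkumar@k8s-master:~$ source ~/.bashrc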
Step 7) Test and Verify Kubernetes Cluster
Let’s create a deployment named nginx-web with the nginx container image in the default namespace. Run the following kubectl command from the master node:
pkumar@k8s-master:~$ kubectl create deployment nginx-web --image=nginx
deployment.apps/nginx-web created
pkumar@k8s-master:~$
Run the following commands to verify the status of the deployment:
pkumar@k8s-master:~$ kubectl get deployments.apps
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   1/1     1            1           41s
pkumar@k8s-master:~$ kubectl get deployments.apps -o wide
NAME        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
nginx-web   1/1     1            1           56s   nginx        nginx    app=nginx-web
pkumar@k8s-master:~$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-web-7748f7f978-nk8b2   1/1     Running   0          2m50s
pkumar@k8s-master:~$
As we can see, the deployment has been created successfully with the default replica count (1).
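If you are curious what manifest such an imperative command corresponds to, you can render it with a client-side dry run without changing anything in the cluster:
pkumar@k8s-master:~$ kubectl create deployment nginx-web --image=nginx --dry-run=client -o yaml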
Let’s scale up the deployment and set the replica count to 4. Run the following command:
pkumar@k8s-master:~$ kubectl scale --replicas=4 deployment nginx-web
deployment.apps/nginx-web scaled
pkumar@k8s-master:~$
Now verify the status of your deployment using the following commands:
pkumar@k8s-master:~$ kubectl get deployments.apps nginx-web
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   4/4     4            4           13m
pkumar@k8s-master:~$ kubectl describe deployments.apps nginx-web
The above confirms that the nginx based deployment has been scaled up successfully.
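Scaling down works the same way; for example, the following would shrink the deployment back to two replicas:
pkumar@k8s-master:~$ kubectl scale --replicas=2 deployment nginx-web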
Let’s perform one more test: create a pod named “http-web” and expose it via a service named “http-service” on port 80 with NodePort as the type.
Run the following command to create the pod:
pkumar@k8s-master:~$ kubectl run http-web --image=httpd --port=80
pod/http-web created
pkumar@k8s-master:~$
Create a service using the following command to expose the above pod on port 80:
pkumar@k8s-master:~$ kubectl expose pod http-web --name=http-service --port=80 --type=NodePort
service/http-service exposed
pkumar@k8s-master:~$ kubectl get service http-service
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
http-service   NodePort   10.101.152.138   <none>        80:31098/TCP   10s
pkumar@k8s-master:~$
Get the node IP or hostname on which the http-web pod is deployed, then access the webserver via the NodePort (31098):
pkumar@k8s-master:~$ kubectl get pods http-web -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
http-web   1/1     Running   0          59m   172.16.11.196   k8s-node-0   <none>           <none>
pkumar@k8s-master:~$ curl http://k8s-node-0:31098
<html><body><h1>It works!</h1></body></html>
pkumar@k8s-master:~$
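Note that a NodePort service opens the port on every node in the cluster, not only on the node where the pod runs, so the same page is also reachable via any other node’s IP from our lab setup, for example:
pkumar@k8s-master:~$ curl http://192.168.1.42:31098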
Perfect, it is working as expected. This concludes the article and confirms that we have successfully set up a Kubernetes cluster on Ubuntu 20.04 LTS Server.
Great article! It works. 🙂 Thank you.
Thanks for the great guide! I faced some problems, but with some research they were no problem. I’m going to describe them here for people that may face the same ones:
1. Error while “kubeadm init”
I have a virtual server with 1 core and 2 GB of RAM, but k8s wants 2 cores. Besides upgrading the machine, the other solution was to add the flag “--ignore-preflight-errors=NumCPU” to the kubeadm init command. I don’t know if this can lead to problems, but it’s a quick and dirty solution if you only want to train/test.
2. Nginx deployment pending
I did not want to deploy two virtual machines, so I deployed the control plane and the nginx deployment on one machine, but the pod never saw the light of day. The solution was to permit the master/control plane to schedule new pods on itself, with the following command:
“kubectl taint nodes --all node-role.kubernetes.io/master-”
This possibility is disabled by default because of security concerns on Kubernetes’ part. At the moment I am a beginner and I am testing k8s, so it’s no problem for me, but if you want to run it in a prod environment I would invest in another machine.
I hope that this comment helps some people that face similar problems.
Thank you very much, using this I was able to run a Kubernetes cluster.
As per my understanding: 1) You are using Oracle VirtualBox to create these 3 VMs. 2) You keep all 3 VMs in ‘Bridged’ mode and on DHCP. 3) You did not configure a host-only network.
Please let me know if my understanding is wrong. Thank you.
In my case the Calico pods are not able to start. I am using Oracle VirtualBox. My Kubernetes VMs are in bridged network mode. On the first attempt, while the calico pod was being configured, my connectivity to the master VM was lost, and I had to reboot the VM.
You can manually set a static IP beyond your router’s DHCP range (check your router settings) for each VM, so a restart won’t affect the LAN IP address.
great article, thanks!
While applying the Calico manifest with kubectl, it says:
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Perhaps there’s a newer version? I didn’t want to mess it all up, so I applied the current configuration and it works really well so far. I’ll continue with the tutorial, thanks so much!
You can download the yaml file locally, make the suggested edits from the warning, and then use kubectl to apply the file.
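For example, a rough sketch using the same v3.14 manifest URL from the article:
$ curl -LO https://docs.projectcalico.org/v3.14/manifests/calico.yaml
(edit calico.yaml as suggested by the warning)
$ kubectl apply -f calico.yaml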
A simple and great guide. Many thanks!
Thank You, Good Guide
Thanks Pradeep. I had issues with my network and IP addresses but eventually I managed to resolve all the issues along with your other instructions.
Thanks! It works!
Many thanks, Pradeep. In my environment, there are two amd64 machines (k8s-master with 16 GB RAM, and k8s-node-0 with 8 GB RAM) and one Raspberry Pi 3B (k8s-node-1 with 1 GB RAM), all running the latest Ubuntu 20.04.2 server 64-bit. I succeeded in joining k8s-node-0 and ran the tasks of your tutorial until the end:
$ curl 'http://10.100.205.206:80'
It works!
But the join command on k8s-node-1 (raspberry pi 3b) did not work:
$ sudo kubeadm join 192.168.x.y:6443 --token jdxsul… --discovery-token-ca-cert-hash sha256:a2958ee…aeaf2bee4f68
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-1028-raspi
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: enabled (as module)
CONFIG_BLK_DEV_DM: enabled
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: missing
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
[WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: missing required cgroups: memory
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Is it possible to troubleshoot my problem, or should I choose another system?
[ERROR SystemVerification]: missing required cgroups: memory. The message is clear:
you have a Raspberry Pi with 1 GB of RAM, and you need a minimum of 2 GB.
You are awesome, mate. It works!
Thanks, it works.
Awesome! Works on a single Ubuntu 20.04 machine with 3 LXD virtual machines.