Are you looking for an easy guide on how to install a Kubernetes cluster on Ubuntu 22.04 (Jammy Jellyfish)?
The step-by-step guide on this page will show you how to install a Kubernetes cluster on Ubuntu 22.04 using the kubeadm command.
Kubernetes, also known as K8s, is a free and open-source container orchestration tool. With the help of Kubernetes, we can achieve automated deployment, scaling and management of containerized applications.
A Kubernetes cluster consists of worker nodes, on which the application workload is deployed, and a set of master nodes, which are used to manage the worker nodes and pods in the cluster.
In this guide, we are using one master node and two worker nodes. The following are the system requirements for each node,
- Minimal install of Ubuntu 22.04
- 2 GB RAM or more
- 2 CPU cores / 2 vCPUs or more
- 20 GB free disk space or more on /var
- Sudo user with admin rights
- Internet connectivity on each node
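Optionally, you can give each node a quick sanity check against these requirements with the commands below, which show the available memory, CPU count and free space on /var,
$ free -h          # total memory should be 2 GB or more
$ nproc            # should print 2 or more
$ df -h /var       # at least 20 GB should be free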
Lab Setup
- Master Node: 192.168.1.173 – k8smaster.example.net
- First Worker Node: 192.168.1.174 – k8sworker1.example.net
- Second Worker Node: 192.168.1.175 – k8sworker2.example.net
Without any delay, let’s jump into the installation steps of the Kubernetes cluster.
Step 1) Set hostname and add entries in the hosts file
Log in to the master node and set its hostname using the hostnamectl command,
$ sudo hostnamectl set-hostname "k8smaster.example.net" $ exec bash
On the worker nodes, run
$ sudo hostnamectl set-hostname "k8sworker1.example.net" // 1st worker node $ sudo hostnamectl set-hostname "k8sworker2.example.net" // 2nd worker node $ exec bash
Add the following entries to the /etc/hosts file on each node,
192.168.1.173   k8smaster.example.net   k8smaster
192.168.1.174   k8sworker1.example.net  k8sworker1
192.168.1.175   k8sworker2.example.net  k8sworker2
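To confirm that the hostnames resolve correctly, you can optionally ping each node by its short name, for example from the master node,
$ ping -c 2 k8sworker1
$ ping -c 2 k8sworker2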
Step 2) Disable swap & add kernel settings
Execute the following swapoff and sed commands to disable swap. Make sure to run them on all the nodes.
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
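Optionally, verify that swap is now disabled; swapon should print nothing and free should report 0B of swap,
$ sudo swapon --show
$ free -h | grep -i swap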
Load the following kernel modules on all the nodes,
$ sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
Set the following kernel parameters for Kubernetes by running the tee command below,
$ sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Reload the above changes by running,
$ sudo sysctl --system
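If you want to double-check that the modules are loaded and the parameters took effect, you can run the commands below; all three values should be 1,
$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward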
Step 3) Install containerd runtime
In this guide, we are using the containerd runtime for our Kubernetes cluster. So, to install containerd, first install its dependencies.
$ sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Enable the Docker repository,
$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Now, run the following apt commands to install containerd,
$ sudo apt update
$ sudo apt install -y containerd.io
Configure containerd so that it uses systemd as the cgroup driver.
$ containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
$ sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
Restart and enable the containerd service,
$ sudo systemctl restart containerd
$ sudo systemctl enable containerd
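To confirm that containerd is running and that the systemd cgroup change was applied, you can optionally run,
$ sudo systemctl status containerd --no-pager
$ grep SystemdCgroup /etc/containerd/config.toml
The second command should print 'SystemdCgroup = true'.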
Step 4) Add apt repository for Kubernetes
Execute the following commands to add the apt repository for Kubernetes,
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/kubernetes-xenial.gpg
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Note: At the time of writing this guide, Xenial is the latest available Kubernetes repository, but once a repository is published for Ubuntu 22.04 (Jammy Jellyfish), you will need to replace the word ‘xenial’ with ‘jammy’ in the ‘apt-add-repository’ command.
Step 5) Install Kubernetes components kubectl, kubeadm & kubelet
Install the Kubernetes components kubectl, kubelet and kubeadm on all the nodes. Run the following set of commands,
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
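Before initializing the cluster, you can optionally confirm the versions installed on each node,
$ kubeadm version
$ kubectl version --client
$ kubelet --version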
Step 6) Initialize Kubernetes cluster with Kubeadm command
Now, we are all set to initialize the Kubernetes cluster. Run the following kubeadm command from the master node only.
$ sudo kubeadm init --control-plane-endpoint=k8smaster.example.net
Output of above command,
The output above confirms that the control plane has been initialized successfully. The output also provides a set of commands for interacting with the cluster, along with the command the worker nodes must run to join the cluster.
So, to start interacting with the cluster, run the following commands from the master node,
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
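Alternatively, if you are logged in as the root user, the kubeadm init output suggests pointing KUBECONFIG at the admin kubeconfig instead of copying it,
$ export KUBECONFIG=/etc/kubernetes/admin.conf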
Now, run the following kubectl commands to view the cluster and node status,
$ kubectl cluster-info
$ kubectl get nodes
Output,
Join both worker nodes to the cluster. The join command is already in the kubeadm init output; just copy and paste it on the worker nodes,
$ sudo kubeadm join k8smaster.example.net:6443 --token vt4ua6.wcma2y8pl4menxh2 \
   --discovery-token-ca-cert-hash sha256:0494aa7fc6ced8f8e7b20137ec0c5d2699dc5f8e616656932ff9173c94962a36
Output from both the worker nodes,
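Note: if you misplace the join command or the bootstrap token expires (tokens are valid for 24 hours by default), you can generate a fresh join command from the master node,
$ kubeadm token create --print-join-command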
Check the node status from the master node using the kubectl command,
$ kubectl get nodes
As we can see, the node status is ‘NotReady’. To make the nodes active, we must install a CNI (Container Network Interface) network add-on plugin such as Calico, Flannel or Weave-net.
Step 7) Install Calico Pod Network Add-on
Run the following kubectl command to install the Calico network plugin from the master node,
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
Output of above commands would look like below,
Verify the status of pods in kube-system namespace,
$ kubectl get pods -n kube-system
Output,
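The Calico pods can take a few minutes to pull images and become Ready. If you prefer to wait for them from the command line, something like the command below should work (assuming the k8s-app=calico-node label used by the Calico manifest),
$ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s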
Perfect, now check the node status as well.
$ kubectl get nodes
Great, the output above confirms that both nodes are now in the Ready state. We can now say that our Kubernetes cluster is functional.
Step 8) Test Kubernetes Installation
To test the Kubernetes installation, let’s try to deploy an nginx-based application and access it.
$ kubectl create deployment nginx-app --image=nginx --replicas=2
Check the status of nginx-app deployment
$ kubectl get deployment nginx-app
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app   2/2     2            2           68s
$
Expose the deployment as NodePort,
$ kubectl expose deployment nginx-app --type=NodePort --port=80
service/nginx-app exposed
$
Run the following commands to view the service status,
$ kubectl get svc nginx-app
$ kubectl describe svc nginx-app
Output of above commands,
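The NodePort assigned to the service (31246 in this example) is shown in the PORT(S) column; you can also extract it directly with a jsonpath query, for example,
$ kubectl get svc nginx-app -o jsonpath='{.spec.ports[0].nodePort}'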
Use the following command to access the nginx-based application,
$ curl http://<worker-node-ip-address>:31246
$ curl http://192.168.1.174:31246
Output,
Great, the above output confirms that the nginx-based application is accessible.
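Once you are done testing, you can optionally clean up the demo resources,
$ kubectl delete svc nginx-app
$ kubectl delete deployment nginx-app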
That’s all from this guide. I hope you have found it useful. Kindly post your queries and feedback in the comments section below.
Also Read: How to Configure Static IP Address on Ubuntu 22.04 LTS
after going through 100s of videos and different blog post , finally your document helped me to setup working kubernetes cluster….kudoz
Thanks Ashish !! for your feedback.
I am glad, this post helps you to deploy Kubernetes Cluster.
The commands adding keys for the apt repos should be changed to something like this:
sudo curl -fsSL 'https://download.docker.com/linux/ubuntu/gpg' | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
Hi Niels,
Thanks for sharing the updated command. As per your suggestion, I have modified command in article as well.
Hi Niels, when running the “kubeadm init” or “kubeadm join” i had this error
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To solve it, i had to set ip_forward content with 1 by following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
thank you it worked out
Could you add the aws cloud provider configuration into this setup; so that we can utilize ebs persistent storage and AWS ELB.
WoW , same here after search aloot on google and try many tutorials , this is finaly work, Good job and many thanks!
I currently have Ubuntu 22.04 installed, and I am planning on installing GitLab. I’d want to have a possibly basic Kubernetes environment. I am quite used to kubectl, but I am not confident whether to use MicroK8s, Rancher, MiniKube, or something else. Do you have any opinion about my situation? Thank you so much for answering.
Thanks,
This article helped me to successfully setup the kubernetes cluster.
Finally, a tutorial that works! Thanks!!! Other tutorials did not work
Thank you! At last, one tutorial that works fine!
Again, thanks.
My last 2 days of installation struggle end up with your blog .. Thanks to your fantastic work…
Excellent guide. Works perfectly. I had to install AppArmor as it is not installed in lxd VM images.
You used “step 6)” twice.
This is a great tutorial, thank you for putting it together! I have managed to successfully create a cluster using it, which is awesome! Really appreciate the time you’ve taken here.
I don’t know why my pods is having crash status on calico
$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-798cc86c47-g95km 1/1 Running 0 6m54s 172.16.16.129 k8smaster
calico-node-2jwsx 1/1 Running 0 6m54s 192.168.60.10 k8smaster
calico-node-5w2bj 0/1 Init:CrashLoopBackOff 5 (2m34s ago) 6m54s 192.168.60.11 k8sworker1
calico-node-8rqx5 0/1 Init:CrashLoopBackOff 6 (22s ago) 6m54s 192.168.60.12 k8sworker2
Awesome! Its worked after lot of failures, always problem facing in creating pod networks, Thank you so much
Thank you so much, after too many videos, I found this doc very good
Nice tutorial, you really help me 🙂
Finally – It works as described. After trying to migrate from AWS to baremetal for a few weeks this worked. I really appreciate you!
Hi Pradeep,
I have been trying to Set-up the K8S Cluster for the last 10 days. I have watched many youtube videos as well for that. Finally, today I set up the K8S Cluster with the help of your amazing step-to-step guide.
Thank you so much for your efforts.
Thanks for putting this guide together. Spent hours on various sites following different instructions with no success.
You guide made it nice and easy.
Great after 3 days, i am able to set up kubeadm set up using your article.
THE BEST TUTORIAL BY FAR, IT WORKS GRATEFUL, THE LINUX FOUNDATION ONE DIDN’T WORK FOR ME BUT THIS ONE DOES, THANK YOU
Just worked! thank you
so much helped for newbie
great article…. after going through so many website this is good one
Hi, nice tutorial, could you please help me, i have been follow the step until install calico, but when i run “$ kubectl get pods -n kube-system”, the status appears there are pending and running, see below :
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7bdbfc669-cwdk5 0/1 Pending 0 8m
calico-node-qk4vp 0/1 Init:ImagePullBackOff 0 8m
calico-node-vbjg8 0/1 Init:ImagePullBackOff 0 8m
coredns-787d4945fb-lzblv 0/1 Pending 0 51m
coredns-787d4945fb-rqnkc 0/1 Pending 0 51m
etcd-k8smaster.alfatih.v2 1/1 Running 0 51m
kube-apiserver-k8smaster.alfatih.v2 1/1 Running 0 51m
kube-controller-manager-k8smaster.alfatih.v2 1/1 Running 0 51m
kube-proxy-52r46 1/1 Running 0 11m
kube-proxy-w5hdz 1/1 Running 0 51m
kube-scheduler-k8smaster.alfatih.v2 1/1 Running 0 51m
================================================================
FYI, i run this K8S on virtual box.
great guide ! easy to understand and every command worked like it should. big thanks
I am running into an issue with the nginx-app is not being exposed properly. There is no “External-IP”.
kubectl get svc nginx-app
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-app NodePort 10.109.94.99 80:30817/TCP 6m42s
Can anyone assist?
Thanks Pradeep for this wonderful document each and everything well explained and documented.
Much Appreciated for your efforts.
Hey pradeep, thanks for this document . it’s really help me to setup the cluster. all other doc make me sick .
Thanks for the step by step guide, clear and concise.
Thanks for taking to time to post the article.
It was my second attempt at installing K8s and the second post I looked at, which is quite good considering the amount of posts out there.
Now to get the dashboard installed and setup for using kvm instead of containers.
excellent tutorial, Last week works perfectly, but today, something happened with the calico step.
$ curl 'https://projectcalico.docs.tigera.io/manifests/calico.yaml' -O
$ kubectl apply -f calico.yaml
can you help?
Hi,
Use following command to install calico,
$ kubectl apply -f 'https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml'
I have updated the same in post too.
works like a charm!! Thank you! Cheers from Mexico!
hi, after running kubeadm init, i ran kubectl cluster-info and it worked.
but after a few minutes, i ran again kubectl cluster-info, error occured “k8smaster.example.net:6443 was refused – did you specify the right host or port?”
can you help me? i have followed all instruction in this tutorial.
i’m using ubuntu 22.04, running on EC2 Instance (AWS)
Try running
sudo swapoff -a && sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Can anyone help me with error.
$ sudo kubeadm init --control-plane-endpoint=k8smaster.example.net
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
It seems like you have already tried installing a k8s cluster on your Ubuntu system; because of that, port 6443 is already in use.
$ sudo kubeadm reset cleanup-node
and then try init again
Hello,
I get this error on the worker nodes when I run `sudo kubeadm join`
[failure loading certificate for CA: couldn’t load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn’t load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn’t load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn’t load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]
What can I do to resolve this?
I had this problem too, then I realized the kubeadm init command output listed two separate join commands. The first was to add control-plane nodes, and that was the command I’d copied. Make sure you get the second command, which does not have the “--control-plane” parameter
Thanks very much.
Just executed the steps are in there, and all worked great!!! thanks
Excellent, this saved me days of time trying to fix coredns issue for ubuntu18.04.