How to Install Kubernetes Cluster on Rocky Linux 8

Hello techies, as we know Kubernetes (k8s) is a free and open-source container orchestration system. It is used for automating the deployment and management of containerized applications. In this guide, we will cover how to install a Kubernetes cluster on Rocky Linux 8 with kubeadm, step by step.

Minimum System Requirement for Kubernetes

  • 2 vCPUs or more
  • 2 GB RAM or more
  • Swap disabled
  • At least one NIC card
  • Stable Internet Connection
  • One regular user with sudo privileges.

For demonstration, I am using the following systems:

  • One Master Node / Control Plane (2 GB RAM, 2 vCPU, 1 NIC Card, Minimal Rocky Linux 8 OS)
  • Two Worker Nodes (2 GB RAM, 2 vCPU, 1 NIC Card, Minimal Rocky Linux 8 OS)
  • Hostname of Master Node – control-node
  • Hostname of Worker Nodes – worker-node1, worker-node2

Without further ado, let’s deep dive into Kubernetes installation steps.

Note: These steps are also applicable for RHEL 8 and AlmaLinux OS.

Step 1) Set Hostname and update hosts file

Use the hostnamectl command to set the hostname on the control node and the worker nodes.

Run beneath command on control node

$ sudo hostnamectl set-hostname "control-node"
$ exec bash

Execute following command on worker node1

$ sudo hostnamectl set-hostname "worker-node1"
$ exec bash

And on worker node 2

$ sudo hostnamectl set-hostname "worker-node2"
$ exec bash

Add entries to the /etc/hosts file on the control and worker nodes that map each node's IP address to its hostname: control-node, worker-node1 and worker-node2.
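As a sketch, assuming the three nodes sit on a 192.168.1.0/24 network (the addresses below are placeholders; substitute the real IPs of your nodes), the entries would look like:

```shell
# Append hostname mappings on every node (placeholder IPs -- replace with your own)
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.1.140   control-node
192.168.1.141   worker-node1
192.168.1.142   worker-node2
EOF
```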

Step 2) Disable Swap and Set SELinux in permissive mode

Disable swap so that kubelet can work properly. Run the below commands on all the nodes to disable it.

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Run beneath sed command on all the nodes to set SELinux in permissive mode

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Step 3) Configure Firewall Rules on Master and Worker Nodes

On the control plane, the following ports must be allowed in the firewall:

  • 6443/tcp – Kubernetes API server
  • 2379-2380/tcp – etcd server client API
  • 10250/tcp – kubelet API
  • 10251/tcp – kube-scheduler
  • 10252/tcp – kube-controller-manager

To allow above ports in control plane, run

$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=2379-2380/tcp
$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=10251/tcp
$ sudo firewall-cmd --permanent --add-port=10252/tcp
$ sudo firewall-cmd --reload
$ sudo modprobe br_netfilter
$ sudo sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
$ sudo sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"

On the worker nodes, the following ports must be allowed in the firewall:

  • 10250/tcp – kubelet API
  • 30000-32767/tcp – NodePort Services

To allow above ports on the worker nodes, run

$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=30000-32767/tcp                                                  
$ sudo firewall-cmd --reload
$ sudo modprobe br_netfilter
$ sudo sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
$ sudo sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"
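Note that the modprobe and /proc writes above do not survive a reboot. A minimal way to persist them on all the nodes, assuming standard systemd paths, is to drop config fragments under /etc/modules-load.d and /etc/sysctl.d:

```shell
# Load br_netfilter automatically at boot
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Persist the bridge and forwarding sysctls
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctls immediately without a reboot
sudo sysctl --system
```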

Step 4) Install Docker on Master and Worker Nodes

Install Docker on the master and worker nodes. Here Docker will provide the container runtime. To install the latest Docker, first we need to enable its repository by running the following command.

$ sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Now, run below dnf command on all the nodes to install docker-ce (docker community edition)

$ sudo dnf install docker-ce -y



Once Docker and its dependencies are installed, start and enable its service by running the following commands

$ sudo systemctl start docker
$ sudo systemctl enable docker
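Before moving on, it is worth confirming that Docker is running and checking which cgroup driver it uses, since Step 6 assumes cgroupfs:

```shell
# Confirm the service is active
sudo systemctl is-active docker
# Inspect the cgroup driver; a default Docker install reports cgroupfs
sudo docker info --format '{{.CgroupDriver}}'
```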

Step 5) Install kubelet, Kubeadm and kubectl

Kubeadm is the utility through which we will install the Kubernetes cluster. Kubectl is the command line utility used to interact with the Kubernetes cluster. Kubelet is the component that runs on all the nodes and performs tasks like starting and stopping pods or containers.

To install kubelet, kubeadm and kubectl on all the nodes, first we need to enable the Kubernetes repository.

Perform beneath commands on master and worker nodes.

$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

$ sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

After installing above packages, enable kubelet service on all the nodes (control and worker nodes), run

$ sudo systemctl enable --now kubelet

Step 6) Install Kubernetes Cluster with Kubeadm

While installing the Kubernetes cluster, we should make sure that the cgroup driver of the container runtime matches the cgroup driver of the kubelet. Typically, Docker uses cgroupfs, so we must instruct kubeadm to use cgroupfs as the cgroup driver of the kubelet. This can be done by passing a YAML file to the kubeadm command.

Create kubeadm-config.yaml file on control plane with following content

$ vi kubeadm-config.yaml
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs

Note: Replace the Kubernetes version as per your setup.

Now we are all set to initialize the cluster. Run the below kubeadm command from the control node,

$ sudo kubeadm init --config kubeadm-config.yaml

If everything goes well, the output of the above command ends with a "Your Kubernetes control-plane has initialized successfully!" message, followed by the commands to set up kubectl for a regular user and the kubeadm join command for the worker nodes. This confirms that the cluster has been initialized successfully.

Execute the following commands to allow a regular user to interact with the cluster; these commands are also shown in the output above.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
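At this point the control node typically reports NotReady, because no pod network add-on is installed yet. Illustrative output (the AGE value will vary) looks like:

```shell
$ kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
control-node   NotReady   control-plane,master   2m    v1.23.4
```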


To bring the nodes into the Ready state and to enable the cluster DNS service (CoreDNS), install a pod network add-on (CNI – Container Network Interface). Pods will start communicating with each other once the pod network add-on is installed. In this guide, I am installing Calico as the network add-on. Run the beneath kubectl command from the control plane.

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml



After the successful installation of the Calico network add-on, the control node will become Ready and the pods in the kube-system namespace will come up and become available.


Now, next step is to join worker nodes to the cluster.

Step 7) Join Worker Nodes to Cluster

After the successful initialization of the Kubernetes cluster, the command to join any worker node to the cluster is shown in the output. Copy that command and paste it on the worker nodes. In my case, the command is:

$ sudo kubeadm join <control-node-ip>:6443 --token jecxxg.ac3d3rpd4a7xbxx4 --discovery-token-ca-cert-hash sha256:1e4fbed060aafc564df75bc776c18f6787ab91685859e74d43449cf5a5d91d86

Run the above command on both the worker nodes.
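If the join command scrolls away or the token (valid for 24 hours by default) expires, you can generate a fresh, complete join command from the control node:

```shell
# Print a new kubeadm join command, including a newly created token
sudo kubeadm token create --print-join-command
```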



Verify the status of both worker nodes from control-plane, run

[sysops@control-node ~]$ kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
control-node   Ready    control-plane,master   49m     v1.23.4
worker-node1   Ready    <none>                 5m18s   v1.23.4
worker-node2   Ready    <none>                 3m57s   v1.23.4
[sysops@control-node ~]$
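As an optional smoke test (using the public nginx image; the deployment name here is arbitrary), you can deploy a workload and expose it through a NodePort to confirm that scheduling and networking work end to end:

```shell
# Create a test deployment with two replicas and expose it on a NodePort
kubectl create deployment nginx-web --image=nginx --replicas=2
kubectl expose deployment nginx-web --type=NodePort --port=80

# Check that the pods landed on the worker nodes and note the assigned NodePort
kubectl get pods -o wide
kubectl get svc nginx-web
```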

Great, the above output confirms that the worker nodes have joined the cluster. That's all from this guide; I hope you have found it informative. Please share your queries and feedback in the comments section below.

Also Read: How to Create Sudo User on RHEL | Rocky Linux | AlmaLinux


I am a Cloud Consultant with over 15 years of experience in Linux, Kubernetes, cloud technologies (AWS, Azure, OpenStack), automation (Ansible, Terraform), and DevOps. I hold certifications like RHCA, CKA, CKAD, CKS, AWS, and Azure.

5 thoughts on “How to Install Kubernetes Cluster on Rocky Linux 8”

  1. I’m on Rocky Linux 8 and have run this over and over I get this:
    [root@k8s-master-1 keepalived]# sudo kubeadm init --control-plane-endpoint "vip-k8s-master:8443"
    [init] Using Kubernetes version: v1.24.2
    [preflight] Running pre-flight checks
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR CRI]: container runtime is not running: output: E0626 11:17:46.022101 9976 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
    time="2022-06-26T11:17:46Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher

    • Hi Rick,

      To fix the issue, kindly try following commands one after the another,

      # rm /etc/containerd/config.toml
      # systemctl restart containerd
      # kubeadm init

  2. Hi Rick,

    1st, thanks for this cool documentation 🙂

    2nd I think I found a bug in your kubeadm-config.yaml example.
    The two YAML documents should be separated by `---` in order to have it valid.

    Hope this helps,


  3. small update regarding calico installation :
    kubectl apply -f ''

