How to Install Kubernetes Cluster on Ubuntu 22.04

Are you looking for an easy guide on how to install a Kubernetes cluster on Ubuntu 22.04 (Jammy Jellyfish)?

The step-by-step guide on this page will show you how to install a Kubernetes cluster on Ubuntu 22.04 using the kubeadm command.

Kubernetes has become the de facto container orchestration platform, empowering developers and system administrators to manage and scale containerized applications effortlessly. If you’re an Ubuntu 22.04 user eager to harness the power of Kubernetes, you’ve come to the right place.

A Kubernetes cluster consists of worker nodes, on which application workloads are deployed, and a set of master nodes, which are used to manage the worker nodes and pods in the cluster.

Prerequisites

In this guide, we are using one master node and two worker nodes. The system requirements for each node are as follows:

  • Minimal Ubuntu 22.04 installation
  • 2 GB RAM or more
  • 2 CPU cores / 2 vCPUs or more
  • 20 GB of free disk space on /var or more
  • Sudo user with admin rights
  • Internet connectivity on each node

Lab Setup

  • Master Node:  192.168.1.173 – k8smaster.example.net
  • First Worker Node:  192.168.1.174 – k8sworker1.example.net
  • Second Worker Node:  192.168.1.175 – k8sworker2.example.net

Without any further delay, let’s jump into the installation steps of the Kubernetes cluster.

1) Set Hostname on Each Node

Log in to the master node and set its hostname using the hostnamectl command,

$ sudo hostnamectl set-hostname "k8smaster.example.net"
$ exec bash

On the worker nodes, run

$ sudo hostnamectl set-hostname "k8sworker1.example.net"   // 1st worker node
$ sudo hostnamectl set-hostname "k8sworker2.example.net"   // 2nd worker node
$ exec bash

Add the following entries to the /etc/hosts file on each node,

192.168.1.173   k8smaster.example.net k8smaster
192.168.1.174   k8sworker1.example.net k8sworker1
192.168.1.175   k8sworker2.example.net k8sworker2
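
To quickly verify name resolution, you can ping each node by hostname, for example from the master node,

$ ping -c 2 k8sworker1.example.net
$ ping -c 2 k8sworker2.example.net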

2) Disable Swap & Add Kernel Parameters

Run the following swapoff and sed commands to disable swap. Make sure to run them on all the nodes,

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
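
To verify that swap is now disabled, run the commands below; the Swap line in the free output should show all zeros, and swapon --show should print nothing,

$ free -h
$ swapon --show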

Load the following kernel modules on all the nodes,

$ sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter

Set the following kernel parameters for Kubernetes by running the tee command below,

$ sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload the above changes by running,

$ sudo sysctl --system
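
As an optional sanity check, query the parameters directly; each should report 1,

$ sysctl net.ipv4.ip_forward
$ sysctl net.bridge.bridge-nf-call-iptables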

3) Install Containerd Runtime

In this guide, we are using the containerd runtime for our Kubernetes cluster. To install containerd, first install its dependencies,

$ sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

Enable the Docker repository (the containerd.io package is distributed via Docker’s repository),

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Now, run the following apt commands to install containerd,

$ sudo apt update
$ sudo apt install -y containerd.io

Configure containerd to use systemd as its cgroup driver,

$ containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
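
To confirm the change was applied, grep the config file; the setting should now read true,

$ grep SystemdCgroup /etc/containerd/config.toml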

Restart and enable the containerd service,

$ sudo systemctl restart containerd
$ sudo systemctl enable containerd
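
A quick check that the service is up,

$ systemctl is-active containerd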

4) Add Apt Repository for Kubernetes

Kubernetes packages are not available in the default Ubuntu 22.04 repositories, so we need to add the Kubernetes apt repository. Run the following command to download the public signing key,

$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Next, run the following echo command to add the Kubernetes apt repository.

$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Note: At the time of writing this guide, Kubernetes v1.28 was the latest available release; replace the version in both of the above commands if a newer one is available.

5) Install Kubectl, Kubeadm and Kubelet

After adding the repository, install the Kubernetes components kubectl, kubelet and the kubeadm utility on all the nodes. Execute the following set of commands,

$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
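
The apt-mark hold command prevents these packages from being upgraded unintentionally during routine system updates. You can verify the installed versions with,

$ kubeadm version
$ kubectl version --client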

6) Install Kubernetes Cluster on Ubuntu 22.04

Now we are all set to initialize the Kubernetes cluster. Run the following kubeadm command on the master node only,

$ sudo kubeadm init --control-plane-endpoint=k8smaster.example.net

Output of above command,

[Screenshot: kubeadm init output]

After the initialization is complete, you will see a message with instructions on how to join worker nodes to the cluster. Make a note of the kubeadm join command for future reference.
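
If you misplace the join command or the token expires (bootstrap tokens are valid for 24 hours by default), you can generate a fresh one on the master node at any time,

$ sudo kubeadm token create --print-join-command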

So, to start interacting with the cluster, run the following commands on the master node,

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, run the following kubectl commands to view the cluster and node status,

$ kubectl cluster-info
$ kubectl get nodes

Output,

[Screenshot: kubectl cluster-info and kubectl get nodes output]

7) Join Worker Nodes to the Cluster

On each worker node, run the kubeadm join command you noted down earlier, after initializing the master node in step 6. It should look something like this:

$ sudo kubeadm join k8smaster.example.net:6443 --token vt4ua6.wcma2y8pl4menxh2 \
   --discovery-token-ca-cert-hash sha256:0494aa7fc6ced8f8e7b20137ec0c5d2699dc5f8e616656932ff9173c94962a36

Output from both the worker nodes,

[Screenshot: kubeadm join output on worker node 1]

[Screenshot: kubeadm join output on worker node 2]

The above output from the worker nodes confirms that both nodes have joined the cluster. Check the node status from the master node using the kubectl command,

$ kubectl get nodes

[Screenshot: kubectl get nodes output after joining the worker nodes]

As we can see, the node status is ‘NotReady’. To make the nodes active, we must install a CNI (Container Network Interface) plugin such as Calico, Flannel or Weave Net.

8) Install Calico Network Plugin

A network plugin is required to enable communication between pods in the cluster. Run the following kubectl command from the master node to install the Calico network plugin,

$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml

Output of the above command would look like below,

[Screenshot: Calico manifest apply output]

Verify the status of the pods in the kube-system namespace,

$ kubectl get pods -n kube-system

Output,

[Screenshot: kube-system pods after Calico installation]

Perfect, check the node status as well.

$ kubectl get nodes

[Screenshot: nodes in Ready state after Calico installation]

Great, the above confirms that the nodes are now in the Ready state. We can now say that our Kubernetes cluster is functional.
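
Instead of re-running kubectl get nodes, you can also block until every node reports Ready,

$ kubectl wait --for=condition=Ready nodes --all --timeout=300s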

9) Test Your Kubernetes Cluster Installation

To test the Kubernetes installation, let’s deploy an nginx-based application and try to access it.

$ kubectl create deployment nginx-app --image=nginx --replicas=2

Check the status of the nginx-app deployment,

$ kubectl get deployment nginx-app
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app   2/2     2            2           68s
$

Expose the deployment as a NodePort service,

$ kubectl expose deployment nginx-app --type=NodePort --port=80
service/nginx-app exposed
$

Run the following commands to view the service status,

$ kubectl get svc nginx-app
$ kubectl describe svc nginx-app

Output of above commands,

[Screenshot: nginx-app deployment and service status]
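
Note that the NodePort is allocated randomly from the 30000-32767 range by default, so yours will likely differ from the one shown here. To extract just the port number, you can use a jsonpath query,

$ kubectl get svc nginx-app -o jsonpath='{.spec.ports[0].nodePort}'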

Use the following curl command to access the nginx-based application, replacing 31246 with the NodePort assigned to your service,

$ curl http://<worker-node-ip-address>:31246

$ curl http://192.168.1.174:31246

Output,

[Screenshot: curl output showing the default nginx welcome page]

Great, the above output confirms that the nginx-based application is accessible.
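
If you wish to clean up the test application afterwards, delete the service and the deployment,

$ kubectl delete svc nginx-app
$ kubectl delete deployment nginx-app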

That’s all from this guide. I hope you have found it useful. Kindly post your queries and feedback in the comments section below.

Also Read: How to Install Kubernetes Dashboard Using Helm

Conclusion

Congratulations! You have successfully set up a Kubernetes cluster on Ubuntu 22.04. With Kubernetes at your disposal, you can now orchestrate, scale, and manage your containerized applications efficiently. Explore further by deploying more complex applications and services on your Kubernetes cluster, and take full advantage of the power and flexibility it offers.

Also Read: How to Install Kubernetes (K8s) Metrics Server Step by Step


68 thoughts on "How to Install Kubernetes Cluster on Ubuntu 22.04"

  1. after going through 100s of videos and different blog post , finally your document helped me to setup working kubernetes cluster….kudoz

  2. The commands adding keys for the apt repos should be changed to something like this:

    sudo curl -fsSL 'https://download.docker.com/linux/ubuntu/gpg' | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg

    • Hi Niels,

      Thanks for sharing the updated command. As per your suggestion, I have modified command in article as well.

  3. Hi Niels, when running the “kubeadm init” or “kubeadm join” i had this error

    [preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: blkio
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

    To solve it, i had to set ip_forward content with 1 by following command:

    echo 1 > /proc/sys/net/ipv4/ip_forward

    • If the k8s master token expired then also will get this kind of issue.
      check using below command to know token expired or not
      > kubeadm token list
      if no results it means token expired
      Generate new token like below
      > kubeadm token create

  4. I currently have Ubuntu 22.04 installed, and I am planning on installing GitLab. I’d want to have a possibly basic Kubernetes environment. I am quite used to kubectl, but I am not confident whether to use MicroK8s, Rancher, MiniKube, or something else. Do you have any opinion about my situation? Thank you so much for answering.

  5. This is a great tutorial, thank you for putting it together! I have managed to successfully create a cluster using it, which is awesome! Really appreciate the time you’ve taken here.

  6. I don’t know why my pods is having crash status on calico

    root@k8smaster:~# kubectl get pods -n kube-system -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    calico-kube-controllers-798cc86c47-g95km 1/1 Running 0 6m54s 172.16.16.129 k8smaster
    calico-node-2jwsx 1/1 Running 0 6m54s 192.168.60.10 k8smaster
    calico-node-5w2bj 0/1 Init:CrashLoopBackOff 5 (2m34s ago) 6m54s 192.168.60.11 k8sworker1
    calico-node-8rqx5 0/1 Init:CrashLoopBackOff 6 (22s ago) 6m54s 192.168.60.12 k8sworker2

  7. Finally – It works as described. After trying to migrate from AWS to baremetal for a few weeks this worked. I really appreciate you!

  8. Hi Pradeep,

    I have been trying to Set-up the K8S Cluster for the last 10 days. I have watched many youtube videos as well for that. Finally, today I set up the K8S Cluster with the help of your amazing step-to-step guide.

    Thank you so much for your efforts.

  9. Thanks for putting this guide together. Spent hours on various sites following different instructions with no success.
    You guide made it nice and easy.

  10. Hi, nice tutorial, could you please help me, i have been follow the step until install calico, but when i run “$ kubectl get pods -n kube-system”, the status appears there are pending and running, see below :

    NAME READY STATUS RESTARTS AGE
    calico-kube-controllers-7bdbfc669-cwdk5 0/1 Pending 0 8m
    calico-node-qk4vp 0/1 Init:ImagePullBackOff 0 8m
    calico-node-vbjg8 0/1 Init:ImagePullBackOff 0 8m
    coredns-787d4945fb-lzblv 0/1 Pending 0 51m
    coredns-787d4945fb-rqnkc 0/1 Pending 0 51m
    etcd-k8smaster.alfatih.v2 1/1 Running 0 51m
    kube-apiserver-k8smaster.alfatih.v2 1/1 Running 0 51m
    kube-controller-manager-k8smaster.alfatih.v2 1/1 Running 0 51m
    kube-proxy-52r46 1/1 Running 0 11m
    kube-proxy-w5hdz 1/1 Running 0 51m
    kube-scheduler-k8smaster.alfatih.v2 1/1 Running 0 51m
    ================================================================
    FYI, i run this K8S on virtual box.

  11. I am running into an issue with the nginx-app is not being exposed properly. There is no “External-IP”.

    kubectl get svc nginx-app
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    nginx-app NodePort 10.109.94.99 80:30817/TCP 6m42s

    Can anyone assist?

  12. Thanks Pradeep for this wonderful document each and everything well explained and documented.
    Much Appreciated for your efforts.

  13. Thanks for the step by step guide, clear and concise.

    Thanks for taking to time to post the article.

    It was my second attempt at installing K8s and the second post I looked at, which is quite good considering the amount of posts out there.

    Now to get the dashboard installed and setup for using kvm instead of containers.

  14. excellent tutorial, Last week works perfectly, but today, something happened with the calico step.

    $ curl 'https://projectcalico.docs.tigera.io/manifests/calico.yaml' -O
    $ kubectl apply -f calico.yaml
    can you help?

  15. hi, after running kubeadm init, i ran kubectl cluster-info and it worked.

    but after a few minutes, i ran again kubectl cluster-info, error occured “k8smaster.example.net:6443 was refused – did you specify the right host or port?”

    can you help me? i have followed all instruction in this tutorial.

    i’m using ubuntu 22.04, running on EC2 Instance (AWS)

  16. Can anyone help me with error.

    ubuntu@k8smaster:~$ sudo kubeadm init --control-plane-endpoint=k8smaster.example.net

    [init] Using Kubernetes version: v1.26.1
    [preflight] Running pre-flight checks
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-6443]: Port 6443 is in use
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR Port-2379]: Port 2379 is in use
    [ERROR Port-2380]: Port 2380 is in use
    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher

  17. Hello,
    I get this error on the worker nodes when I run `sudo kubeadm join`

    [failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn't load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]

    What can I do to resolve this?

    • I had this problem too, then I realized the kubeadm init command output listed two separate join commands. The first was to add control-plane nodes, and that was the command I’d copied. Make sure you get the second command, which does not have the "--control-plane" parameter

  18. Great Job Pradeep –

    BTW, in this guide – I don’t see a reference of "sudo kubeadm init --pod-network-cidr=192.168.0.0/16" command anywhere. How POD Network is configured, what would be the POD Network CIDR (if anything default)? and where to find it? Can we update the config later?
    Thanks
    ~Bish

  19. Hi == agreed best step by step –others failed.. Issuue at the end though.
    NO ENDPOINTS Here :

    ~$ kubectl describe svc nginx-app
    ————————————–
    Name: nginx-app
    Namespace: default
    Labels: app=nginx-app
    Annotations:
    Selector: app=nginx-app
    Type: NodePort
    IP Family Policy: SingleStack
    IP Families: IPv4
    IP: 10.99.225.11
    IPs: 10.99.225.11
    Port: 80/TCP
    TargetPort: 80/TCP
    NodePort: 32495/TCP
    Endpoints:
    Session Affinity: None
    External Traffic Policy: Cluster
    Events:
    ———————————————- and then
    curl 'http://worker1 ip address:31246'
    curl: (7) Failed to connect to 10.165.2.129 port 31246 after 0 ms: Connection refused ??
    Thanks

    • Hi Tom,

      As per your nginx service, nodeport is 32495 but in your curl command you are using different port. Please crosscheck the nodeport.

  20. Thanks for the guide.
    Seems like I too have same story of others. Tried several docs and videos. Finally this helped me.

  21. After following your steps I was able to install kubeadm successfully. I tried so many docs, videos for a week finally got to this so far.

    But now I get the error like this again. This was the reason i tried several installation guide. What could be the reason behind this error ?

    pseudouser@masterkube:~$ kubectl get nodes
    E0409 13:16:54.278679 8776 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E0409 13:16:54.278888 8776 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E0409 13:16:54.279978 8776 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E0409 13:16:54.281517 8776 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E0409 13:16:54.282856 8776 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    pseudouser@masterkube:~$

  22. Thanks a lot, I have spent 1 weeks, was getting coredns-pending in centos and coredns-creating in ubuntu with docker and containerd setup. Finally, it works with your perfect details.

  23. Hi Niels and thank you for this very important document. It is by far the best I came across online.

    I am having issues with the worker nodes joining the cluster.
    Here is my setup on VMWare workstation. The VMs are all from CentOS8.
    192.168.234.130 k8smaster.example.net k8smaster
    192.168.234.131 k8sworker1.example.net k8sworker1
    192.168.234.132 k8sworker2.example.net k8sworker2

    Here is the error I am getting on both worker nodes:

    devops@k8smaster:~$ kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8smaster.example.net NotReady control-plane 2d7h v1.27.3
    devops@k8smaster:~$

    devops@k8sworker1:~$ sudo kubeadm join k8smaster.example.net:6443 --token vt4ua6.wcma2y8pl4menxh2 --discovery-token-ca-cert-hash sha256:0494aa7fc6ced8f8e7b20137ec0c5d2699dc5f8e616656932ff9173c94962a36
    [preflight] Running pre-flight checks
    error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "vt4ua6"
    To see the stack trace of this error execute with --v=5 or higher
    devops@k8sworker1:~$

    devops@k8sworker2:~$ sudo kubeadm join k8smaster.example.net:6443 --token vt4ua6.wcma2y8pl4menxh2 --discovery-token-ca-cert-hash sha256:0494aa7fc6ced8f8e7b20137ec0c5d2699dc5f8e616656932ff9173c94962a36
    [sudo] password for devops:
    [preflight] Running pre-flight checks
    error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "vt4ua6"
    To see the stack trace of this error execute with --v=5 or higher
    devops@k8sworker2:~$
    devops@k8sworker2:~$

    I processed with the CNI installation in hope that it will hep. But this is what I got below. The worker nodes are still not joining the cluster.

    devops@k8smaster:~$ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    calico-kube-controllers-6c99c8747f-2skjx 1/1 Running 0 64s
    calico-node-v498j 1/1 Running 0 64s
    coredns-5d78c9869d-5ljmp 1/1 Running 0 2d7h
    coredns-5d78c9869d-qn29s 1/1 Running 0 2d7h
    etcd-k8smaster.example.net 1/1 Running 2 (35m ago) 2d7h
    kube-apiserver-k8smaster.example.net 1/1 Running 2 (35m ago) 2d7h
    kube-controller-manager-k8smaster.example.net 1/1 Running 2 (35m ago) 2d7h
    kube-proxy-vzw5z 1/1 Running 2 (35m ago) 2d7h
    kube-scheduler-k8smaster.example.net 1/1 Running 2 (35m ago) 2d7h
    devops@k8smaster:~$ kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8smaster.example.net Ready control-plane 2d7h v1.27.3
    devops@k8smaster:~$ kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8smaster.example.net Ready control-plane 2d7h v1.27.3

    Could you please help? Many Thank in advance.

    Joel

    • you can use command "kubeadm token create --print-join-command" to get the join command
      Note: add sudo before the command

  24. After lots of efforts to setup by watching you tube video’s and blogs finally found solution and I able setup cluster in 10 minutes, If follow all the steps it will be successful in first attempt.

    Thank you very much for such wonderful information,

  25. Thank you so much! As others already wrote this is the working solution after trying lots of other tutorials. Would be nice to get deeper into network for example where the NGINX Service got the IP from and so on.

  26. $ sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF

    change to

    sudo tee -a /etc/sysctl.d/kubernetes.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF

  27. This is a great article and it brought me a lot closer to being able to install.
    There are a few edits I might suggest.
    When installing on a VM with multiple NICs there might be all sort of problems.
    So what eventually I had to do is:
    1. Adding the additional params in the kubeadm init like so:
    sudo kubeadm init --control-plane-endpoint=k8s-master --apiserver-advertise-address= --cri-socket=/var/run/containerd/containerd.sock --pod-network-cidr=192.168.0.0/16
    2. Adding the following in the calico yaml:
    "ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "false",
    "iface": ""
    }
    Without these two edits the commands would be confused regarding the ip and interface to use.

  28. Hello.
    I have followed the guide until sudo apt install -y containerd.io.
    When I run this command, I get the following error message:
    E: Unable to locate package containerd.io
    E: Couldn't find any package by glob 'containerd.io'

    What am I doing wrong?

