Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10

14 Responses

  1. Glenn says:

    Excellent, worked a charm!

  2. huberttrz says:

    Got some issues on Ubuntu 18.04 / docker 18.06 / kubeadm 1.13 with flannel pod not starting on worker nodes.
    "Error registering network: failed to acquire lease: node "" pod cidr not assigned"
    The solution, as per '', was to patch the nodes with the podCIDR:
    kubectl patch node -p '{"spec":{"podCIDR":""}}'

    Maybe it’ll save some time for someone.

    Thanks for your work.
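
    The patch workaround above needs a node name and a CIDR; both values below are hypothetical (the real ones come from `kubectl get nodes` and the `--pod-network-cidr` passed to `kubeadm init`). A minimal sketch:

```shell
# Sketch of the podCIDR patch; "worker-1" and "10.244.1.0/24" are
# placeholders -- substitute your node name and the per-node subnet
# carved out of the cluster's --pod-network-cidr.
kubectl get nodes                  # find the node missing its podCIDR
kubectl patch node worker-1 \
  -p '{"spec":{"podCIDR":"10.244.1.0/24"}}'
```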

  3. nakul says:

    Issue on Ubuntu 18.04: when the machine is rebooted, Kubernetes does not start automatically; kubeadm has to be reset and the worker has to join again.
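
    One frequent cause of this symptom (an assumption here, not confirmed by the commenter) is swap coming back after a reboot: kubelet refuses to start while swap is enabled. A sketch of making the fix persistent:

```shell
# Turn swap off now and comment out swap entries in /etc/fstab so it
# stays off after a reboot, then make sure kubelet starts on boot.
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo systemctl enable kubelet
```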

  4. Askar says:

    Thank you. Using the article I was able to deploy a k8s cluster in a few minutes, running the master and worker nodes as VMs under VirtualBox.

  5. Andrei says:

    If you're getting status 'NotReady' from kubectl get nodes, it may be because flannel is broken.

    Use Cilium instead with this command:

    kubectl apply -f ''

  6. bernhard says:

    nice … thanks!

  7. manas says:

    [email protected]:~# sudo kubeadm join --token q2r9un.l7bapq1m18wtd9ij \
    > --discovery-token-ca-cert-hash sha256:72e5b3bcf18ae48d107da8a393b70b4896ca5abb8ec6609da98b4710b080ec55
    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at ''

    Note: while executing this on the slave node it gets stuck and does not go forward. Does anyone know what causes this problem?

    • Lionel says:

      1) Check that you can run "kubectl get nodes" on the master; if it fails because port 6443 is closed, run "strace -eopenat kubectl version" and retry.
      2) If you can no longer ping the master, run "sudo ip link set cni0 down".
      With both of these commands, I was then able to join the master from a slave.
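
    The IsDockerSystemdCheck warning in the log above can be silenced by switching Docker to the systemd cgroup driver. A sketch, assuming a standard Docker install where /etc/docker/daemon.json does not yet exist (it would be overwritten here):

```shell
# Tell Docker to use the systemd cgroup driver, as the kubeadm
# preflight warning recommends, then restart Docker.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
```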

  8. Saikumar G says:

    I got the following error.

    [init] Using Kubernetes version: v1.15.1
    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at ''
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-10251]: Port 10251 is in use
    [ERROR Port-10252]: Port 10252 is in use
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR Port-2380]: Port 2380 is in use
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
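
    Ports 10250, 10251, 10252, and 2380 being in use usually means a previous kubeadm run (or the kubelet/etcd it started) is still holding them. A common remedy, sketched here as an assumption about the cause:

```shell
# Tear down the state left by an earlier init/join attempt,
# then re-run "kubeadm init".
sudo kubeadm reset
```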

  9. Shivdas Kanade says:

    Worked for me. Step by step explained. Thanks

  10. Nestor Alonso says:

    The 'NotReady' status on the slave nodes, in my case, was due to a missing package on the nodes. Solved with apt-get install apt-transport-https. Then I removed the nodes from the cluster (on the master node) and joined them again from the slave nodes. A minute later, all nodes were 'Ready'.
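
    The remove-and-rejoin procedure described above can be sketched as follows; `worker-1` is a hypothetical node name, and the angle-bracket values are placeholders for your cluster's join credentials:

```shell
# On the worker: install the missing package.
sudo apt-get update && sudo apt-get install -y apt-transport-https

# On the master: remove the node from the cluster.
kubectl drain worker-1 --ignore-daemonsets
kubectl delete node worker-1

# On the worker: reset and rejoin with the cluster's join command.
sudo kubeadm reset
sudo kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```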

  11. Saikumar G says:

    While adding the slave node in the last step I got the error mentioned below. Please let me know how to fix it.

    error execution phase preflight: couldn’t validate the identity of the API Server: abort connecting to API servers after a timeout of 5m0s
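
    This timeout often means the bootstrap token has expired (tokens default to a 24-hour TTL) or that port 6443 on the master is unreachable from the worker. A sketch of regenerating a fresh join command on the master, assuming an expired token is the cause:

```shell
# On the master: print a fresh, complete join command
# (bootstrap tokens expire after 24h by default).
sudo kubeadm token create --print-join-command
```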

  12. Carl De Pasquale says:

    Hi, I just came across the post and it is great. Everything worked fine until I tried to start a pod. I keep getting:
    web-server-pod 0/1 ContainerCreating 0 5m17s

    any ideas?

    Thanks,
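
    A pod stuck in ContainerCreating usually has the reason in its events, often an image-pull or CNI error. A sketch of where to look (`web-server-pod` is the pod name from the comment above):

```shell
# Inspect the pod's events for the underlying error.
kubectl describe pod web-server-pod | tail -n 20
# Check that the network (CNI) pods in kube-system are running.
kubectl get pods -n kube-system -o wide
```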

  13. doorsfan says:

    flannel is broken!
    kubectl apply -f ""
