Building a Kubernetes cluster for the lab at home or at work doesn’t have to be complicated. Below are the steps to create a 3-node cluster (1 master + 2 workers).
Install three copies of Ubuntu 18.04. I used VirtualBox + Vagrant (with the "ubuntu/bionic64" box) to create mine. My nodes are named as follows:
- k8s-c3-n1 (ip: 192.168.11.101)
- k8s-c3-n2 (ip: 192.168.11.102)
- k8s-c3-n3 (ip: 192.168.11.103)
Be sure to disable swap on all nodes; the kubelet won't start while swap is enabled. Comment out or remove the swap entry in /etc/fstab and reboot, or turn swap off temporarily with "swapoff -a".
On all three nodes, download the Google apt key and add it:
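The commands I used were roughly the following. The key URL and repository name ("kubernetes-xenial" also covered bionic at the time) may have changed since, so treat this as a sketch:

```shell
# Add Google's apt signing key and the Kubernetes package repository (all three nodes)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
```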
Install kubeadm – also on all three nodes:
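Something along these lines (kubelet and kubectl are pulled in as dependencies of kubeadm, but it doesn't hurt to name them explicitly):

```shell
# Install kubeadm plus the kubelet and kubectl (all three nodes)
sudo apt-get install -y kubeadm kubelet kubectl
```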
We also need Docker on all three nodes. Install it and enable the service as follows:
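A minimal sketch using the docker.io package from the Ubuntu repositories (Docker's own "docker-ce" packages work too):

```shell
# Install Docker and make sure the service starts now and at boot (all three nodes)
sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
```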
Configuring the master node:
On the master (k8s-c3-n1 in this case), enable the kubelet service:
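```shell
# Make the kubelet start at boot on the master
sudo systemctl enable kubelet
```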
Initialize the cluster:
Now let’s initialize the cluster with “kubeadm init”. This is done on the master node. Note the two values we need to provide:
- The API server advertise address: If your hosts have multiple network interfaces, this specifies which IP address the API server advertises to the other nodes. Make sure all nodes are able to communicate on the network chosen here.
- The pod network CIDR: The internal network range the pods will use. I chose 10.244.0.0/16, which happens to be Flannel’s default; if you pick a different range you’ll need to adjust the Flannel manifest to match.
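With the values above, the init command on my master looked like this:

```shell
# Initialize the control plane (run on the master node only)
sudo kubeadm init \
  --apiserver-advertise-address=192.168.11.101 \
  --pod-network-cidr=10.244.0.0/16
```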
“kubeadm init” will produce output similar to the below. Make note of the “kubeadm join” command, as it’s unique to your installation and we’ll need it when registering the worker nodes:
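The join command near the end of the output follows this general shape; the token and hash are unique to your installation, so copy them from your own output rather than from here:

```shell
# Shown in the "kubeadm init" output -- token and hash are placeholders
kubeadm join 192.168.11.101:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```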
Let’s finalize by running the commands suggested in the “kubeadm init” output. This is also done on the master node:
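These are the commands the init output suggests for making kubectl work for a regular user:

```shell
# Copy the admin kubeconfig to the current user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```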
Congratulations – Kubernetes is installed! However, there is still a little bit of work to be done. First installing a pod network and then adding our two worker nodes.
Adding the pod network:
We’ll use Flannel for this example. There are other network solutions available too, but this worked well for me in my lab, and it’s both quick and easy:
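At the time, the Flannel manifest could be applied straight from GitHub. The URL may have moved since (the project now lives under the flannel-io organization), so check the Flannel project for the current location:

```shell
# Deploy the Flannel pod network (run on the master node)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```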
Joining the worker nodes:
Finally, let’s join our two workers to the master node by executing the “kubeadm join” string which was provided by our “kubeadm init” earlier.
On each of the two worker nodes, execute the “kubeadm join” string that is unique to your installation. For me it was as follows:
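Run the command as root on each worker. With placeholders for the token and hash, it looks something like this:

```shell
# Join the worker to the cluster -- substitute your own token and hash
sudo kubeadm join 192.168.11.101:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```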
Let’s see if the three nodes are visible. On the master node, execute “kubectl get nodes”. Output should be similar to the below:
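Illustrative output only; ages and the Kubernetes version will of course vary with your installation:

```shell
kubectl get nodes

# NAME        STATUS   ROLES    AGE   VERSION
# k8s-c3-n1   Ready    master   10m   v1.x.y
# k8s-c3-n2   Ready    <none>   2m    v1.x.y
# k8s-c3-n3   Ready    <none>   2m    v1.x.y
```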
It may take around 30 sec to a minute for the workers to reach “Ready” status, but that’s it – Kubernetes is installed and ready.
We can now deploy containers / pods and they will be scheduled to run on the worker nodes. Note: There’s no load balancer or other fancy stuff installed by default so it’s pretty bare-bones 🙂
Deploy a test application:
Let’s verify that our cluster works by deploying a container with a web server.
On the master node (“k8s-c3-n1” in this example), deploy the “httpd” webserver using “kubectl run”. I picked the name “httpd-01” for this pod. This is arbitrary so feel free to use any name that makes sense in your installation.
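On recent kubectl versions this creates a bare pod; on older versions “kubectl run” created a deployment instead, so your object names may differ slightly:

```shell
# Deploy the httpd web server as a pod named "httpd-01"
kubectl run httpd-01 --image=httpd
```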
We can now check that it’s running with “kubectl get pods”:
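Illustrative output (if your kubectl created a deployment, the pod name will carry a generated suffix):

```shell
kubectl get pods

# NAME       READY   STATUS    RESTARTS   AGE
# httpd-01   1/1     Running   0          30s
```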
Since this is a web server, we want to expose port 80 so it can be accessed. This creates a service, which has to be named. I picked the name “httpd-01-http”, but choose any name that makes sense in your installation. Note that we’re referring to the name we gave our pod at deployment: “httpd-01”.
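A sketch of the expose command; if your “kubectl run” created a deployment rather than a bare pod, use “kubectl expose deployment” instead:

```shell
# Create a service named "httpd-01-http" pointing at the httpd-01 pod on port 80
kubectl expose pod httpd-01 --port=80 --name=httpd-01-http
```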
Let’s find out more about our web server application / pod by using “kubectl get pods” and then “kubectl describe pod <pod id>”:
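Substitute the pod name from your own “kubectl get pods” output:

```shell
kubectl get pods
kubectl describe pod httpd-01
```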
Among a lot of other information we can see it’s running on worker node “k8s-c3-n2”.
Let’s also get some information about the service we got when exposing port 80 earlier:
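```shell
# Show details of the service, including its endpoint(s)
kubectl describe service httpd-01-http
```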
Here we can see the endpoint IP and port: “10.244.1.2:80”
Since we know it’s running on worker node “k8s-c3-n2”, let’s SSH there and verify that we can get the default webpage:
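From the worker node, fetch the page from the endpoint we found above (the IP is specific to my installation; use the one from your own service description):

```shell
ssh k8s-c3-n2
curl http://10.244.1.2:80
# <html><body><h1>It works!</h1></body></html>
```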
As shown in the output: “It works!”.
That goes both for the httpd container and for the Kubernetes cluster. Have fun!