Building a Kubernetes (k8s) cluster for the home lab
DEPRECATED: The setup in this article has been superseded by a newer, better version as per the link below:
Introduction:
Building a Kubernetes cluster for the lab at home or at work doesn’t have to be complicated. Below are the steps to create a 3-node cluster (1 master + 2 workers).
Prerequisites:
Install three copies of Ubuntu 18.04. I used VirtualBox + Vagrant (and the image “ubuntu/bionic64”) to create mine. My nodes are named as follows:
- k8s-c3-n1 (ip: 192.168.11.101)
- k8s-c3-n2 (ip: 192.168.11.102)
- k8s-c3-n3 (ip: 192.168.11.103)
Be sure to disable swap. Kubernetes won’t work if it’s enabled. This can be done by commenting out or removing the entry for swap in /etc/fstab followed by a reboot. Temporarily it can be turned off with “swapoff -a”.
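For example (assuming a standard /etc/fstab with a single swap entry):
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab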
Installing Kubernetes:
On all three nodes, download the Google apt key and add it:
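Something like the following should do it (the Kubernetes apt repository location may have changed since this was written):
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list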
Install kubeadm – also on all three nodes:
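Roughly:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl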
Installing Docker:
We also need Docker on all three nodes. Install it and enable the service as follows:
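For example, using the docker.io package from the Ubuntu repositories:
sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker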
Configuring the master node:
On the master (k8s-c3-n1 in this case), enable the kubelet service:
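In other words:
sudo systemctl enable kubelet.service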
Initialize the cluster:
Now let’s initialize the cluster with “kubeadm init”. This is done on the master node. Note the two parameters we need to provide (see the example command after this list):
- The API server advertise address: If your hosts have multiple network cards, specify which IP address the API server should bind to. Make sure the nodes are all able to communicate on the network chosen here.
- The pod network CIDR: The network you wish the pods to utilize. The choice is largely arbitrary as long as it doesn’t overlap with your existing networks; I chose 10.244.0.0/16, which also happens to be Flannel’s default.
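With the example values used in this post, the init command looks like this:
sudo kubeadm init --apiserver-advertise-address=192.168.11.101 --pod-network-cidr=10.244.0.0/16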
Running “kubeadm init” will result in output similar to the below. Make note of the “kubeadm join” string, as it’s unique to your installation and we’ll need it when registering the worker nodes:
Let’s finalize by running the commands suggested in the “kubeadm init” output. This is done on the master node – the same node where we ran “kubeadm init”.
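The commands printed by “kubeadm init” are typically these:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config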
Congratulations – Kubernetes is installed! However, there is still a little bit of work to be done: first we’ll install a pod network, then add our two worker nodes.
Adding the pod network:
We’ll use Flannel for this example. There are other network solutions available too, but this worked well for me in my lab, and it’s both quick and easy:
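At the time of writing the Flannel manifest lived under the coreos GitHub organisation (it has since moved, so adjust the URL if needed):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml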
Joining the worker nodes:
Finally, let’s join our two workers to the master node by executing the “kubeadm join” string which was provided by our “kubeadm init” earlier.
On each of the two worker nodes, execute the “kubeadm join” string that is unique to your installation. For me it was as follows:
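The real string contains a unique token and CA certificate hash; in placeholder form it looks like this:
sudo kubeadm join 192.168.11.101:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>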
Let’s see if the three nodes are visible. On the master node, execute “kubectl get nodes”. Output should be similar to the below:
It may take around 30 sec to a minute for the workers to reach “Ready” status, but that’s it – Kubernetes is installed and ready.
We can now deploy containers / pods and they will be scheduled to run on the worker nodes. Note: There’s no load balancer or other fancy stuff installed by default so it’s pretty bare-bones 🙂
Deploy a test application:
Let’s verify that our cluster works by deploying a container with a web server.
On the master node (“k8s-c3-n1” in this example), deploy the “httpd” webserver using “kubectl run”. I picked the name “httpd-01” for this pod. This is arbitrary so feel free to use any name that makes sense in your installation.
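For example (depending on the Kubernetes version, “kubectl run” creates either a bare pod or a deployment):
kubectl run httpd-01 --image=httpd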
We can now check that it’s running with “kubectl get pods”:
Since this is a web server we want to expose port 80 so it can be accessed. This creates a service, which has to be named. I picked the name “httpd-01-http”, but choose any name that makes sense in your installation. Note that we’re referring to the name we gave our pod at deployment: “httpd-01”.
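A sketch of the expose command (use “deployment” instead of “pod” if your “kubectl run” created a deployment):
kubectl expose pod httpd-01 --port=80 --name=httpd-01-http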
Let’s find out more about our web server application / pod by using “kubectl get pods” and then “kubectl describe pod <pod id>”:
Among a lot of other information we can see it’s running on worker node “k8s-c3-n2”.
Let’s also get some information about the service we got when exposing port 80 earlier:
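For example:
kubectl describe service httpd-01-http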
Here we can see the endpoint IP and port: “10.244.1.2:80”
Since we know it’s running on worker node “k8s-c3-n2”, let’s SSH there and verify that we can get the default webpage:
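Roughly as follows – the “vagrant” user is an assumption based on the Vagrant-built VMs, and the pod IP comes from the service endpoint above:
ssh vagrant@192.168.11.102
curl http://10.244.1.2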
As shown in the output: “It works!”.
That goes both for the httpd container and for the Kubernetes cluster. Have fun!
Tensorflow item recognition
Leveraging Google’s Tensorflow Machine Learning libraries for item recognition in images is fantastically easy to get going. The below Dockerfile will set up a container with everything required and allow the user to feed it the URL of an image for classification:
Dockerfile:
Download raw from here: https://pastebin.com/raw/mdJ225vp
Save the above into a file called “Dockerfile”.
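In case the pastebin link goes away, here is a rough sketch of what such a Dockerfile could look like, based purely on the description in this post. The base image, package versions, root password and script location are all assumptions:
FROM ubuntu:16.04
# Python, pip, an SSH server and TensorFlow
RUN apt-get update && apt-get install -y python-pip openssh-server wget && \
    pip install tensorflow
# Allow root logins over SSH with the password "tensorflow"
RUN mkdir /var/run/sshd && echo 'root:tensorflow' | chpasswd && \
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Fetch the classic ImageNet classification example from the TensorFlow models repo
# (the path within that repo has moved over time)
RUN wget -O /root/classify_image.py \
    https://raw.githubusercontent.com/tensorflow/models/master/tutorials/image/imagenet/classify_image.py
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]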
Enter the directory where the Dockerfile is saved and build the Docker image:
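Assuming an arbitrary image tag of “tensorflow-classifier”:
docker build -t tensorflow-classifier .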
Verify the Docker image:
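The new image should show up in the list:
docker images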
Run the image. We’ll expose SSH on port 22 on the container as 2222 on the host:
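Using the tag chosen above:
docker run -d -p 2222:22 tensorflow-classifier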
Verify the local Docker gateway IP using the container ID (81f13360885f in this case – use “docker ps” to find out):
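For example:
docker inspect 81f13360885f | grep -i gateway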
SSH and execute the image classification script (password: “tensorflow”):
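Along these lines, assuming the gateway IP from the previous step is 172.17.0.1 and the script location from the Dockerfile sketch above (adjust both to your actual setup):
ssh -p 2222 root@172.17.0.1
# then, inside the container, download the image and classify it:
wget -O /tmp/image.jpg <image URL>
python /root/classify_image.py --image_file /tmp/image.jpg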
This is the image we’ve pulled down:
And this is the classification result:
Not too bad 🙂 Tensorflow accurately detects that the image contains a scooter, a crash helmet and even sees the disk brake on the scooter! Try with any image URL to see what Tensorflow will classify your image as. Have fun!
Ubuntu 18.04.1 – Change hostname
While I’d normally use “hostnamectl set-hostname <new hostname>” to modify the hostname of a Linux box, that doesn’t work for Ubuntu 18.04 when cloud-init is in use – the hostname will remain unchanged. Instead, modify as follows.
In /etc/cloud/cloud.cfg, modify “preserve_hostname” from “false” to “true”:
vi /etc/cloud/cloud.cfg
Modify hostname to the value you want in /etc/hostname:
vi /etc/hostname
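A non-interactive sketch of the same two changes, assuming the default cloud.cfg contents and a placeholder hostname:
sudo sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
echo "new-hostname" | sudo tee /etc/hostname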
Reboot
New iDRAC Ansible module: Version 1.1 released
The recently released version 1.1 adds streaming Server Configuration File (SCP) support, enhanced RAID creation and many other goodies! See the release notes here for details: Dell EMC Ansible modules version 1.1
Below are some installation instructions (in particular for those who have been using the original Ansible modules).
System used:
CentOS 7.5
Get the new Ansible modules for iDRAC off Github:
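The repository name below is my best guess for where these modules lived at the time; check Dell EMC’s GitHub account if it has moved:
git clone https://github.com/dell/Dell-EMC-Ansible-Modules-for-iDRAC.git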
Get the Dell EMC OpenManage Python SDK off Github:
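Again, the repository location is an assumption:
git clone https://github.com/dell/omsdk.git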
Remove some packages or we will run into errors during the SDK install:
NOTE: This will uninstall Ansible. Back up your /etc/ansible/hosts file prior to removing Ansible!
Install the Dell EMC OpenManage Python SDK prerequisites:
Reinstall Ansible:
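For example via pip (yum works as well):
sudo pip install ansible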
Install wheel:
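That is:
sudo pip install wheel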
Build .whl file:
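From the SDK source directory (directory name as per the clone above):
cd omsdk
python setup.py bdist_wheel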
Install the newly built module:
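The exact filename depends on the SDK version, hence the wildcard:
sudo pip install dist/omsdk-*.whl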
Install the new Dell EMC Ansible modules for iDRAC:
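The repository ships its own installer; the script name below is an assumption, so check the repository README for the exact step:
cd ../Dell-EMC-Ansible-Modules-for-iDRAC
sudo python install.py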
All done! The new Ansible modules are installed.
Modifying /etc/ansible/hosts:
The previous version of the Dell EMC Ansible modules for iDRAC required the following format:
The new modules require some different variables:
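Purely as an illustration – the group name, IP and credentials are placeholders, and the variable names reflect my understanding of the 1.1 module parameters, so check the module documentation for the exact names:
[idrac_hosts]
192.168.0.120

[idrac_hosts:vars]
idrac_user=root
idrac_pwd=calvin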
Trying it out:
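A quick smoke test could be a playbook along these lines, run with “ansible-playbook” against the inventory above. The module name and arguments reflect my understanding of the 1.1 release, so adjust to your environment:
- hosts: idrac_hosts
  connection: local
  gather_facts: false
  tasks:
    - name: Get system inventory via iDRAC
      dellemc_get_system_inventory:
        idrac_ip: "{{ inventory_hostname }}"
        idrac_user: "{{ idrac_user }}"
        idrac_pwd: "{{ idrac_pwd }}"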
Working perfectly 🙂