Photon OS on Raspberry Pi 3 model B+

Introduction

Photon OS is a VMware initiative to create a lightweight, Linux-based OS with container support. I have to admit my initial reaction to Photon OS was: “y tho?”

It’s a reasonable reaction. There are MANY Linux-based OS options out there already and essentially all of them have container support. The reason for creating Photon OS would seem to be that VMware wants their own rubber-stamped Linux OS as part of an ecosystem under their control.

Photon OS’s redeeming feature is that it’s really lightweight. Not as lightweight as Ubuntu Core though: Photon OS for Raspberry Pi weighs in at 512 MB while Ubuntu Core is 450 MB. Still, given the influence of VMware in virtualization and their (our) inroads into IoT / M2M with Pulse, it’s likely that Photon OS will take off eventually.

Currently the main barrier to widespread adoption of Photon OS is a lack of commercial support. At the moment it is simply available as an unsupported download from GitHub (here). This could change in the future though and in that case we may see it being utilized more broadly and also outside the lab environments it is currently inhabiting.

Note that unlike Raspbian, which is 32-bit, Photon OS is a 64-bit operating system. That too may help float the boat for some.

Getting started with Photon OS on the Raspberry Pi

First download the image from here: http://dl.bintray.com/vmware/photon/3.0/GA/rpi3/photon-rpi3-3.0-26156e2.tar.xz

Extract the xz-compressed image and write it to a micro-SD card:

tar xf photon-rpi3-3.0-26156e2.tar.xz 
cd rpi3/
sudo dd if=photon-rpi3-3.0-26156e2d.raw of=/dev/mmcblk0 bs=4M;sudo sync

In this example the SD card device is /dev/mmcblk0. This may differ on other systems of course. Please check with “lsblk” first and do be careful. Linux / Unix folks don’t refer to dd as “Disk Destroyer” for nothing.

Boot the Raspberry Pi and log in. The default credentials are: root / changeme

DHCP and SSH are both enabled by default and should make it possible to access the Pi across the network when using a wired connection (I haven’t tried this though). With a Raspberry Pi, however, a wireless connection is likely more convenient. Configuring Wi-Fi is easy and is described in the section that follows.

Photon OS Wi-Fi configuration

There are a few steps to go through for Wi-Fi connectivity but it’s not difficult.

Start the wpa_supplicant service

systemctl start wpa_supplicant@wlan0

Enable the wpa_supplicant service (so it starts with the Pi)

systemctl enable wpa_supplicant@wlan0

Check the service status

systemctl status wpa_supplicant@wlan0

Edit the DHCP settings to enable DHCP for wlan0 instead of eth0

root@photon-rpi3 [ ~ ]# cat /etc/systemd/network/99-dhcp-en.network 
[Match]
Name=e*

[Network]
DHCP=yes
IPv6AcceptRA=no
root@photon-rpi3 [ ~ ]# 

Change “Name=e*” to “Name=w*” to capture the wlan0 interface instead of the wired eth0 interface

root@photon-rpi3 [ ~ ]# vi /etc/systemd/network/99-dhcp-en.network

It should now look something like this:

root@photon-rpi3 [ ~ ]# cat /etc/systemd/network/99-dhcp-en.network 
[Match]
Name=w*

[Network]
DHCP=yes
IPv6AcceptRA=no
root@photon-rpi3 [ ~ ]# 

Restart networking

systemctl restart systemd-networkd

Configuring the wpa supplicant

wpa_passphrase yourSSID yourPassword >> /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
reboot
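For reference, wpa_passphrase appends a network block along these lines to the config file (SSID and passphrase here are placeholders, and the psk value is derived from your actual passphrase):

network={
        ssid="yourSSID"
        #psk="yourPassword"
        psk=<64-character hexadecimal key derived from the passphrase>
}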

Installing Docker

Photon OS comes in a few different sizes and in the larger ones both Docker and Kubernetes are preinstalled. Not so with the Raspberry Pi version though, so we need to install Docker manually.

Packages are installed with either “yum” or “tdnf”. Docker is available in the Photon package repositories, so we’ll use tdnf to run the install below.

Search for Docker packages

root@photon-rpi3 [ ~ ]# tdnf list | grep docker
docker.aarch64                              18.06.2-2.ph3       photon-updates
docker-doc.aarch64                          18.06.2-2.ph3       photon-updates
docker.aarch64                              18.06.1-2.ph3             photon
docker-doc.aarch64                          18.06.1-2.ph3             photon
ovn-docker.aarch64                          2.8.2-3.ph3               photon
docker-py.noarch                            3.5.0-1.ph3               photon
docker-py3.noarch                           3.5.0-1.ph3               photon
docker-pycreds.noarch                       0.3.0-1.ph3               photon
docker-pycreds3.noarch                      0.3.0-1.ph3               photon
root@photon-rpi3 [ ~ ]# 

Install Docker

root@photon-rpi3 [ ~ ]# tdnf install docker

Installing:
libapparmor                    aarch64         2.13-7.ph3           photon-updates   66.57k 68168
libsepol                       aarch64         2.8-1.ph3            photon          611.89k 626576
libselinux                     aarch64         2.8-1.ph3            photon          174.16k 178338
libseccomp                     aarch64         2.3.3-1.ph3          photon          286.28k 293153
libltdl                        aarch64         2.4.6-3.ph3          photon           35.53k 36384
device-mapper-libs             aarch64         2.02.181-1.ph3       photon          315.39k 322960
docker                         aarch64         18.06.2-2.ph3        photon-updates  154.39M 161893076

Total installed size: 155.85M 163418655
Is this ok [y/N]:y

Downloading:
libapparmor                              39330    100%
libsepol                                275180    100%
libselinux                               84756    100%
libseccomp                               80091    100%
libltdl                                  24218    100%
device-mapper-libs                      149078    100%
docker                                43826910    100%
Testing transaction
Running transaction
Installing/Updating: libsepol-2.8-1.ph3.aarch64
Installing/Updating: libselinux-2.8-1.ph3.aarch64
Installing/Updating: device-mapper-libs-2.02.181-1.ph3.aarch64
Installing/Updating: libltdl-2.4.6-3.ph3.aarch64
Installing/Updating: libseccomp-2.3.3-1.ph3.aarch64
Installing/Updating: libapparmor-2.13-7.ph3.aarch64
Installing/Updating: docker-18.06.2-2.ph3.aarch64

Complete!

Start and Enable the docker service

root@photon-rpi3 [ ~ ]# systemctl start docker
root@photon-rpi3 [ ~ ]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
root@photon-rpi3 [ ~ ]# 

Verify the Docker installation

root@photon-rpi3 [ ~ ]# docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
3b4173355427: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
root@photon-rpi3 [ ~ ]# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

root@photon-rpi3 [ ~ ]# 

That’s all! Photon OS is installed, Wi-Fi configured, Docker installed and verified. Ready to rock.

Trying out the Intel Neural Compute Stick 2 – Movidius NCS2

Disclaimer: The opinions in this article (and on this website in general) are entirely mine and not those of my employer Dell EMC. Testing has been done in a short period of time and may not accurately reflect real-world performance.

PDF version of this post is available here

What is it?

This week I’ve tested the Intel Neural Compute Stick 2 (NCS2), a USB stick for visual computing which originally comes from Movidius – a company acquired by Intel in September 2016. Intel’s landing page for the NCS2 can be found here.

The NCS2 is equipped with 16 VPUs, or Visual Processing Units, which are designed specifically for image and video processing. With this it’s possible to run Machine Learning frameworks like Caffe and TensorFlow and to leverage CNNs (Convolutional Neural Networks) to do inferencing on data.

So what?

What makes this really interesting is that since it comes in USB format it can simply be plugged into devices at the edge that normally lack the processing power to run machine learning frameworks. With that it becomes possible to process IoT data where it’s generated. The NCS2 has the potential to make Edge Computing a reality instead of just a buzzword.

The compound impact could be significant if the network backhaul is taken into consideration. Imagine replacing a constant stream of video data across the network with just the metadata resulting from running inferencing on the very same video stream. Massive amounts of video vs. some text data. Network savings alone could pay for this pretty quickly.

Note that the very same type of chip is sold embedded in video cameras and drones, which are pretty expensive. A webcam plus a Raspberry Pi and one of the NCS2 sticks could be a comparatively low-cost way to get a security camera with real-time item recognition.

Why not use a GPU?

Many edge devices lack the ability to host a GPU, either due to space, cost or thermal limitations (GPUs generate a lot of heat and many edge devices are passively cooled).

Hang on – isn’t that just a devkit?

Yes and no. The USB stick can be used as a devkit to develop code for embedded versions of the Movidius chip of the kind that may be destined for cameras, drones, robots, etc. However, it can also be used as a very flexible drop-in solution for any edge devices or IoT gateways with low processing power but where the capability to do inferencing is desired.

Is it actually useful?

Yes, it looks like it might actually be able to do the job. The job being processing video directly on the edge devices. In particular it shines when plugged into platforms with low-end CPUs which would never be able to run inferencing on their own.

Functionality testing

To find out if it’s powerful enough to process video in real time I used a webcam and fed the video stream to a sample application from the OpenVINO toolkit (link). This particular demo app actually does several things at the same time: Facial recognition, Age detection, Pose detection and Mood detection. All of these are run stacked in one command (all details will be included in a hands-on post shortly). It actually performs very well, although note that the video is not in full HD. Accuracy on the age detection isn’t the best, but of course that reflects more on the algorithm / training data than the NCS2 (it thinks I’m in my 20’s which is flattering).

Facial, Age and Mood detection with Intel Neural Compute Stick 2


Performance testing

Inferencing can be done on a CPU as well. So, for the NCS2 to be useful it would have to outperform whatever CPU is already on the platform it’s plugged into. Therefore I ran a benchmark test (this one) on both the NCS2 and a number of CPUs. The CPUs it was compared to were:

  • Atom E3827@1.74GHz
  • i7-4600U CPU@2.10GHz
  • i7-8850H CPU@2.60GHz

When the NCS2 was being used for the benchmark I was also curious to see if the platform it was plugged into affected the results. Would the NCS2 performance vary with the host CPU, memory and storage?

The platforms where the NCS2 was tested:

  • Dell Edge Gateway 5000
  • Dell Latitude 7440
  • Dell Precision 5530

Edge Gateway 5000 and Movidius

Note that the floating point precision differs when running the benchmark on the CPU vs. the NCS2, so it’s not completely apples to apples. This is because the NCS2 only supports half precision (FP16) whereas the CPU only supports full, or normal, precision (FP32). This probably doesn’t make a huge difference when doing inferencing, which is the only thing the NCS2 is likely to be doing in a real-world application. For learning however, the floating point precision may cause the algorithm to learn either garbage or nothing at all. This article summarizes the topic nicely for those interested: FP16 and FP32 difference for deep learning

Each platform was tested with CPU and with the NCS2. Three platforms x 2 tests = 6 results.

NOTE: I only ran these tests a few times, so please don’t consider it exhaustive. It would need to be run dozens of times for each and have the results balanced out to get more accurate readings. However, this is all I had time for and it’s at least an indicator of performance.

The NCS2 completely outperforms the Atom CPU on the Edge Gateway 5000, which is where it was tested initially. Further testing shows that it’s more or less equal to a Gen4 Intel i7 but falls behind when compared to a Gen8 Intel i7 CPU.

This is expected of course. The NCS2 isn’t a very expensive device at $87.99 USD (Amazon.com at the time of writing), so it’s a pretty cheap way to add Machine Learning power to devices which have lower-end CPUs, like IoT edge gateways.

There are slight differences in results when the NCS2 is running on less powerful platforms vs. newer machines. This indicates that there are more factors that play into the results than the NCS2 itself, like the type of CPU, memory and storage on the platform the NCS2 is plugged into.

From the results it’s clear that the Movidius NCS2 can’t compete with a modern i7 CPU, but of course it’s a lot cheaper and supposedly draws a lot less power. That makes it ideal for edge devices where limiting power consumption is desired.

Practical considerations

For those who may be interested in getting one of these I’d like to point out a few things.

1. USB speeds

The NCS2 changes speeds when an app uses it for execution of a neural network. More importantly, when this happens the OS believes that the original USB 2.0 device has been removed and is being replaced with a new USB 3.0 device. This is reversed when code execution finishes.

Movidius stick plugged in:

Feb 22 06:29:33 localhost kernel: [  396.100651] usb 1-1: new high-speed USB device number 11 using xhci_hcd
Feb 22 06:29:33 localhost kernel: [  396.230055] usb 1-1: New USB device found, idVendor=03e7, idProduct=2485
Feb 22 06:29:33 localhost kernel: [  396.230068] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Feb 22 06:29:33 localhost kernel: [  396.230077] usb 1-1: Product: Movidius MyriadX
Feb 22 06:29:33 localhost kernel: [  396.230084] usb 1-1: Manufacturer: Movidius Ltd.
Feb 22 06:29:33 localhost kernel: [  396.230091] usb 1-1: SerialNumber: 03e72485

Benchmark_app run starting

Feb 22 06:30:19 localhost kernel: [  442.564640] usb 1-1: USB disconnect, device number 11
Feb 22 06:30:20 localhost kernel: [  442.993334] usb 1-1: new high-speed USB device number 12 using xhci_hcd
Feb 22 06:30:20 localhost kernel: [  443.122975] usb 1-1: New USB device found, idVendor=03e7, idProduct=f63b
Feb 22 06:30:20 localhost kernel: [  443.122989] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Feb 22 06:30:20 localhost kernel: [  443.122997] usb 1-1: Product: VSC Loopback Device
Feb 22 06:30:20 localhost kernel: [  443.123005] usb 1-1: Manufacturer: Intel Corporation
Feb 22 06:30:20 localhost kernel: [  443.123012] usb 1-1: SerialNumber: 00000000000000000

Benchmark_app run finished

Feb 22 06:31:28 localhost kernel: [  511.851893] usb 1-1: USB disconnect, device number 12
Feb 22 06:31:29 localhost kernel: [  512.126213] usb 1-1: new high-speed USB device number 13 using xhci_hcd
Feb 22 06:31:29 localhost kernel: [  512.255008] usb 1-1: New USB device found, idVendor=03e7, idProduct=2485
Feb 22 06:31:29 localhost kernel: [  512.255020] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Feb 22 06:31:29 localhost kernel: [  512.255027] usb 1-1: Product: Movidius MyriadX
Feb 22 06:31:29 localhost kernel: [  512.255034] usb 1-1: Manufacturer: Movidius Ltd.
Feb 22 06:31:29 localhost kernel: [  512.255040] usb 1-1: SerialNumber: 03e72485

2. The NCSDK and NCSDK2 can’t be used

There are two versions of the NCSDK available. Both of these are for the original NCS and won’t work with the NCS2. This is clearly stated on the Intel webpage but if you’re like me you may miss it and make the assumption that the NCSDK2 is for the NCS2 stick. I wasted a fair bit of time on this before realizing it wasn’t working by design.

Instead use the Intel distribution of OpenVINO which is available here.

3. Can it be run in a container?

Yes, but there are no pre-written Dockerfiles for the NCS2 as there were for the original NCS. The NCSDK2 contains a Dockerfile but it won’t work since it’s for a different version of the neural compute stick (see the NCSDK note above).

However, it’s not hard to build a container with OpenVINO and run that. I have verified that this works. In fact, there’s an excellent Dockerfile by Mateo Guzman available here and I’ve forked it here since I wanted to make some modifications to it. Feel free to use either of them.

Additionally, if you’re impatient or short on time, I’ve made a pre-built docker image on Docker Hub which can be accessed here.

NOTE: The container has to be run in privileged mode. This is due to the NCS2 device ID changing and the USB device being re-enumerated the moment code is loaded onto it (see the USB speed changes note above). The NCSDK2 has a way to change the mode of the original NCS so it can be run in non-privileged mode but this won’t work on the NCS2. I don’t know of a workaround so far.
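As a minimal sketch, running an OpenVINO container with access to the NCS2 would look something like the below. The image name is just a placeholder; the important parts are --privileged and mapping /dev so the container can follow the device as it re-enumerates.

docker run --privileged -v /dev:/dev -it my-openvino-ncs2-image /bin/bash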

I’ll write a post on usage shortly which will contain more detail on how to download and optimize the models for the demo apps as well as how to run the sample apps themselves.

4. Can it be used on a Raspberry Pi?

Yes, it appears so although I haven’t tested it on the Pi yet. Intel has instructions for installing OpenVINO on the Pi here. I may post a Dockerfile or Docker image for the Pi when I’ve had a chance to test it.

5. What about heat generation and cooling?

The NCS2 is passively cooled and actually is its own heatsink. It’s made of metal and the “fins” on both sides of it allow enough airflow to keep it cool. It does get warm during testing but so far not extremely so.

Intel Movidius NCS2 heatsink

6. Size

It’s a bit wide, which can make it difficult to plug in at times. It also risks covering other USB ports or, as in my case, the power inlet for my laptop. A USB extension cable or powered USB hub could easily mitigate this of course.

Conclusion

The Intel Movidius Neural Compute Stick 2 does seem like a valid option for running inferencing at the edge. While it’s not as powerful as a full-on GPU nor a modern CPU, it has the potential to excel in the niche of low-power edge devices like IoT gateways where the onboard CPU isn’t powerful enough to do inferencing on its own.

Intel Movidius 2: Neural Compute stick for machine learning

Intel Movidius Neural Compute stick 2 – Machine Learning and Deep Learning inferencing in USB format(!)

Machine learning is now available on a stick! Well, not a normal stick from the forest but a USB stick. Currently testing the Intel Movidius 2 USB stick with our IoT edge devices. This thing is amazing! Now it’s possible to run neural networks with TensorFlow and Caffe without needing a GPU in the IoT device itself.

Why is this awesome? Because it removes the need for an expensive backhaul from cameras and other edge devices to the core DC / Cloud. Instead of sending raw video across the network, just send the metadata resulting from analyzing the data on the IoT device itself.

This fits nicely in the Edge Computing arena and will be a great enhancement to essentially any edge device with a USB port. The intelligent edge is here!

Bluetooth on Ubuntu 18.04 server (used as desktop) on Dell Precision 5530

First install required packages:

sudo apt install bluetooth bluez bluez-tools rfkill pulseaudio-module-bluetooth

Reboot

Use bluetoothctl to scan, pair and connect to BT devices:

jonas@octo:~$ bluetoothctl 
[NEW] Controller 48:F1:7F:56:BB:6A octo [default]
Agent registered
[bluetooth]# agent on

Scan

[bluetooth]# scan on
Discovery started
[CHG] Controller 48:F1:7F:56:BB:6A Discovering: yes
[NEW] Device FC:A8:9A:EF:A6:E4 Swedish flag

Pair

[bluetooth]# pair FC:A8:9A:EF:A6:E4
Attempting to pair with FC:A8:9A:EF:A6:E4
[CHG] Device FC:A8:9A:EF:A6:E4 Connected: yes
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 00001101-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 00001108-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 00001200-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 UUIDs: 00001801-0000-1000-8000-00805f9b34fb
[CHG] Device FC:A8:9A:EF:A6:E4 ServicesResolved: yes
[CHG] Device FC:A8:9A:EF:A6:E4 Paired: yes
Pairing successful
[CHG] Device FC:A8:9A:EF:A6:E4 ServicesResolved: no
[CHG] Device FC:A8:9A:EF:A6:E4 Connected: no

List paired devices:

[bluetooth]# devices
Device FC:A8:9A:EF:A6:E4 Swedish flag
[CHG] Device FC:A8:9A:EF:A6:E4 RSSI: -53
[CHG] Device FC:A8:9A:EF:A6:E4 ManufacturerData Key: 0x0057
[CHG] Device FC:A8:9A:EF:A6:E4 ManufacturerData Value:
  53 00 23 00 11 00 00                             S.#....         
[CHG] Device FC:A8:9A:EF:A6:E4 Appearance is nil

Connect to BT device:

[bluetooth]# connect FC:A8:9A:EF:A6:E4
Attempting to connect to FC:A8:9A:EF:A6:E4
[CHG] Device FC:A8:9A:EF:A6:E4 Connected: yes
Connection successful
[CHG] Device FC:A8:9A:EF:A6:E4 ServicesResolved: yes

Stop the device scan:

[Swedish flag]# scan off
Discovery stopped
[CHG] Controller 48:F1:7F:56:BB:6A Discovering: no
[CHG] Device FC:A8:9A:EF:A6:E4 TxPower is nil
[CHG] Device FC:A8:9A:EF:A6:E4 RSSI is nil

Exit bluetoothctl:

[Swedish flag]# exit
Agent unregistered
[DEL] Controller 48:F1:7F:56:BB:6A octo [default]

VBoxManage: error: Configuration error: Failed to get the “MAC” value (VERR_CFGM_VALUE_NOT_FOUND)

A VirtualBox VM failed to start with the error above.

The reason was port forwarding settings that had been configured for one of the VM’s NICs.

Removing these settings can be done by using the same commands but without values:
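The exact rules aren’t shown here, but port forwarding configured via “VBoxManage setextradata” looks something like the below, and issuing the same command without a value removes the key again. The VM name, rule name, ports and NIC driver are examples only.

# Settings of this kind had been added:
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/Protocol" TCP
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/HostPort" 2222
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/GuestPort" 22

# Removing them again - same commands, no values:
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/Protocol"
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/HostPort"
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/GuestPort"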

Building a Kubernetes (k8s) cluster for the home lab

Introduction:

Building a Kubernetes cluster for the lab at home or at work doesn’t have to be complicated. Below are the steps to create a 3-node cluster (1 master + 2 workers).

Prerequisites:

Install three copies of Ubuntu 18.04. I used VirtualBox + Vagrant (and the image “ubuntu/bionic64”) to create mine. My nodes are named as follows:

  • k8s-c3-n1 (ip: 192.168.11.101)
  • k8s-c3-n2 (ip: 192.168.11.102)
  • k8s-c3-n3 (ip: 192.168.11.103)

Be sure to disable swap. Kubernetes won’t work if it’s enabled. This can be done by commenting out or removing the entry for swap in /etc/fstab followed by a reboot. Temporarily it can be turned off with “swapoff -a”.
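A quick way to do both on each node (the sed pattern assumes the swap entry in /etc/fstab contains the word “swap”):

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab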

Installing Kubernetes:

On all three nodes, download the Google apt key and add it:
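That step would look roughly like this (this was the signing key location used by the Kubernetes apt repository at the time):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -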

Install kubeadm – also on all three nodes:
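A sketch of this step, assuming the Kubernetes apt repository still needs to be added (the xenial repo was also the one used for Ubuntu 18.04 at the time):

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubeadm kubelet kubectl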

Installing Docker:

We also need Docker on all three nodes. Install it and enable the service as follows:
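For example, using the docker.io package that ships with Ubuntu 18.04:

sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker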

Configuring the master node:

On the master (k8s-c3-n1 in this case), enable the kubelet service:
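That is simply:

sudo systemctl enable kubelet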

Initialize the cluster:

Now let’s initialize the cluster with “kubeadm init”. This is done on the master node. Note the two parameters we need to provide (a sketch of the full command follows the list below):

  • The API server advertise address: If your hosts have multiple network cards, specify which IP address the API server should bind to. Make sure the nodes are all able to communicate on the network chosen here.
  • The pod network CIDR: The network you wish the pods to utilize. I chose 10.244.0.0/16 for my network, which is also the default expected by the Flannel pod network we’ll add later.
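
With the values from my lab nodes above, the full command looks like this (adjust the advertise address to your own master node’s IP):

sudo kubeadm init --apiserver-advertise-address=192.168.11.101 --pod-network-cidr=10.244.0.0/16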

Running “kubeadm init” will produce a fair amount of output. Make note of the “kubeadm join” string as it’s unique to your installation and we’ll need it when registering the worker nodes.

Let’s finalize by running the commands suggested at the end of the “kubeadm init” output to set up kubectl access. This is done on the master node as well.
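These are the commands kubeadm prints at the end of a successful init:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config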

Congratulations – Kubernetes is installed! However, there is still a little bit of work to be done. First installing a pod network and then adding our two worker nodes.

Adding the pod network:

We’ll use Flannel for this example. There are other network solutions available too, but this worked well for me in my lab, and it’s both quick and easy:
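Applying the Flannel manifest from the master node (this was the manifest URL at the time of writing):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml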

Joining the worker nodes:

Finally, let’s join our two workers to the master node by executing the “kubeadm join” string which was provided by our “kubeadm init” earlier.

On each of the two worker nodes, execute the “kubeadm join” string that is unique to your installation. It looks something like the below:
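Using the master address from this lab and placeholder token / hash values:

sudo kubeadm join 192.168.11.101:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>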

Let’s see if the three nodes are visible. On the master node, execute “kubectl get nodes” and check that all three nodes are listed.

It may take around 30 seconds to a minute for the workers to reach “Ready” status, but that’s it – Kubernetes is installed and ready.

We can now deploy containers / pods and they will be scheduled to run on the worker nodes. Note: There’s no load balancer or other fancy stuff installed by default so it’s pretty bare-bones 🙂

Deploy a test application:

Let’s verify that our cluster works by deploying a container with a web server.

On the master node (“k8s-c3-n1” in this example), deploy the “httpd” webserver using “kubectl run”. I picked the name “httpd-01” for this pod. This is arbitrary so feel free to use any name that makes sense in your installation.
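A sketch of that command (depending on the Kubernetes version, kubectl run creates either a Deployment or a bare pod behind the scenes):

kubectl run httpd-01 --image=httpd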

We can now check that it’s running with “kubectl get pods”:

Since this is a web server we want to expose port 80 so it can be accessed. This creates a service which has to be named. I picked the name “httpd-01-http”, but choose any name that makes sense in your installation. Note that we’re referring to the name we gave our pod at deployment: “httpd-01”.
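Something like the below, assuming kubectl run created a bare pod (if it created a Deployment, expose the deployment instead):

kubectl expose pod httpd-01 --port=80 --name=httpd-01-http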

Let’s find out more about our web server application / pod by using “kubectl get pods” and then “kubectl describe pod <pod id>”:
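For example (the pod name suffix is whatever your own deployment generated):

kubectl get pods
kubectl describe pod httpd-01-<pod-id>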


Among a lot of other information we can see it’s running on worker node “k8s-c3-n2”.

Let’s also get some information about the service we got when exposing port 80 earlier:
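For example:

kubectl describe service httpd-01-http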

Here we can see the endpoint IP and port: “10.244.1.2:80”

Since we know it’s running on worker node “k8s-c3-n2”, let’s SSH there and verify that we can get the default webpage:
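From that node the pod’s endpoint can be reached directly, for example with curl:

curl http://10.244.1.2:80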

As shown in the output: “It works!”.

That goes both for the httpd container and for the Kubernetes cluster. Have fun!

Tensorflow item recognition

Leveraging Google’s TensorFlow Machine Learning libraries for item recognition in images is fantastically easy to get going. The below Dockerfile will set up a container with everything required and allow the user to feed in the URL of an image for classification:

Dockerfile:
Download raw from here: https://pastebin.com/raw/mdJ225vp

Save the above into a file called “Dockerfile”.
Enter the directory where the Dockerfile is saved and build the Docker image:
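A sketch of the build step (the image tag here is an arbitrary example):

docker build -t tensorflow-classifier .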

Verify the Docker image:
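For example:

docker images | grep tensorflow-classifier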

Run the image. We’ll expose SSH on port 22 on the container as 2222 on the host:
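Using the example image tag from the build step above:

docker run -d -p 2222:22 tensorflow-classifier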

Verify the local Docker gateway IP using the container ID (81f13360885f in this case – use “docker ps” to find out):
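One way to do this, assuming the container runs on the default bridge network:

docker inspect -f '{{ .NetworkSettings.Gateway }}' 81f13360885f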

SSH and execute the image classification script (password: “tensorflow”):
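Roughly as follows; substitute the gateway IP from the previous step. The classification script itself comes from the Dockerfile above, so its name and arguments may differ:

ssh -p 2222 root@<gateway IP>
# then run the classification script inside the container, passing it an image URL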

This is the image we’ve pulled down:

And this is the classification result:

Not too bad 🙂 TensorFlow accurately detects that the image contains a scooter, a crash helmet and even sees the disc brake on the scooter! Try any image URL to see what TensorFlow will classify your image as. Have fun!

Ubuntu 18.04.1 – Change hostname

While I’d normally use “hostnamectl set-hostname <new-name>” to modify the hostname of a Linux box, that doesn’t work for Ubuntu 18.04. The hostname will remain unchanged. Instead, modify as follows.

In /etc/cloud/cloud.cfg, modify “preserve_hostname” from “false” to “true”:

vi /etc/cloud/cloud.cfg

Modify hostname to the value you want in /etc/hostname:

vi /etc/hostname
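For reference, the same two edits can be made non-interactively (“new-hostname” is a placeholder):

sudo sed -i 's/preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
echo "new-hostname" | sudo tee /etc/hostname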

Reboot

New iDRAC Ansible module: Version 1.1 released

The recently released version 1.1 adds streaming Server Configuration File (SCP) support, enhanced RAID creation and many other goodies! See the release notes here for details: Dell EMC Ansible modules version 1.1

Below are some installation instructions (in particular for those who have been using the original Ansible modules).

System used:
CentOS 7.5

Get the new Ansible modules for iDRAC off Github:

Get the Dell EMC OpenManage Python SDK off Github:

Remove some packages or we will run into errors during the SDK install:
NOTE: This will uninstall Ansible. Backup your /etc/ansible/hosts file prior to Ansible removal!

Install the Dell EMC OpenManage Python SDK prerequisites:

Reinstall Ansible:

Install wheel:
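For example:

pip install wheel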

Build .whl file:
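From inside the SDK directory cloned earlier, something like:

python setup.py sdist bdist_wheel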

Install the newly built module:
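Again just a sketch; the exact wheel filename depends on the SDK version that was built:

pip install dist/omsdk-*.whl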

Install the new Dell EMC Ansible modules for iDRAC:

All done! The new Ansible modules are installed.

Modifying /etc/ansible/hosts:
The previous version of the Dell EMC Ansible modules for iDRAC required the following format:

The new modules require some different variables:

Trying it out:

Working perfectly 🙂

Arduino powered LED sign

Using an Arduino UNO and 8 strips of NeoPixel WS2812 LEDs (60 LEDs/m) I was able to recreate the awesome LED sign created by Josh here: https://youtu.be/k-SYMPO8-f8

Needed some text to test it and picked the cf push haiku by Onsi Fakhouri:

“here is my source code
run it on the cloud for me
i do not care how”