Postman calls for EdgeX Foundry

Viewing data, creating rules, and managing export topics in EdgeX Foundry can easily be done in Postman. I’ve put together a small collection of REST calls which may be useful for those who are starting out with EdgeX Foundry and want to use Postman for API interaction.

The API calls are based on the contents of this API walkthrough:

In case the below “Run in Postman” link doesn’t work, the collection can be downloaded and imported from here: link

Ansible with Dell PowerEdge servers

Automate everything and have more time left for coffee and ridiculously-sized donuts! PowerEdge servers and Ansible automation are a match made in silicon heaven (just ask Kryten!) Included are six videos covering everything from the ground up.

Installation steps for Ansible

To be used with the first video: the installation steps for Ansible, as well as the OpenManage modules for PowerEdge, can be downloaded from here: link
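As a taste of what the videos build toward, here is a hedged sketch of the shape of a playbook using the Dell OpenManage collection. The iDRAC address is an assumption for a lab setup, and the file is only written and inspected here, not run against real hardware.

```shell
# Write a minimal example playbook using the dellemc.openmanage collection.
# idrac_ip is a made-up lab address; "calvin" is the iDRAC factory default.
cat > /tmp/poweredge-demo.yml <<'EOF'
---
- name: Query PowerEdge system inventory via iDRAC
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get system inventory
      dellemc.openmanage.idrac_system_info:
        idrac_ip: "192.168.0.120"
        idrac_user: "root"
        idrac_password: "calvin"
EOF
# Confirm the module reference made it into the file
grep -c idrac_system_info /tmp/poweredge-demo.yml
```

With the collection installed (`ansible-galaxy collection install dellemc.openmanage`), a playbook like this would be run with `ansible-playbook /tmp/poweredge-demo.yml`.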

dnsmasq: failed to create listening socket for port 53: Address already in use

Ubuntu 19.10: systemd-resolved holds port 53, thereby preventing any service that needs it (like dnsmasq) from starting

 Jan 29 03:31:58 ubuntupxe02 dnsmasq[2386]: dnsmasq: failed to create listening socket for port 53: Address already in use
 Jan 29 03:31:58 ubuntupxe02 dnsmasq[2386]: failed to create listening socket for port 53: Address already in use
 Jan 29 03:31:58 ubuntupxe02 dnsmasq[2386]: FAILED to start up
 Jan 29 03:31:58 ubuntupxe02 systemd[1]: dnsmasq.service: Control process exited, code=exited, status=2/INVALIDARGUMENT
 Jan 29 03:31:58 ubuntupxe02 systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
 Jan 29 03:31:58 ubuntupxe02 systemd[1]: Failed to start dnsmasq - A lightweight DHCP and caching DNS server.

Verify that the port is being used by systemd-resolved

jonas@ubuntupxe02:~$ sudo lsof -i -P -n | grep LIST
 systemd-r  784 systemd-resolve   13u  IPv4  19378      0t0  TCP 127.0.0.53:53 (LISTEN)
 sshd       859            root    3u  IPv4  23918      0t0  TCP *:22 (LISTEN)
 sshd       859            root    4u  IPv6  23920      0t0  TCP *:22 (LISTEN)
 apache2   1705            root    4u  IPv6  27900      0t0  TCP *:80 (LISTEN)
 apache2   1706        www-data    4u  IPv6  27900      0t0  TCP *:80 (LISTEN)
 apache2   1707        www-data    4u  IPv6  27900      0t0  TCP *:80 (LISTEN)

Stop the service

jonas@ubuntupxe02:~$ sudo systemctl stop systemd-resolved

Edit the systemd-resolved config file

 jonas@ubuntupxe02:~$ sudo vi /etc/systemd/resolved.conf
 jonas@ubuntupxe02:~$ cat !$ | grep DNS
 cat /etc/systemd/resolved.conf | grep DNS
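The setting to change in resolved.conf is the one that controls the local stub listener on port 53. A minimal sketch of what the edited file should contain, demonstrated on a temp copy so it is safe to run (the DNS server address is just an example — use your own):

```shell
# Sample resolved.conf after editing: DNSStubListener=no frees up port 53,
# and DNS= supplies an upstream resolver since the stub is gone.
cat > /tmp/resolved.conf.sample <<'EOF'
[Resolve]
DNS=8.8.8.8
DNSStubListener=no
EOF
# Same check as above, against the sample file
grep DNS /tmp/resolved.conf.sample
```

Apply the same two lines to the real /etc/systemd/resolved.conf before restarting the service.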

Create symlink to /etc/resolv.conf

jonas@ubuntupxe02:~$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
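With the stub listener disabled, /etc/resolv.conf needs to point at the full resolver file instead of the stub one. The same `ln -sf` pattern can be tried on temp paths first, to see what the link should resolve to before touching /etc/resolv.conf:

```shell
# Demonstrate the symlink swap on throwaway paths
mkdir -p /tmp/linkdemo
echo "nameserver 1.1.1.1" > /tmp/linkdemo/resolv.conf
ln -sf /tmp/linkdemo/resolv.conf /tmp/linkdemo/etc-resolv.conf
# readlink shows where the symlink points
readlink /tmp/linkdemo/etc-resolv.conf
```

Running `readlink /etc/resolv.conf` after the real command should likewise show /run/systemd/resolve/resolv.conf.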

Start systemd-resolved service

jonas@ubuntupxe02:~$ sudo systemctl start systemd-resolved

Start dnsmasq

jonas@ubuntupxe02:~$ sudo systemctl start dnsmasq
jonas@ubuntupxe02:~$ sudo systemctl status dnsmasq
 ● dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
    Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled; vendor preset: enabled)
    Active: active (running) since Wed 2020-01-29 03:56:12 UTC; 6s ago
   Process: 1312 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
   Process: 1314 ExecStart=/etc/init.d/dnsmasq systemd-exec (code=exited, status=0/SUCCESS)

Verify that dnsmasq now owns port 53

jonas@ubuntupxe02:~$ sudo lsof -i -P -n | grep LISTEN
 sshd     823     root    3u  IPv4  23016      0t0  TCP *:22 (LISTEN)
 sshd     823     root    4u  IPv6  23018      0t0  TCP *:22 (LISTEN)
 apache2  874     root    4u  IPv6  22454      0t0  TCP *:80 (LISTEN)
 apache2  875 www-data    4u  IPv6  22454      0t0  TCP *:80 (LISTEN)
 apache2  876 www-data    4u  IPv6  22454      0t0  TCP *:80 (LISTEN)
 dnsmasq 1331  dnsmasq    5u  IPv4  28097      0t0  TCP *:53 (LISTEN)
 dnsmasq 1331  dnsmasq    7u  IPv6  28099      0t0  TCP *:53 (LISTEN)

Kudos to Nitin Gurbani for the solution

Kubernetes home lab: Upgraded edition with functional LoadBalancer and external access to pods

In a previous post we covered deploying a home k8s lab, but this post shows a much better way to do it and improves on the end result: a fully functional local cluster.

The installation is done using Vagrant with Flannel networking and MetalLB for load balancing.
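For reference, this is roughly what a MetalLB layer-2 address pool looks like in the ConfigMap style used by the v0.x releases. The address range is an assumption — match it to the host-only network your Vagrant VMs sit on. The file is only written and inspected here, not applied to a cluster:

```shell
# Example MetalLB layer-2 config; the 192.168.50.x range is a placeholder
# for whatever subnet your Vagrant host-only network uses.
cat > /tmp/metallb-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.50.240-192.168.50.250
EOF
grep 'protocol' /tmp/metallb-config.yaml
```

Once the cluster is up, a config like this would be applied with `kubectl apply -f`, after which Services of type LoadBalancer get an IP from the pool.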

Commands for the session can be downloaded from here:


Why go through the trouble of setting up a home lab for k8s? Well, while using a public cloud service is quick and easy, it costs money to deploy and run, and it relies on predefined templates which someone else has already created. Doing it locally is not only more economical, it also gives more insight into the internal workings and how a cluster is actually set up.

Why not use Minikube? Because it’s overly simplified. A cluster deployment like this is not only a better learning and testing environment, it also comes much closer to a “real” k8s installation.

Configuration files

Download the Vagrant, Flannel and MetalLB files from GitHub, or clone the repo with Git:

git clone

Video: Editing the config files and standing up the cluster

Video: Getting started using the new K8s cluster

Enjoy your new Kubernetes powers!

Ubuntu: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY

Error when adding repo to install kubectl

jonas@octo:~$ sudo apt-add-repository "deb kubernetes-xenial main"
Hit:1 bionic InRelease
Get:2  InRelease [1,106 B]                                                                                                  
Get:3  InRelease [1,103 B]                                                                                             
Hit:4 bionic-updates InRelease                                                                                                                         
Hit:5 bionic InRelease                                                                                                                            
Get:6  InRelease [1,096 B]                                                                                                        
Hit:7 bionic-backports InRelease                                                                                                                       
Hit:9 bionic-security InRelease                                                                                                                 
Get:8 kubernetes-xenial InRelease [8,993 B]                                                                   
Err:8 kubernetes-xenial InRelease                              
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Hit:10 stable InRelease                             
Reading package lists… Done                                        
W: GPG error: kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
E: The repository ' kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
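When scripting around this error, the missing key ID can be pulled straight out of the apt output with sed. A small sketch using the error line from above (the surrounding text is copied verbatim; only the sed expression is new):

```shell
# Extract the hex key ID following NO_PUBKEY from an apt GPG error line
err="W: GPG error: kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB"
keyid=$(echo "$err" | sed -n 's/.*NO_PUBKEY \([0-9A-F]*\).*/\1/p')
echo "$keyid"
```

The extracted ID is exactly what gets passed to `apt-key adv --recv-keys` in the fix below.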

Solved by adding the missing key as follows:

jonas@octo:~$ sudo apt-key adv --keyserver --recv-keys 6A030B21BA07F4FB

Executing: /tmp/apt-key-gpghome.JKpxFjtwsU/ --keyserver --recv-keys 6A030B21BA07F4FB
gpg: key 6A030B21BA07F4FB: public key "Google Cloud Packages Automatic Signing Key" imported
gpg: Total number processed: 1
gpg:               imported: 1