Ubuntu 19.10: systemd-resolved occupies port 53, thereby preventing any other service that needs port 53 (such as dnsmasq) from starting
Jan 29 03:31:58 ubuntupxe02 dnsmasq[2386]: dnsmasq: failed to create listening socket for port 53: Address already in use
Jan 29 03:31:58 ubuntupxe02 dnsmasq[2386]: failed to create listening socket for port 53: Address already in use
Jan 29 03:31:58 ubuntupxe02 dnsmasq[2386]: FAILED to start up
Jan 29 03:31:58 ubuntupxe02 systemd[1]: dnsmasq.service: Control process exited, code=exited, status=2/INVALIDARGUMENT
Jan 29 03:31:58 ubuntupxe02 systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
Jan 29 03:31:58 ubuntupxe02 systemd[1]: Failed to start dnsmasq - A lightweight DHCP and caching DNS server.
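A common workaround (a sketch, assuming you want dnsmasq rather than systemd-resolved answering on port 53) is to disable the systemd-resolved stub listener so that the port is freed up:

# Turn off the stub listener that binds 127.0.0.53:53 (assumes the commented default line is present)
sudo sed -i 's/#DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved

# Point /etc/resolv.conf at the full resolver config instead of the stub resolver
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# dnsmasq should now be able to bind port 53
sudo systemctl restart dnsmasq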
In a previous post we covered the deployment of a home k8s lab; this post shows a much better way to do it and improves on the end result: a fully functional local cluster.
The installation is done using Vagrant with Flannel networking and MetalLB for load balancing.
Why go through the trouble of setting up a home lab for k8s? Using a public cloud service is quick and easy, but it costs money to deploy and run, and it typically relies on predefined templates (CloudFormation and the like) that someone else has already created. Doing it locally is more economical and gives far more insight into the internal workings of k8s and how it is actually set up.
Why not use Minikube? Because it is overly simplified. A cluster deployment like this one is not only a better learning and testing environment, it also comes much closer to a “real” k8s installation.
Configuration files
Download the Vagrant, Flannel and MetalLB files from GitHub or clone with Git
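A rough sketch of the workflow (the repository URL and VM name below are placeholders for whatever the actual repo uses):

# Clone the repository containing the Vagrantfile and the Flannel/MetalLB manifests
git clone https://github.com/<user>/<k8s-vagrant-lab>.git   # placeholder URL
cd <k8s-vagrant-lab>

# Bring up the VMs defined in the Vagrantfile (master + workers)
vagrant up

# Log in to the control-plane VM and verify that all nodes have joined and are Ready
vagrant ssh master        # "master" is whatever the Vagrantfile names the control-plane VM
kubectl get nodes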
jonas@octo:~$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Hit:1 http://jp.archive.ubuntu.com/ubuntu bionic InRelease
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64 InRelease [1,106 B]
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64 InRelease [1,103 B]
Hit:4 http://jp.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 https://download.docker.com/linux/ubuntu bionic InRelease
Get:6 https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64 InRelease [1,096 B]
Hit:7 http://jp.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:9 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Err:8 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Hit:10 https://cf-cli-debian-repo.s3.amazonaws.com stable InRelease
Reading package lists… Done
W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
E: The repository 'http://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
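The usual fix is to import the missing signing key and run apt update again. A sketch, assuming the key is still published at Google's documented location (otherwise fetch the key ID 6A030B21BA07F4FB from a keyserver with apt-key adv --recv-keys):

# Import the Kubernetes apt repository signing key
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# Refresh the package lists; the NO_PUBKEY error should now be gone
sudo apt update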
Playing around with AWS Lambda, Rekognition, Polly, DynamoDB, Lex, S3, etc. to create a system for deploying Docker containers by talking to a Raspberry Pi. The containers are deployed locally on a PC running the “p4docker” service, while the other two services (p4security and p4voiceui) run on the Raspberry Pi.
It’s extremely quick to get started with EdgeX Foundry: less than 5 minutes, including installing Docker and docker-compose (provided you have a reasonable internet connection).
Note: This is for the Edinburgh 1.0.1 release. Other releases can be downloaded from here: link
vagrant@EdgeXblog:~$ sudo apt install apt-transport-https ca-certificates curl software-properties-common
Reading package lists… Done
Building dependency tree
Reading state information… Done
ca-certificates is already the newest version (20180409).
...
vagrant@EdgeXblog:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
vagrant@EdgeXblog:~$ sudo apt install docker-ce
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
...
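docker-compose itself also needs to be installed, and the Edinburgh compose file has to be saved in the working directory before it can be brought up. A sketch (the compose file comes from the release page linked above; the file name is simply what the next step assumes):

# Install docker-compose from the Ubuntu repositories
sudo apt install docker-compose

# Save the Edinburgh 1.0.1 compose file from the release page as ./docker-compose.yml
# before running "docker-compose up" in the same directory.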
vagrant@EdgeXblog:~$ sudo docker-compose up -d
Creating network "vagrant_edgex-network" with driver "bridge"
Creating network "vagrant_default" with the default driver
Creating volume "vagrant_db-data" with default driver
Creating volume "vagrant_log-data" with default driver
Creating volume "vagrant_consul-config" with default driver
Creating volume "vagrant_consul-data" with default driver
...
List containers and ports
vagrant@EdgeXblog:~$ sudo docker-compose ps
Name Command State Ports
edgex-config-seed /edgex/cmd/config-seed/con … Exit 0
edgex-core-command /core-command --registry - … Up 0.0.0.0:48082->48082/tcp
edgex-core-consul docker-entrypoint.sh agent … Up 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp,
0.0.0.0:8400->8400/tcp, 0.0.0.0:8500->8500/tcp,
0.0.0.0:8600->8600/tcp, 8600/udp
edgex-core-data /core-data --registry --pr … Up 0.0.0.0:48080->48080/tcp, 0.0.0.0:5563->5563/tcp
edgex-core-metadata /core-metadata --registry … Up 0.0.0.0:48081->48081/tcp, 48082/tcp
edgex-device-virtual /device-virtual --profile= … Up 0.0.0.0:49990->49990/tcp
edgex-export-client /export-client --registry … Up 0.0.0.0:48071->48071/tcp
edgex-export-distro /export-distro --registry … Up 0.0.0.0:48070->48070/tcp, 0.0.0.0:5566->5566/tcp
edgex-files /bin/sh -c /usr/bin/tail - … Up
edgex-mongo docker-entrypoint.sh /bin/ … Up 0.0.0.0:27017->27017/tcp
edgex-support-logging /support-logging --registr … Up 0.0.0.0:48061->48061/tcp
edgex-support-notifications /support-notifications --r … Up 0.0.0.0:48060->48060/tcp
edgex-support-rulesengine /bin/sh -c java -jar -Djav … Up 0.0.0.0:48075->48075/tcp
edgex-support-scheduler /support-scheduler --regis … Up 0.0.0.0:48085->48085/tcp
edgex-sys-mgmt-agent /sys-mgmt-agent --registry … Up 0.0.0.0:48090->48090/tcp
edgex-ui-go ./edgex-ui-server Up 0.0.0.0:4000->4000/tcp
vagrant_portainer_1 /portainer -H unix:///var/ … Up 0.0.0.0:9000->9000/tcp
Access EdgeX Foundry
Either access it directly via the APIs or use the console on port 4000: “http://<ubuntu ip>:4000”.
Username: “admin”
Password: “admin”
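As an example of hitting the APIs directly, the standard EdgeX ping endpoint can be used as a quick health check (a sketch; replace <ubuntu ip> with the address of the host/VM and adjust the port for whichever service you want to poke):

# core-data and core-metadata should both answer with "pong"
curl http://<ubuntu ip>:48080/api/v1/ping
curl http://<ubuntu ip>:48081/api/v1/ping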
Shut down EdgeX Foundry
Not that you would ever want to, but just in case: stopping the EdgeX Foundry containers can be done as shown below. Make sure the command is executed in the same directory where the “docker-compose.yml” file is located.
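Either of the standard docker-compose commands should do it (a sketch: stop leaves the stopped containers in place, down also removes the containers and the networks that were created):

# Stop the containers but keep them around
sudo docker-compose stop

# Or stop and remove the containers and networks (volumes are kept unless -v is added)
sudo docker-compose down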