The environmental control system for Mr. Snuggles the ball python is now nearly finished. Throughout the last couple of weeks I’ve worked on learning Fusion 360 in order to create a new case. The previous one was done with Tinkercad and the one before that in FreeCAD. The first iteration was a Tupperware case, lol
The electronics didn’t end up as clean as I was hoping they would be, but it works and that’s always been rule number one for this project. There will likely be a version after this one, and for that it would be great to etch a proper circuit board rather than using universal PCBs as I have done up to now.
Currently I’m tuning the settings to dial in the right temperatures and humidity in the different parts of the enclosure. Grafana is a lifesaver as always 🙂
The next step may be to add a humidifier of some sort and hook it up to the humidity values. The biggest challenge in controlling the environment has been the humidity. This seems to be a common gripe among those who use glass terrariums and I can certainly see why. Right now a couple of towels on top of the mesh screen in the lid help keep things under control but it’s not an ideal solution as it also limits air flow.
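If that humidifier does get added, the control side could be as simple as a hysteresis band around the target humidity, along the lines of the hypothetical sketch below. The library, GPIO pins and thresholds are placeholders, not settled choices.

```python
# Hypothetical humidifier control with hysteresis – pins, thresholds and library are placeholders.
import Adafruit_DHT
import RPi.GPIO as GPIO

DHT22_PIN = 4                  # BCM pin for the DHT22 data line (placeholder)
HUMIDIFIER_PIN = 27            # BCM pin driving the humidifier relay (placeholder)
RH_LOW, RH_HIGH = 55.0, 65.0   # % RH band: on below RH_LOW, off above RH_HIGH (placeholder)


def control_humidifier(currently_on: bool) -> bool:
    """Toggle the humidifier relay based on the DHT22 humidity reading."""
    humidity, _temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, DHT22_PIN)
    if humidity is None:
        return currently_on    # keep the last state on a failed read
    if humidity < RH_LOW:
        currently_on = True
    elif humidity > RH_HIGH:
        currently_on = False
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(HUMIDIFIER_PIN, GPIO.OUT)
    GPIO.output(HUMIDIFIER_PIN, GPIO.HIGH if currently_on else GPIO.LOW)
    return currently_on
```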
As part of revising the entire environmental control system for the ball python “Mr. Snuggles” I’ve redone, from scratch, the code for collecting sensor telemetry and controlling the relays for the heat mats and the heat lamp.
Last weekend I finished the updated wiring diagram and the soldering. This weekend was for the code update, which will still require some work until it’s done. However, the main modules for DS18B20 temperature telemetry, DHT22 humidity and temperature readings, as well as the actual relay control, are done. These files can now be accessed from GitHub as per the below.
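The sketch below is not the actual repo code, just a rough illustration of what those three pieces tend to look like on a Raspberry Pi. The library choices (Adafruit_DHT, RPi.GPIO), the GPIO pin numbers and the 1-Wire parsing are assumptions for illustration only.

```python
# Illustrative sketch only – not the repo modules. Pins and libraries are placeholders.
import glob

import Adafruit_DHT        # assumed DHT22 library
import RPi.GPIO as GPIO    # assumed GPIO/relay library

RELAY_PIN = 17             # BCM pin driving the heat mat relay (placeholder)
DHT22_PIN = 4              # BCM pin the DHT22 data line is wired to (placeholder)


def read_ds18b20() -> float:
    """Return the temperature in °C from the first DS18B20 on the 1-Wire bus."""
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device) as f:
        lines = f.readlines()
    # The second line ends with e.g. "t=25437", i.e. millidegrees Celsius
    return int(lines[1].split("t=")[1]) / 1000.0


def read_dht22():
    """Return (humidity %, temperature °C) from the DHT22, or (None, None) on a failed read."""
    return Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, DHT22_PIN)


def set_relay(on: bool) -> None:
    """Energize or release the relay channel (many relay boards are active-low, so swap if needed)."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN, GPIO.OUT)
    GPIO.output(RELAY_PIN, GPIO.HIGH if on else GPIO.LOW)
```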
The next step is to integrate Grafana with buttons for controlling the temperature. This previously required re-building the Docker images and was rather inconvenient. Soon it will be as simple as tapping a touch panel. That part will require creating an API which Grafana can interact with, and will likely be done next weekend if time allows.
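To give an idea of the direction, a minimal sketch of such an API could look like the below, assuming Flask. The endpoint name, the zones and the port are made up for illustration; the real API may end up looking quite different.

```python
# Hypothetical setpoint API for Grafana buttons to call – endpoint names and zones are made up.
from flask import Flask, jsonify, request

app = Flask(__name__)
setpoints = {"warm_side_c": 31.0, "cool_side_c": 26.0}  # placeholder defaults


@app.route("/setpoint/<zone>", methods=["GET", "POST"])
def setpoint(zone):
    if zone not in setpoints:
        return jsonify(error="unknown zone"), 404
    if request.method == "POST":
        # e.g. POST {"value": 31.5} from a Grafana button panel or curl
        setpoints[zone] = float(request.json["value"])
    return jsonify(zone=zone, value=setpoints[zone])


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A Grafana button panel (or any HTTP client) could then POST a small JSON body to change a target temperature, and the relay control loop would pick up the new setpoint without any image rebuilds.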
Two years ago I built the “Snakeinator 2000” system for controlling temperature, humidity and lighting for my ball python “Mr. Snuggles”. Currently I’m re-working the wiring and updating the sensors. As part of this effort it made sense to finally document the wiring, as per the below. This will be supplemented with an update of the code base in the coming weeks.
Note: The 12V power supply / converter I’m using didn’t have a Fritzing model so the one included is a placeholder.
In spring of 2021 I wanted a proper VMware lab setup at home. The primary reason was, and still is, having an environment in which to learn and experiment with the latest VMware and AWS solutions. I strongly believe that actual hands-on experience is the gateway to real knowledge, no matter how well the documentation may be written.
To that end I went about listing what would be needed to make this dream of a home lab come true. The lack of space meant that the setup would end up in my bedroom and therefore needed to be quiet. That removed most 2nd hand enterprise servers from the list. Possibly with the exception of the VRTX chassis from Dell, which I would still REALLY want for a home lab, but it’s way too expensive – even 2nd hand.
As compatible with the VMware HCL as possible (as-is or via Flings)
Quiet (no enterprise servers)
Not too big (another nail in the coffin for full-depth 19″ servers)
Ability to run vSAN
Initially I considered the Intel NUCs and Skull / Ghost Canyon mini-PCs as these are very popular among home-lab enthusiasts. However, the 10Gbps requirement necessitated a PCIe slot and the models supporting this from Intel are very expensive.
The SuperMicro E300-9D was also on the list but they too tend to get expensive and a bit hard to get on short notice where I live.
Therefore, going with a custom build sounded more and more in line with what would work for this setup. In the end I settled on the below. The list contains all the parts used for the ESXi nodes, minus the network cards, which are listed separately in the networking section below.
The choice of mainboard came down to the onboard network chipset. It had to be possible to run the ESXi installer, and it won’t proceed if it can’t find a supported NIC. Initially I only had the onboard NIC and no 10Gbps cards. Unfortunately the release of vSphere 7.x restricted the hardware support significantly. This time I was going to do an AMD build, but most of their mainboards come with Realtek onboard NICs, which are no longer recognized by the ESXi installer. Another consideration was size and expansion options. An ITX form factor meant that the size of the PC case could be reduced while still having a PCIe slot for a 10Gbps NIC.
The Cooler Master H100 case has a single big fan which makes it pretty quiet. Its small size also makes it an ideal case for this small-footprint lab environment. It even comes with LEDs in the fan which are hooked up to the reset button on the case to switch between colors (or to turn it off completely).
Due to the onboard NIC support the build was restricted to an Intel CPU. Gen 11 had been released, but Gen 10 CPUs were still perfectly fine and could be had for less money. Obviously, there was no plan to add a discrete GPU, so the CPU also had to come with built-in graphics. The Core i5 10400 seemed to meet all criteria while having a good cost / performance balance.
The little ASRock H410M-ITX/ac mainboard supports up to 64 GB of RAM and I filled it up from the start. One can never have too much RAM. With three nodes we get a total of 192 GB, which will be sufficient for most tasks. Likely there will come a day when a single workload (looking at you, NSX-T!!) will require more. This is the only area which I feel could become a limitation soon. For that day I’ll likely have to add a box with more memory specifically to cover that workload.
A vSAN environment was one of the goals for the lab and with an NVMe PCIe SSD as the cache tier and a 2.5″ drive as the capacity tier this was accomplished. It was a bit scary ordering these parts without knowing if they would be recognized in vCenter as usable for vSAN, but in the end there was no issue at all. They were all recognized immediately and could be assigned to the vSAN storage pool.
For the actual ESXi install I was going to use a USB disk initially, but ended up re-using some old 2.5″ and 3.5″ spinning rust drives for the hypervisor install. These are not part of the cost calculation above as I just used whatever was lying around at home. The cost of these is negligible though.
To ensure vSAN performance and to support the 10Gbps internet router uplink, a 10Gbps managed switch was required. Copper 10Gbps ports get very expensive, so SFP+ was the way to go. Mikrotik has a good 8+1 port switch / router in their CRS309-1G-8S+IN model. In the end this was a good fit for the home lab because not only does it have 8x 10Gbps SFP+ ports, it is also fanless and the software supports several advanced features, like BGP.
I’m still happy with the choice 6 months later. It’s a great switch but it took a while to get used to it. Most of us probably come from a Cisco or Juniper background. The configuration for the Mikrotik is completely different and won’t be intuitive for the majority of users.
On the server side I wanted something which would be guaranteed to work with ESXi, so a 10Gbps card which is on the HCL was a must. Intel has a lot of cards on the list and their X520 series can be found pretty easily. In the end I got three X520-DP2 (dual port) cards and they have worked perfectly so far.
There is also a 1Gbps managed Dell X1026P switch to allow for additional networking options with NSX-T. With the Mikrotik 10Gbps switch in place, the Dell switch is more of an addition for corner cases. It does help when attaching other devices which don’t support 10Gbps though.
The Mikrotik has a permanent VPN connection to an AWS Transit Gateway and from there to various VPCs and sometimes the odd VMware Cloud on AWS SDDC.
Installation media etc.
These servers still require custom installation media to be created for the installation to work, primarily because of the onboard Intel networking and the USB network Fling. An explanation of how to create custom media can be found here.
vCenter is hosted on an NFS share from a separate server. This is done so it can sit on shared storage for the cluster while staying separate from the vSAN while the environment is being built.
ESXi is installed over PXE to allow for fully automated installations.
That’s it – a fully functional VMware lab. Quiet and with reasonably high performance. Also, RGB LEDs add at least 20% extra performance – a bit like red paint on a sports car 😉