Upgraded targeting system: Voice control

To make it more interesting, I restricted the machine learning model to a single type of item and added voice control for switching between types. While I was at it, I added the audio from the Portal turrets, because why not?
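The voice-selected restriction can be sketched roughly like this: keep a current target class that the voice handler updates, and drop any detections that don't match it. The function and field names here are hypothetical, not the project's actual code.

```python
# Hypothetical sketch: restrict object detections to one voice-selected class.
TARGET_CLASS = "person"  # updated whenever the voice handler hears a new target


def set_target(label):
    """Called by the voice-command handler, e.g. after hearing 'target cups'."""
    global TARGET_CLASS
    TARGET_CLASS = label


def filter_detections(detections):
    """Keep only detections whose label matches the current target class."""
    return [d for d in detections if d["label"] == TARGET_CLASS]


detections = [
    {"label": "person", "confidence": 0.92},
    {"label": "cup", "confidence": 0.81},
]
set_target("cup")
print(filter_detections(detections))  # only the cup detection survives
```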

Voice controlled Docker container deployment system using AWS and a Raspberry Pi

I played around with AWS Lambda, Rekognition, Polly, DynamoDB, Lex, S3, and more to create a system for deploying Docker containers by talking to a Raspberry Pi. The containers are deployed locally on a PC running the “p4docker” service, while the other two services (p4security and p4voiceui) run on the Raspberry Pi.
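As a rough sketch of the deployment side, something like p4docker could translate a parsed voice intent into a `docker run` invocation. The intent format and function names below are assumptions for illustration, not the project's actual API.

```python
# Hypothetical sketch of turning a parsed voice intent (e.g. from Lex,
# relayed via the Pi) into a `docker run` command on the local PC.
import shlex
import subprocess


def build_run_command(intent):
    """Map an intent dict to docker CLI arguments (detached container)."""
    image = intent["image"]
    name = intent.get("name", image.replace("/", "-"))
    return ["docker", "run", "-d", "--name", name, image]


def deploy(intent, dry_run=True):
    """Run the container, or just return the command when dry-running."""
    cmd = build_run_command(intent)
    if dry_run:  # keeps the sketch runnable without a Docker daemon
        return " ".join(shlex.quote(c) for c in cmd)
    return subprocess.run(cmd, check=True)


print(deploy({"image": "nginx", "name": "web"}))
# docker run -d --name web nginx
```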

This was part of a project for an internal Pied Piper course here at Dell Tech earlier this year: https://bigdatadownunder.com/2019/10/11/innovating-ground-up-project-piper/

The code can be found here:

EdgeX Foundry demo

A short demo of EdgeX Foundry using two Raspberry Pis: one generates and sends sensor data to EdgeX, while the other plays the role of an edge device that receives commands from EdgeX based on sensor values.
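For a flavour of what the sensor Pi does, here is a minimal sketch of building an event payload for the EdgeX core-data v1 REST API (the era this demo targets). The device and reading names, and the core-data URL, are assumptions for illustration.

```python
# Hypothetical sketch: pushing a sensor reading into EdgeX core-data.
import json
import urllib.request

CORE_DATA_URL = "http://localhost:48080/api/v1/event"  # assumed default port


def make_event(device, reading_name, value):
    """Build a core-data event payload carrying a single reading."""
    return {
        "device": device,
        "readings": [{"name": reading_name, "value": str(value)}],
    }


def send_event(event, url=CORE_DATA_URL):
    """POST the event to core-data (requires a running EdgeX stack)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)


print(json.dumps(make_event("tempSensor01", "temperature", 23.5)))
```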

Note: This demo uses the Delhi release since I still haven’t updated the device profile for the “smartVent” Raspberry Pi to work with Edinburgh. I’ll post something cooler once that is working too.