Capture VMware performance data using Live Optics

Gaining insights into the details of what makes up a large and complex VMware environment can be challenging. This is especially true if gathering individual VM performance data is part of the goal.

Live Optics, a tool created by Dell Technologies, is a great way to gather performance data from a vSphere environment over time. It runs on a laptop or a VM and can either take a snapshot of the environment without performance data or run for a period of 10 minutes to 7 days and report on performance as well.

The data can be streamed continuously to a Live Optics endpoint. In this case the data gathering process can be viewed live through the Live Optics web portal. Alternatively the data can be saved locally as an encrypted SIOKIT file during the capture process and then be uploaded to the Live Optics portal once data collection is complete.

Below is a 5-minute video showing the entire process for reference: account creation, collector download, and report export in Excel format.

Enable Telemetry Streaming with RACADM, Scripts and/or Redfish and Postman

This post aims to describe three methods for enabling the Telemetry Streaming feature in the iDRAC9 on Dell EMC 14G PowerEdge servers:

  • Enable using RACADM / SSH
  • Enable using provided GitHub scripts
  • Enable using Redfish and Postman

RACADM and Redfish allow reports to be enabled selectively, while the GitHub script enables ALL reports in one go. Personally I’d recommend being selective to start with, until it is clear what data is required or desired.

Note that enabling everything will result in just shy of 3 million data points per server per 24 hours.

Blog posts in this series:


Get a 30-day trial of the iDRAC9 Datacenter license here:

Enable using RACADM / SSH
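For reference, the RACADM attribute names mirror the Redfish payloads shown later in this post. A minimal sketch, run over SSH to the iDRAC (exact attribute names can vary by firmware version, so verify them on your system with "racadm get idrac.telemetry" first):

```shell
# Turn on the global Telemetry Streaming switch
racadm set idrac.telemetry.EnableTelemetry Enabled

# Enable / disable individual reports selectively
racadm set idrac.telemetrycpusensor.EnableTelemetry Enabled
racadm set idrac.telemetrypowerstatistics.EnableTelemetry Disabled

# View the current settings
racadm get idrac.telemetry
```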

Enable using GitHub script

Enable using Redfish and Postman

URI and payload for Postman


Auth: Basic (root / calvin by default)

GET for viewing current settings
PATCH for changing settings

Payload for enabling streaming telemetry:
{
  "Attributes": {
    "Telemetry.1.EnableTelemetry": "Enabled"
  }
}

Payload for enabling / disabling individual reports:
{
  "Attributes": {
    "TelemetryCPUSensor.1.EnableTelemetry": "Enabled",
    "TelemetryPowerStatistics.1.EnableTelemetry": "Disabled",
    "TelemetrySensor.1.EnableTelemetry": "Enabled"
  }
}
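The same PATCH can also be sent with curl instead of Postman. A sketch, assuming the default root / calvin credentials and the standard iDRAC9 Dell attribute endpoint (substitute your iDRAC's IP; the endpoint path may differ by firmware version, and -k skips certificate validation for the self-signed certificate):

```shell
# PATCH the iDRAC attribute registry to enable Telemetry Streaming
curl -k -u root:calvin \
  -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"Attributes": {"Telemetry.1.EnableTelemetry": "Enabled"}}' \
  https://IDRAC-IP/redfish/v1/Managers/iDRAC.Embedded.1/Oem/Dell/DellAttributes/iDRAC.Embedded.1
```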

Configuring Telemetry Streaming

This article contains the practical steps to set up and configure Telemetry Streaming. It assumes the feature has already been enabled using one of the methods described in the previous article here. In this blog post we use the following:

  • Python script to collect the data
  • InfluxDB for storing the data
  • Grafana for visualizing the data

Blog posts in this series

Overview of the architecture

For the experienced user

Those with experience running containers, installing Python modules, etc. can refer to the quick start below:

  • Capture the data from the iDRAC with this Python script: link
  • Run InfluxDB with the following settings: link
  • Create a Grafana instance and connect to InfluxDB to visualize the data

For those who prefer step-by-step instructions

To set this up, start with an Ubuntu server VM. The video below goes through all steps to get started from scratch, including installation of:

  • Python virtual environment
  • Python modules
  • Docker
  • InfluxDB
  • Grafana

Summary of all commands

The commands used below are also summarized in this text file for easy copy & paste: link

URL to get all metrics:


Setting up the environment

Update and install: 
sudo apt update
sudo apt upgrade -y
sudo apt install python3-venv python3-pip jq -y

Create a virtual environment:
python3 -m venv NAME-OF-ENV
source ./NAME-OF-ENV/bin/activate

Download the repositories from GitHub:
git clone
git clone

Install the Python modules:
cd idrac9-telemetry-streaming
pip3 install -r requirements.txt

Command for viewing the JSON data:
cat aaa | sed 's/\x27/"/g' | jq
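The sed expression swaps the Python-style single quotes (\x27) for double quotes so that jq can parse the collector's output as valid JSON. A quick illustration of just the quote conversion:

```shell
# A Python-style dict with single quotes is not valid JSON...
printf "{'Temperature': 42}" | sed 's/\x27/"/g'
# ...after sed it reads: {"Temperature": 42}
```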

Installing Docker

Installing prerequisite packages:
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y

Adding the key for Docker-CE:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Adding the repository for Docker-CE
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu eoan stable"

Installing Docker-CE
sudo apt update
sudo apt install docker-ce -y

Adding the current user to the docker group (log out and back in for the change to take effect):
sudo usermod -aG docker ${USER}

Installation and commands for InfluxDB

Download the container image:
docker pull influxdb

Run the image, create the "telemetry" DB and add credentials (root / pass, matching the client command below):
docker run \
-d \
--name influxdb \
-p 8086:8086 \
-e INFLUXDB_DB=telemetry \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
-e INFLUXDB_ADMIN_USER=root \
-e INFLUXDB_ADMIN_PASSWORD=pass \
influxdb

View data in the container using the "influx" client:
docker exec -it influxdb influx -username root -password pass

Commands for the "influx" client:
show databases
show measurements
select * from MEASUREMENT
show field keys from MEASUREMENT

Downloading and running Grafana

Download the container image:
docker pull grafana/grafana

Run the Grafana instance:
docker run -d --name=grafana -p 3000:3000 grafana/grafana
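Once Grafana is up, the InfluxDB datasource can be added through the web UI (port 3000, default login admin / admin) or scripted via Grafana's HTTP API. A sketch, assuming both containers run on the same host and using the database name and credentials from the InfluxDB section above:

```shell
# Register InfluxDB as a Grafana datasource via the HTTP API
curl -u admin:admin \
  -H "Content-Type: application/json" \
  -d '{"name": "telemetry", "type": "influxdb", "access": "proxy", "url": "http://IP-OF-HOST:8086", "database": "telemetry", "user": "root", "password": "pass"}' \
  http://localhost:3000/api/datasources
```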

Ansible with Dell PowerEdge servers

Automate everything and have more time left for coffee and ridiculously sized donuts! PowerEdge servers and Ansible automation are a match made in silicon heaven (just ask Kryten!). Included are six videos covering everything from the ground up.

Installation steps for Ansible

To be used with the first video: The installation steps for Ansible as well as the OpenManage modules for PowerEdge can be downloaded from here: link
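As a rough sketch of what those installation steps boil down to (the linked file remains the authoritative list; this assumes a recent pip and pulls the dellemc.openmanage collection from Ansible Galaxy):

```shell
# Install Ansible plus omsdk, the Python dependency for the OpenManage modules
pip3 install ansible omsdk

# Install the Dell OpenManage collection from Ansible Galaxy
ansible-galaxy collection install dellemc.openmanage
```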

EdgeX Foundry demo

A short demo of EdgeX Foundry using two Raspberry Pis: one to generate and send sensor data to EdgeX, and another playing the role of an edge device which can receive commands from EdgeX depending on sensor values.

Note: This demo uses the Delhi release since I still haven’t updated the device profile for the “smartVent” Raspberry Pi to work with Edinburgh. I’ll post something cooler once that is working too.