Compiling and running Movidius NCS2 Alexnet benchmark_app in a container

This is a crash course in getting the Movidius NCS2 neural compute stick up and running with a benchmark application. Even though only the benchmark app is covered, the same steps can be used to compile any of the other apps included with the OpenVINO toolkit.

For the REALLY impatient 🙂

  • RAW commands can be found here: link
  • RAW commands with all output can be found here: link

Environment:

  • Laptop / PC running Linux (I use Ubuntu 18.04 server)
  • Docker
  • Movidius NCS2 compute stick

Download a container pre-loaded with OpenVINO:

I created a container on dockerhub which already has the OpenVINO toolkit installed. Download it as follows:

docker pull jonaswerner/movidius_nc2_with_openvino:2018.5.455

Run the container in privileged mode

Privileged mode is required because the Movidius compute stick changes from a USB 2.0 to a USB 3.0 device and is re-enumerated by the OS once the ML model is loaded into it. The container needs to be able to access the “new” USB 3.0 device once it appears.

sudo docker run -ti --privileged --net=host -v /dev:/dev jonaswerner/movidius_nc2_with_openvino:2018.5.455
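Once inside the container, a quick sanity check (assuming lsusb is available in the image) is to confirm that the compute stick is visible; the NCS2 should show up with the Movidius vendor ID 03e7:

lsusb | grep -i -e movidius -e 03e7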

Verify functionality and download sample code

From this point on, all commands are executed inside the container. The prompt will carry the hostname of the host system (“octo” in my case) since the container shares the host’s network namespace (--net=host).

[setupvars.sh] OpenVINO environment initialized
root@octo:/# 
root@octo:/# 
root@octo:/# cd /opt/intel/computer_vision_sdk/deployment_tools/demo/
root@octo:/opt/intel/computer_vision_sdk/deployment_tools/demo# 
root@octo:/opt/intel/computer_vision_sdk/deployment_tools/demo# ls -l
total 1752
-rw-r--r-- 1 root root    2933 Feb 21 04:35 README.txt
-rw-r--r-- 1 root root  310725 Feb 21 04:35 car.png
-rw-r--r-- 1 root root 1432032 Feb 21 04:35 car_1.bmp
-rwxr-xr-x 1 root root    6472 Feb 21 04:35 demo_security_barrier_camera.sh
-rwxr-xr-x 1 root root    8605 Feb 21 04:35 demo_squeezenet_download_convert_run.sh
-rw-r--r-- 1 root root   21675 Feb 21 04:35 squeezenet1.1.labels
root@octo:/opt/intel/computer_vision_sdk/deployment_tools/demo# ./demo_squeezenet_download_convert_run.sh -d MYRIAD
target = MYRIAD
target_precision = FP16

###################################################

Downloading the Caffe model and the prototxt
Installing dependencies

The output has been shortened for brevity. If all goes well, the script will finish with the following message:

Demo completed successfully.

This verifies that the Movidius NCS2 (referred to as “MYRIAD” in the command above) is working as expected. The script will also have downloaded the sample code for multiple applications, including the benchmark_app we will build.

Download the Alexnet model

We now download the Alexnet model, which will be used when executing the benchmark_app. We then optimize it for FP16 (for the Movidius NCS2) and FP32 (for the CPU) so we can run benchmarks against both.

When we use the model optimizer (mo.py) to convert the model to the Inference Engine format we end up with a pair of files: one XML file (describing the network topology) and one BIN file (containing the weights).

Enter the correct directory and execute downloader.py

cd /opt/intel/computer_vision_sdk/deployment_tools/model_downloader
./downloader.py --name alexnet
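To verify that the download landed where the model optimizer commands below expect it, list the download directory; it should contain the Caffe model and its prototxt:

ls -l /opt/intel/computer_vision_sdk/deployment_tools/model_downloader/classification/alexnet/caffe/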

Create directories where we can put the FP16 and FP32 files

mkdir /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet
mkdir /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP16
mkdir /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP32

Enter the base directory and execute the model optimizer

cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
./mo.py --data_type=FP16 --input_model ../model_downloader/classification/alexnet/caffe/alexnet.caffemodel -o ./alexnet/FP16/
./mo.py --data_type=FP32 --input_model ../model_downloader/classification/alexnet/caffe/alexnet.caffemodel -o ./alexnet/FP32/
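As a quick check (the file names are what mo.py produces by default), each output directory should now hold the XML/BIN pair mentioned earlier, i.e. alexnet.xml and alexnet.bin:

ls -l ./alexnet/FP16/ ./alexnet/FP32/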

Compile the benchmark app from the sample source code

Note that in this case we’re compiling the benchmark app, but there are many other interesting application samples included in the same directory.

cd ~/inference_engine_samples/benchmark_app/
make

After compiling, the resulting binary can be found here: ~/inference_engine_samples/intel64/Release/
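A simple way to confirm the build succeeded is to print the application's help text, which lists the supported options (device, model, input image, etc.):

~/inference_engine_samples/intel64/Release/benchmark_app -h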

Run benchmarks for MYRIAD and CPU for comparison

Note that even though we run the inferencing against the same image (“car.png”), we have to switch between the optimized models: FP16 for MYRIAD and FP32 for the CPU, depending on which of the two we intend to benchmark.

cd ~/inference_engine_samples/intel64/Release/
./benchmark_app -d MYRIAD -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP16/alexnet.xml
./benchmark_app -d CPU -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP32/alexnet.xml

That is all for this blog post. It should, however, have provided the information required to compile any of the other sample applications, as well as instructions for downloading and optimizing the models that some of those apps require.

Example Redfish REST calls: Create RAID volume

Prerequisites:

  • PowerEdge Gen 14 server (R740, M640, etc.)
  • iDRAC 9 with firmware version: 3.21.21.21 or above

Create RAID volume (virtual disk)

Method: POST

URI: https://<idrac ip>/redfish/v1/Systems/System.Embedded.1/Storage/<CONTROLLER>/Volumes

BODY / Payload:

The following is an example of creating a RAID1 (mirror) volume on the RAID.Slot.6-1 controller using two disks (bay 2 and bay 3):

{
    "VolumeType": "Mirrored",
    "Name": "VOL02-R1",
    "Drives": [
        {
            "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.2:Enclosure.Internal.0-1:RAID.Slot.6-1"
        },
        {
            "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.3:Enclosure.Internal.0-1:RAID.Slot.6-1"
        }
    ]
}

For RAID 1 the VolumeType is “Mirrored”. For other RAID levels, use the following values:

  • RAID 0: “NonRedundant”
  • RAID 1: “Mirrored”
  • RAID 5: “StripedWithParity”
  • RAID 10: “SpannedMirrors”
  • RAID 50: “SpannedStripesWithParity”

HTTP status 202 (Accepted) will be returned if the volume creation request is successful.
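For reference, the call could be issued with curl along the following lines. This is only a sketch: the controller FQDD matches the example above, root/calvin are the iDRAC factory-default credentials (replace with your own), and raid1_payload.json is a hypothetical file containing the JSON body shown earlier:

curl -k -u root:calvin \
     -H "Content-Type: application/json" \
     -X POST \
     -d @raid1_payload.json \
     https://<idrac ip>/redfish/v1/Systems/System.Embedded.1/Storage/RAID.Slot.6-1/Volumes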

Also see the following scripts on GitHub:

  1. https://github.com/dell/iDRAC-Redfish-Scripting/blob/master/Redfish%20Python/CreateVirtualDiskREDFISH.py
  2. https://github.com/dell/iDRAC-Redfish-Scripting/blob/master/Redfish%20Python/DeleteVirtualDiskREDFISH.py



Example Redfish REST calls: Attach / Detach ISO file

Assumptions:

  • PowerEdge Gen 14 server (R740, M640, etc.)
  • iDRAC 9 with firmware version: 3.30.30.30 or above
  • The ISO file is hosted on a web server and can be reached via HTTP

Attach:

Method: POST
URI: https://<idrac_ip_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia

BODY:

{
     "Image": "http://<web_server>/<iso_file>.iso"
}
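A curl sketch of the same attach call (root/calvin are the iDRAC factory-default credentials and are placeholders only):

curl -k -u root:calvin \
     -H "Content-Type: application/json" \
     -X POST \
     -d '{"Image": "http://<web_server>/<iso_file>.iso"}' \
     https://<idrac_ip_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia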

Detach

Method: POST
URI: https://<idrac_ip_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.EjectMedia

BODY:

{}

Yes, for the detach operation the BODY of the request really is just an empty pair of curly braces: “{}”.
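With curl, the detach call looks the same, just with the empty body (credentials are again placeholders):

curl -k -u root:calvin \
     -H "Content-Type: application/json" \
     -X POST \
     -d '{}' \
     https://<idrac_ip_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.EjectMedia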

If all goes well, each REST call will return a “204 No Content” response.