This is a crash course in getting the Movidius NCS2 neural compute stick up and running with a benchmark application. Even though only the benchmark app is covered, the same steps can be used to compile any of the other apps included with the OpenVINO toolkit.
For the REALLY impatient 🙂
Environment:
- Laptop / PC running Linux (I use Ubuntu 18.04 server)
- Docker
- Movidius NCS2 compute stick
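Before diving in, it's worth verifying that the host actually sees the stick. A minimal check, assuming the usbutils package is installed (03e7 should be the Intel Movidius USB vendor ID):

lsusb | grep 03e7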
Download a container pre-loaded with OpenVINO:
I created a container image on Docker Hub which already has the OpenVINO toolkit installed. Download it as follows:
docker pull jonaswerner/movidius_nc2_with_openvino:2018.5.455
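Once the pull completes, confirm the image is available locally:

docker images jonaswerner/movidius_nc2_with_openvino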
Run the container in privileged mode
Privileged mode is required because the Movidius compute stick switches from USB 2.0 to USB 3.0 and is re-enumerated by the OS once an ML model is loaded onto it. The container needs to be able to access the “new” USB 3.0 device once it appears.
sudo docker run -ti --privileged --net=host -v /dev:/dev jonaswerner/movidius_nc2_with_openvino:2018.5.455
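If you want to observe the re-enumeration described above, a rough way is to follow the kernel log from a second terminal on the host while a model loads (the exact device strings will vary):

# Watch the stick disconnect and re-appear as a new USB device
dmesg -w | grep -i usb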
Verify functionality and download sample code
From this point on, all commands are executed inside the container. The prompt will carry the hostname of the host system (“octo” in my case) since the container was started with --net=host.
[setupvars.sh] OpenVINO environment initialized
root@octo:/# cd /opt/intel/computer_vision_sdk/deployment_tools/demo/
root@octo:/opt/intel/computer_vision_sdk/deployment_tools/demo# ls -l
total 1752
-rw-r--r-- 1 root root    2933 Feb 21 04:35 README.txt
-rw-r--r-- 1 root root  310725 Feb 21 04:35 car.png
-rw-r--r-- 1 root root 1432032 Feb 21 04:35 car_1.bmp
-rwxr-xr-x 1 root root    6472 Feb 21 04:35 demo_security_barrier_camera.sh
-rwxr-xr-x 1 root root    8605 Feb 21 04:35 demo_squeezenet_download_convert_run.sh
-rw-r--r-- 1 root root   21675 Feb 21 04:35 squeezenet1.1.labels
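The “[setupvars.sh] OpenVINO environment initialized” line confirms that the OpenVINO environment variables are set. If you open a fresh shell in the container and don't see it, source the script manually:

source /opt/intel/computer_vision_sdk/bin/setupvars.sh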
root@octo:/opt/intel/computer_vision_sdk/deployment_tools/demo# ./demo_squeezenet_download_convert_run.sh -d MYRIAD
target = MYRIAD
target_precision = FP16
###################################################
Downloading the Caffe model and the prototxt
Installing dependencies
Output has been shortened for brevity. If all goes well, it will finish with the following message:
Demo completed successfully.
This verifies that the Movidius NCS2 (referred to as “MYRIAD” in the command above) is working as expected. The script will also have downloaded the sample code for multiple applications, including the benchmark_app we will build.
Download the AlexNet model
We now download the AlexNet model, which will be used when executing the benchmark_app. Then we optimize it for FP16 (16-bit floating point, for the Movidius NCS2) and for FP32 (32-bit floating point, for the CPU) so we can run benchmarks against both.
When we use the model optimizer (mo.py) to convert the model to Inference Engine format, we end up with a pair of files – an XML file describing the network topology and a BIN file containing the weights.
Enter the correct directory and execute downloader.py
cd /opt/intel/computer_vision_sdk/deployment_tools/model_downloader
./downloader.py --name alexnet
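The downloader places the Caffe files under the model_downloader tree; this is the same path the model optimizer step below reads from. A quick check (the .prototxt companion file is an assumption on my part):

ls /opt/intel/computer_vision_sdk/deployment_tools/model_downloader/classification/alexnet/caffe/
# Expect alexnet.caffemodel (weights) plus the network definition (.prototxt)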
Create directories where we can put the FP16 and FP32 files
mkdir /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet
mkdir /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP16
mkdir /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP32
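The same can be done in one line with mkdir -p, which creates parent directories as needed:

mkdir -p /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/{FP16,FP32}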
Enter the base directory and execute the model optimizer
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
./mo.py --data_type=FP16 --input_model ../model_downloader/classification/alexnet/caffe/alexnet.caffemodel -o ./alexnet/FP16/
./mo.py --data_type=FP32 --input_model ../model_downloader/classification/alexnet/caffe/alexnet.caffemodel -o ./alexnet/FP32/
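Each run should leave the XML/BIN pair mentioned earlier in its output directory:

ls ./alexnet/FP16/ ./alexnet/FP32/
# Each directory should contain alexnet.xml (topology) and alexnet.bin (weights)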
Compile the benchmark app from the sample source code
Note that in this case we're building the benchmark app, but there are many other interesting application samples included in the same directory.
cd ~/inference_engine_samples/benchmark_app/
make
After compiling, the resulting binary can be found here: ~/inference_engine_samples/intel64/Release/
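To confirm the build succeeded, you can ask the binary for its usage text; as far as I know the Inference Engine samples accept -h for this:

cd ~/inference_engine_samples/intel64/Release/
./benchmark_app -h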
Run benchmarks for MYRIAD and CPU for comparison
Note that even though we run the inferencing against the same image (“car.png”), we have to switch between the FP16 model for MYRIAD and the FP32 model for the CPU, depending on which of the two we intend to benchmark.
cd ~/inference_engine_samples/intel64/Release/
./benchmark_app -d MYRIAD -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP16/alexnet.xml
./benchmark_app -d CPU -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/FP32/alexnet.xml
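As a convenience, here is a small sketch that wraps the two commands above in a loop so both benchmarks run back to back; it assumes the directory layout created earlier:

# Run MYRIAD against the FP16 model, then CPU against the FP32 model
for pair in MYRIAD:FP16 CPU:FP32; do
  ./benchmark_app -d ${pair%%:*} \
    -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png \
    -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/alexnet/${pair##*:}/alexnet.xml
done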
That is all for this blog post, but it should have provided the information required to compile any of the other sample applications, as well as instructions for downloading and optimizing the models that some of those apps require.