As part of preparing for OpenStack Days Tokyo 2017, I built an environment to show how GPU pass-through can be used on OpenStack as a means of providing instances ready for Machine Learning and Deep Learning. This is a rundown of the process.
Introduction
Deep Learning and Machine Learning have in recent years become increasingly vital to advances in key areas such as life sciences, medicine and artificial intelligence. Traditionally it has been difficult and costly to create scalable, self-service environments that enable developers and end users alike to leverage these technologies. In this post we’ll look at the practical steps for enabling GPU-powered virtual instances on Red Hat OpenStack. These can in turn be utilized by research staff to run in-house or commercial software for Deep Learning and Machine Learning.
Benefits
Virtual instances for Deep Learning and Machine Learning become quick and easy to create and consume. GPU-powered Nova compute nodes can be added smoothly, with no impact to existing cloud infrastructure. Users can choose from multiple GPU types and virtual machine types, and the Nova scheduler will be aware of where the required GPU resources reside when creating instances.
Prerequisites
This post describes how to modify key OpenStack services on an already deployed cloud to allow for GPU pass-through and subsequent assignment to virtual instances. As such, it assumes an already functional Red Hat OpenStack overcloud is available. The environment used for the example in this document is running Red Hat OpenStack Platform 10 (Newton) on Dell EMC PowerEdge servers. The GPU-enabled servers used for this example are PowerEdge C4130s with NVIDIA M60 GPUs.
Process outline
After a Nova compute node with GPUs has been added to the cluster using Ironic bare-metal provisioning, the following steps are taken:
- Disabling the Nouveau driver on the GPU compute node
- Enabling IOMMU in the kernel boot options
- Modifying the Nova compute service to allow PCIe pass-through
- Modifying the Nova scheduler service to filter on the GPU ID
- Creating a flavor utilizing the GPU ID
Each step is described in more detail below.
Disabling the Nouveau driver on the GPU compute node
On the Undercloud, list the current Overcloud server nodes:
[stack@toksc-osp10b-dir-01 ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 8449f79f-fc17-4927-a2f3-5aefc7692154 | overcloud-cephstorage-0 | ACTIVE | - | Running | ctlplane=192.0.2.14 |
| ac063e8d-9762-4f2a-bf19-bd90de726be4 | overcloud-cephstorage-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| b7410a12-b752-455c-8146-d856f9e6c5ab | overcloud-cephstorage-2 | ACTIVE | - | Running | ctlplane=192.0.2.12 |
| 4853962d-4fd8-466d-bcdb-c62df41bd953 | overcloud-cephstorage-3 | ACTIVE | - | Running | ctlplane=192.0.2.17 |
| 6ceb66b4-3b70-4171-ba4a-e0eff1f677a9 | overcloud-compute-0 | ACTIVE | - | Running | ctlplane=192.0.2.16 |
| 00c7d048-d9dd-4279-9919-7d1c86974c46 | overcloud-compute-1 | ACTIVE | - | Running | ctlplane=192.0.2.19 |
| 2700095a-319c-4b5d-8b17-96ddadca96f9 | overcloud-compute-2 | ACTIVE | - | Running | ctlplane=192.0.2.21 |
| 0d210259-44a7-4804-b084-f2af1506305b | overcloud-compute-3 | ACTIVE | - | Running | ctlplane=192.0.2.15 |
| e469714f-ce40-4b55-921e-bcadcb2ae231 | overcloud-compute-4 | ACTIVE | - | Running | ctlplane=192.0.2.10 |
| fefd2dcd-5bf7-4ac5-a7a4-ed9f70c63155 | overcloud-compute-5 | ACTIVE | - | Running | ctlplane=192.0.2.13 |
| 085cce69-216b-4090-b825-bdcc4f5d6efa | overcloud-compute-6 | ACTIVE | - | Running | ctlplane=192.0.2.20 |
| 64065ea7-9e69-47fe-ad87-ed787f671621 | overcloud-compute-7 | ACTIVE | - | Running | ctlplane=192.0.2.18 |
| cff03230-4751-462f-a6b4-6578bd5b9602 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.22 |
| 333b84fc-142c-40cb-9b8d-1566f7a6a384 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.24 |
| 20ffdd99-330f-4164-831b-394eaa540133 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.11 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
Compute nodes 6 and 7 are equipped with NVIDIA M60 GPU cards. Node 6 will be used for this example.
From the Undercloud, SSH to the GPU compute node:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.20
Last login: Tue May 30 06:36:38 2017 from gateway
[heat-admin@overcloud-compute-6 ~]$
Verify that the NVIDIA GPU cards are present and recognized:
[heat-admin@overcloud-compute-6 ~]$ lspci -nn | grep NVIDIA
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
84:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
85:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
Use the device ID obtained in the previous command to check if the Nouveau driver is currently in use for the GPUs:
[heat-admin@overcloud-compute-6 ~]$ lspci -nnk -d 10de:13f2
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:115e]
Kernel driver in use: nouveau
Kernel modules: nouveau
Disable the Nouveau driver and enable IOMMU in the kernel boot options:
[heat-admin@overcloud-compute-6 ~]$ sudo su -
Last login: Tue May 30 06:37:02 UTC 2017 on pts/0
[root@overcloud-compute-6 ~]#
[root@overcloud-compute-6 ~]# cd /boot/grub2/
Make a backup of the grub.cfg file before modifying it:
[root@overcloud-compute-6 grub2]# cp -p grub.cfg grub.cfg.orig.`date +%Y-%m-%d_%H-%M`
[root@overcloud-compute-6 grub2]# vi grub.cfg
Modify the following line and append the Nouveau blacklist and Intel IOMMU options:
linux16 /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet
After modification:
linux16 /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet modprobe.blacklist=nouveau intel_iommu=on iommu=pt
Also modify the rescue boot option:
linux16 /boot/vmlinuz-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet
After modification:
linux16 /boot/vmlinuz-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet modprobe.blacklist=nouveau intel_iommu=on iommu=pt
Make the same modifications in “/etc/default/grub” by appending the options to the GRUB_CMDLINE_LINUX line:
[root@overcloud-compute-6 grub2]# vi /etc/default/grub
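For reference, after the edit the GRUB_CMDLINE_LINUX line in /etc/default/grub should carry the same options (a sketch based on the kernel options shown above; the pre-existing contents of the line may differ on your system):
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet modprobe.blacklist=nouveau intel_iommu=on iommu=pt"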
Re-generate the GRUB configuration files with grub2-mkconfig:
[root@overcloud-compute-6 grub2]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.2.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120
Found initrd image: /boot/initramfs-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120.img
done
Reboot the Nova compute node:
[root@overcloud-compute-6 grub2]# reboot
PolicyKit daemon disconnected from the bus.
We are no longer a registered authentication agent.
Connection to 192.0.2.20 closed by remote host.
Connection to 192.0.2.20 closed.
After the reboot is complete, SSH to the node to verify that the Nouveau module is no longer active for the GPUs:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.20
Last login: Tue May 30 07:39:42 2017 from 192.0.2.1
[heat-admin@overcloud-compute-6 ~]$
[heat-admin@overcloud-compute-6 ~]$ lspci -nnk -d 10de:13f2
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:115e]
Kernel modules: nouveau
The kernel module is still present but no longer listed as being in use. PCIe pass-through is now possible.
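To double-check that the new kernel options took effect, the running kernel's boot command line can also be inspected. The appended options should be present:
[heat-admin@overcloud-compute-6 ~]$ grep -o "modprobe.blacklist=nouveau intel_iommu=on iommu=pt" /proc/cmdline
modprobe.blacklist=nouveau intel_iommu=on iommu=pt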
Modifying the Nova compute service to allow PCIe pass-through
From the Undercloud, SSH to the compute node and become root with sudo:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.20
[heat-admin@overcloud-compute-6 ~]$ sudo su -
Last login: Tue May 30 07:40:13 UTC 2017 on pts/0
Back up the nova.conf file, then edit it:
[root@overcloud-compute-6 ~]# cd /etc/nova
[root@overcloud-compute-6 nova]# cp -p nova.conf nova.conf.orig.`date +%Y-%m-%d_%H-%M`
[root@overcloud-compute-6 nova]# vi nova.conf
Add the following two lines at the beginning of the “[DEFAULT]” section:
pci_alias = { "vendor_id":"10de", "product_id":"13f2", "device_type":"type-PCI", "name":"m60" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
Note:
The values for “vendor_id” and “product_id” can be found in the output of “lspci -nn | grep NVIDIA” as shown earlier. Note that the PCIe alias and whitelist are defined on a vendor/product basis. This means no data specific to an individual PCIe device is required, and new cards of the same type can be added and used without having to modify the configuration file.
The value for “name” is arbitrary, but since it will be used to filter on the GPU type later, a brief, descriptive name is recommended as best practice. A value of “m60” is used in this example.
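As a hypothetical illustration, supporting a second GPU type on the same node would only require an additional alias/whitelist pair with its own “name” (the “product_id” below is a placeholder, not a value from this environment):
pci_alias = { "vendor_id":"10de", "product_id":"<second-card-id>", "device_type":"type-PCI", "name":"gpu2" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "<second-card-id>" }
Both options can be specified multiple times in nova.conf, so entries for several card types can coexist.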
Restart the Nova compute service:
[root@overcloud-compute-6 nova]# systemctl restart openstack-nova-compute.service
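A quick sanity check that the service came back up cleanly:
[root@overcloud-compute-6 nova]# systemctl is-active openstack-nova-compute.service
active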
Modifying the Nova scheduler service to filter on the GPU ID
On each of the OpenStack controller nodes, perform the following steps.
From the Undercloud, SSH to the controller node and become root with sudo:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.22
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Tue May 30 07:40:13 UTC 2017 on pts/0
Create a backup and then modify the nova.conf configuration file:
[root@overcloud-controller-0 ~]# cd /etc/nova
[root@overcloud-controller-0 nova]# cp -p nova.conf nova.conf.orig.`date +%Y-%m-%d_%H-%M`
[root@overcloud-controller-0 nova]# vi nova.conf
Add the following three lines at the beginning of the “[DEFAULT]” section:
pci_alias = { "vendor_id":"10de", "product_id":"13f2", "device_type":"type-PCI", "name":"m60" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
Note: Make sure the values for “vendor_id”, “product_id” and “name” match those used in the nova.conf file on the Nova compute node.
Note: Also change “scheduler_use_baremetal_filters” from “False” to “True”.
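In other words, the “[DEFAULT]” section should end up containing this line as well:
scheduler_use_baremetal_filters = True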
Restart the nova-scheduler service:
[root@overcloud-controller-0 nova]# systemctl restart openstack-nova-scheduler.service
Creating a flavor utilizing the GPU ID
The only step remaining is to create a flavor that utilizes the GPU. For this, a flavor carrying a PCIe pass-through alias matching the “name” value in the nova.conf files will be created.
Create the base flavor, initially without the PCIe pass-through alias:
[stack@toksc-osp10b-dir-01 ~]$ openstack flavor create gpu-mid-01 --ram 4096 --disk 15 --vcpus 4
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 15 |
| id | 04447428-3944-4909-99d5-d5eaf6e83191 |
| name | gpu-mid-01 |
| os-flavor-access:is_public | True |
| properties | |
| ram | 4096 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
Check that the flavor has been created correctly:
[stack@toksc-osp10b-dir-01 ~]$ openstack flavor list
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| 04447428-3944-4909-99d5-d5eaf6e83191 | gpu-mid-01 | 4096 | 15 | 0 | 4 | True |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
Add the PCIe passthrough alias information to the flavor:
[stack@toksc-osp10b-dir-01 ~]$ openstack flavor set gpu-mid-01 --property "pci_passthrough:alias"="m60:1"
Note: The “m60:1” indicates that one (1) of the specified resource type, in this case a GPU, is requested. If more than one GPU is required for a particular flavor, simply modify the value. For example: “m60:2” for a dual-GPU flavor.
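For example, a separate dual-GPU flavor could hypothetically be set up the same way (“gpu-large-01” is a made-up flavor name, assumed to have been created beforehand like “gpu-mid-01”):
[stack@toksc-osp10b-dir-01 ~]$ openstack flavor set gpu-large-01 --property "pci_passthrough:alias"="m60:2"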
Verify that the flavor has been modified correctly:
[stack@toksc-osp10b-dir-01 ~]$ nova flavor-show gpu-mid-01
+----------------------------+--------------------------------------+
| Property | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 15 |
| extra_specs | {"pci_passthrough:alias": "m60:1"} |
| id | 04447428-3944-4909-99d5-d5eaf6e83191 |
| name | gpu-mid-01 |
| os-flavor-access:is_public | True |
| ram | 4096 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
That is all. Instances with the GPU flavor can now be created via the command line or the Horizon web interface.
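For example, from the command line (the image and network names below are placeholders for whatever is available in your cloud):
[stack@toksc-osp10b-dir-01 ~]$ openstack server create --flavor gpu-mid-01 --image <gpu-capable-image> --network <tenant-network> gpu-instance-01
The PciPassthroughFilter will then ensure the instance lands on a compute node with a free M60 GPU.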