Add repo:
sudo add-apt-repository ppa:doctormo/wacom-plus
Remove repo:
sudo add-apt-repository -r ppa:doctormo/wacom-plus
Enabling certificate-based (SSH public key) login on iDRAC makes it possible to run commands quickly, non-interactively, and across many servers at once. This post shows how to enable certificate-based login on iDRAC and how to run commands against multiple servers in sequence.
First, log in to the iDRAC as root and list the user slots to find an unused one:

jonas@hydra:~$ ssh root@192.168.1.120
root@192.168.1.120's password:
/admin1-> racadm
racadm>>get idrac.users
racadm get idrac.users
iDRAC.Users.1 [Key=iDRAC.Embedded.1#Users.1]
iDRAC.Users.2 [Key=iDRAC.Embedded.1#Users.2]
iDRAC.Users.3 [Key=iDRAC.Embedded.1#Users.3]
iDRAC.Users.4 [Key=iDRAC.Embedded.1#Users.4]
iDRAC.Users.5 [Key=iDRAC.Embedded.1#Users.5]
iDRAC.Users.6 [Key=iDRAC.Embedded.1#Users.6]
iDRAC.Users.7 [Key=iDRAC.Embedded.1#Users.7]
iDRAC.Users.8 [Key=iDRAC.Embedded.1#Users.8]
iDRAC.Users.9 [Key=iDRAC.Embedded.1#Users.9]
iDRAC.Users.10 [Key=iDRAC.Embedded.1#Users.10]
iDRAC.Users.11 [Key=iDRAC.Embedded.1#Users.11]
iDRAC.Users.12 [Key=iDRAC.Embedded.1#Users.12]
iDRAC.Users.13 [Key=iDRAC.Embedded.1#Users.13]
iDRAC.Users.14 [Key=iDRAC.Embedded.1#Users.14]
iDRAC.Users.15 [Key=iDRAC.Embedded.1#Users.15]
iDRAC.Users.16 [Key=iDRAC.Embedded.1#Users.16]
Verify that slot 10 is unused:

racadm>>get iDRAC.Users.10
racadm get iDRAC.Users.10
[Key=iDRAC.Embedded.1#Users.10]
Enable=Disabled
IpmiLanPrivilege=15
MD5v3Key=
!!Password=******** (Write-Only)
Privilege=0x0
SHA1v3Key=
SHA256Password=
SHA256PasswordSalt=
SNMPv3AuthenticationType=SHA
SNMPv3Enable=Disabled
SNMPv3PrivacyType=AES
SolEnable=Disabled
UserName=
Create the user, set its password and privileges, and enable it:

racadm>>set iDRAC.Users.10.UserName jonas
racadm set iDRAC.Users.10.UserName jonas
[Key=iDRAC.Embedded.1#Users.10]
Object value modified successfully
racadm>>set iDRAC.Users.10.Password calvin
racadm set iDRAC.Users.10.Password calvin
[Key=iDRAC.Embedded.1#Users.10]
Object value modified successfully
racadm>>set iDRAC.Users.10.Privilege 0x1ff
racadm set iDRAC.Users.10.Privilege 0x1ff
[Key=iDRAC.Embedded.1#Users.10]
Object value modified successfully
racadm>>set iDRAC.Users.10.IpmiLanPrivilege 4
racadm set iDRAC.Users.10.IpmiLanPrivilege 4
[Key=iDRAC.Embedded.1#Users.10]
Object value modified successfully
racadm>>set iDRAC.Users.10.Enable enabled
racadm set iDRAC.Users.10.Enable enabled
[Key=iDRAC.Embedded.1#Users.10]
Object value modified successfully
racadm>>exit
/admin1-> exit
CLP Session terminated
Connection to 192.168.1.120 closed.
jonas@hydra:~$
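The same setup can also be scripted from the management host, since the iDRAC accepts racadm commands directly over SSH (used below for the key upload as well). A minimal sketch, assuming the same iDRAC IP and user index; each ssh call will prompt for the root password until key login is in place:

jonas@hydra:~$ for setting in "UserName jonas" "Password calvin" "Privilege 0x1ff" "IpmiLanPrivilege 4" "Enable enabled"; do ssh root@192.168.1.120 "racadm set iDRAC.Users.10.$setting"; done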
On the management host, generate an SSH key pair (accept the defaults, or set a passphrase as desired):

jonas@hydra:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jonas/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/jonas/.ssh/id_rsa.
Your public key has been saved in /home/jonas/.ssh/id_rsa.pub.
The key fingerprint is:
43:15:av:24:2f:55:c5:5c:y5:v2:75:3e:ad:fa:f0:eb jonas@hydra
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|  o .            |
| + S .           |
|  o + o          |
| . o o + .       |
|+.o  .o .        |
|=o ..=B.         |
+-----------------+
jonas@hydra:~$
jonas@hydra:~$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAASBAASdfjsgdfnsryserhbnsfgjkdTFXNFTSDtjdRTYjsdrwsrthjsTGJsdRJGKdRTjsrtjksidHMdFgjdNsfgbCFjkdfghikdMddndRTYjdmdyikdr+EYFFTM8et+UH7uHPlC6PwWNJWn147gmN16o6JJBXzEt1MSI5Tz659lOhVO8sNomP7aV3onCS59ioED3ctdD7N4YYomVnkqHxu2SpI7B1SrXXmCi3iwY3Q3TXaYBgRc7IOG7j3P9UgNHcJ3OgFn+qcps9Dq1pXIeWDSEFwCI19T8nOjsZxLCN/DmphuwEG7J6f+q+xqhQ9t0rLwZGCmcCEi9eSnvQSjOtLwHUIJJu7RzS95PAW3qmTwem2YbtHT jonas@hydra
jonas@hydra:~$
jonas@hydra:~$ ssh jonas@192.168.1.120 "racadm sshpkauth -i 10 -k 1 -t 'ssh-rsa AAASBAASdfjsgdfnsryserhbnsfgjkdTFXNFTSDtjdRTYjsdrwsrthjsTGJsdRJGKdRTjsrtjksidHMdFgjdNsfgbCFjkdfghikdMddndRTYjdmdyikdr+EYFFTM8et+UH7uHPlC6PwWNJWn147gmN16o6JJBXzEt1MSI5Tz659lOhVO8sNomP7aV3onCS59ioED3ctdD7N4YYomVnkqHxu2SpI7B1SrXXmCi3iwY3Q3TXaYBgRc7IOG7j3P9UgNHcJ3OgFn+qcps9Dq1pXIeWDSEFwCI19T8nOjsZxLCN/DmphuwEG7J6f+q+xqhQ9t0rLwZGCmcCEi9eSnvQSjOtLwHUIJJu7RzS95PAW3qmTwem2YbtHT jonas@hydra'" jonas@192.168.1.120's password: PK SSH Authentication operation completed successfully. jonas@hydra:~$ jonas@hydra:~$
jonas@hydra:~$ ssh jonas@192.168.1.120 "racadm sshpkauth -v -i 10 -k all" --- User 10 --- Key 1 : ssh-rsa AAASBAASdfjsgdfnsryserhbnsfgjkdTFXNFTSDtjdRTYjsdrwsrthjsTGJsdRJGKdRTjsrtjksidHMdFgjdNsfgbCFjkdfghikdMddndRTYjdmdyikdr+EYFFTM8et+UH7uHPlC6PwWNJWn147gmN16o6JJBXzEt1MSI5Tz659lOhVO8sNomP7aV3onCS59ioED3ctdD7N4YYomVnkqHxu2SpI7B1SrXXmCi3iwY3Q3TXaYBgRc7IOG7j3P9UgNHcJ3OgFn+qcps9Dq1pXIeWDSEFwCI19T8nOjsZxLCN/DmphuwEG7J6f+q+xqhQ9t0rLwZGCmcCEi9eSnvQSjOtLwHUIJJu7RzS95PAW3qmTwem2YbtHT jonas@hydra Key 2 : Key 3 : Key 4 :
With key-based login in place, commands can now be run against many servers in sequence without any password prompts:

jonas@hydra:~$ for i in {131..134}; do echo -n "Server number: $i: "; ssh 192.168.1.$i "racadm serveraction powerstatus"; done
Server number: 131: Server power status: ON
Server number: 132: Server power status: ON
Server number: 133: Server power status: ON
Server number: 134: Server power status: ON
jonas@hydra:~$
jonas@hydra:~$ for i in {1..4}; do echo -n "Server number: $i: "; ssh 192.168.1.17$i "racadm storage get vdisks"; done
Server number: 1: Disk.Virtual.0:RAID.Integrated.1-1
Server number: 2: Disk.Virtual.0:RAID.Integrated.1-1
Server number: 3: Disk.Virtual.0:RAID.Integrated.1-1
Server number: 4: Disk.Virtual.0:RAID.Integrated.1-1
jonas@hydra:~$
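The same pattern extends to a small reusable script. A minimal sketch that runs an arbitrary racadm command against every iDRAC listed in a file (the script name and the idracs.txt file, one IP per line, are hypothetical):

#!/usr/bin/env bash
# racadm-all.sh - run a racadm command on every iDRAC listed in idracs.txt
# Usage: ./racadm-all.sh "serveraction powerstatus"
cmd="$1"
while read -r host; do
    echo -n "$host: "
    # BatchMode makes ssh fail instead of prompting if key login isn't set up
    ssh -o BatchMode=yes "jonas@$host" "racadm $cmd"
done < idracs.txt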
I recently made a collection of videos for people to get started with Redfish on iDRAC using either PowerShell or Python. Hopefully they’ll be helpful for those starting out with the Redfish API on Dell EMC servers (or in general).
For scripts, please refer to the Dell EMC GitHub page here:
https://github.com/dell/iDRAC-Redfish-Scripting
Redfish with Python: Getting started with the environment
Redfish with Python: Basic scripts
Redfish with Python: Modifying server settings with SCP (Server Configuration Profiles)
Redfish with PowerShell: Setting up the environment
Redfish with PowerShell: Modifying server settings
Redfish with PowerShell: Modifying server settings with SCP (Server Configuration Profiles)
Recorded at the Dell EMC Solution Center in Tokyo for Dell EMC World 2017.
As part of preparing for OpenStack Days Tokyo 2017 I built an environment to show how GPU pass-through can be used on OpenStack to provide instances ready for machine learning and deep learning. This is a rundown of the process.
Deep learning and machine learning have in recent years become increasingly important in key areas such as life sciences, medicine and artificial intelligence. Traditionally it has been difficult and costly to create scalable, self-service environments that let developers and end users alike leverage these technological advancements. In this post we’ll look at the practical steps for enabling GPU-powered virtual instances on Red Hat OpenStack. These can in turn be used by research staff to run in-house or commercial software for deep learning and machine learning.
Virtual instances for deep learning and machine learning become quick and easy to create and consume. GPU-powered Nova compute nodes can be added smoothly, with no impact on existing cloud infrastructure, and users can choose from multiple GPU and virtual machine types, with the Nova scheduler aware of where the required GPU resources reside when instances are created.
This post describes how to modify key OpenStack services on an already deployed cloud to allow for GPU pass-through and subsequent assignment to virtual instances. As such, it assumes an already functional Red Hat OpenStack overcloud. The environment used in this example runs Red Hat OSP10 (Newton) on Dell EMC PowerEdge servers; the GPU-enabled servers are PowerEdge C4130s with NVIDIA Tesla M60 GPUs.
After a Nova compute node with GPUs has been added to the cluster using Ironic bare-metal provisioning, the following steps are taken:

1. Blacklist the Nouveau driver and enable IOMMU in the kernel boot options on the GPU compute node
2. Configure PCI pass-through for the GPUs in nova.conf on the compute node
3. Add the PCI alias and the PCI passthrough scheduler filter to nova.conf on the controller nodes
4. Create a flavor that requests the GPU via a PCI passthrough alias
Each step is described in more detail below.
On the Undercloud, list the current Overcloud server nodes:
[stack@toksc-osp10b-dir-01 ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 8449f79f-fc17-4927-a2f3-5aefc7692154 | overcloud-cephstorage-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.14 |
| ac063e8d-9762-4f2a-bf19-bd90de726be4 | overcloud-cephstorage-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| b7410a12-b752-455c-8146-d856f9e6c5ab | overcloud-cephstorage-2 | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
| 4853962d-4fd8-466d-bcdb-c62df41bd953 | overcloud-cephstorage-3 | ACTIVE | -          | Running     | ctlplane=192.0.2.17 |
| 6ceb66b4-3b70-4171-ba4a-e0eff1f677a9 | overcloud-compute-0     | ACTIVE | -          | Running     | ctlplane=192.0.2.16 |
| 00c7d048-d9dd-4279-9919-7d1c86974c46 | overcloud-compute-1     | ACTIVE | -          | Running     | ctlplane=192.0.2.19 |
| 2700095a-319c-4b5d-8b17-96ddadca96f9 | overcloud-compute-2     | ACTIVE | -          | Running     | ctlplane=192.0.2.21 |
| 0d210259-44a7-4804-b084-f2af1506305b | overcloud-compute-3     | ACTIVE | -          | Running     | ctlplane=192.0.2.15 |
| e469714f-ce40-4b55-921e-bcadcb2ae231 | overcloud-compute-4     | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| fefd2dcd-5bf7-4ac5-a7a4-ed9f70c63155 | overcloud-compute-5     | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| 085cce69-216b-4090-b825-bdcc4f5d6efa | overcloud-compute-6     | ACTIVE | -          | Running     | ctlplane=192.0.2.20 |
| 64065ea7-9e69-47fe-ad87-ed787f671621 | overcloud-compute-7     | ACTIVE | -          | Running     | ctlplane=192.0.2.18 |
| cff03230-4751-462f-a6b4-6578bd5b9602 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.22 |
| 333b84fc-142c-40cb-9b8d-1566f7a6a384 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.24 |
| 20ffdd99-330f-4164-831b-394eaa540133 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
From the Undercloud, SSH to the GPU compute node:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.20
Last login: Tue May 30 06:36:38 2017 from gateway
[heat-admin@overcloud-compute-6 ~]$

Verify that the NVIDIA GPU cards are present and recognized:

[heat-admin@overcloud-compute-6 ~]$ lspci -nn | grep NVIDIA
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
84:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
85:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
Check which kernel driver is currently bound to the GPUs (by default it is Nouveau):

[heat-admin@overcloud-compute-6 ~]$ lspci -nnk -d 10de:13f2
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
	Subsystem: NVIDIA Corporation Device [10de:115e]
	Kernel driver in use: nouveau
	Kernel modules: nouveau
Disable the Nouveau driver and enable IOMMU in the kernel boot options:
Become root and make a backup of the grub.cfg file before modifying it:

[heat-admin@overcloud-compute-6 ~]$ sudo su -
Last login: Tue May 30 06:37:02 UTC 2017 on pts/0
[root@overcloud-compute-6 ~]# cd /boot/grub2/
[root@overcloud-compute-6 grub2]# cp -p grub.cfg grub.cfg.orig.`date +%Y-%m-%d_%H-%M`
[root@overcloud-compute-6 grub2]# vi grub.cfg
Append "modprobe.blacklist=nouveau intel_iommu=on iommu=pt" to each linux16 line. The default entry:

linux16 /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet

becomes:

linux16 /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet modprobe.blacklist=nouveau intel_iommu=on iommu=pt

and the rescue entry:

linux16 /boot/vmlinuz-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet

becomes:

linux16 /boot/vmlinuz-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120 root=UUID=a69bf0c7-8d41-42c5-b1f0-e64719aa7ffb ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet modprobe.blacklist=nouveau intel_iommu=on iommu=pt
Also append the same options to the GRUB_CMDLINE_LINUX line in /etc/default/grub, so the change survives future grub.cfg regeneration:

[root@overcloud-compute-6 grub2]# vi /etc/default/grub
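A minimal sketch of the resulting line, assuming otherwise default RHEL 7 options (keep whatever options already exist on the node and only append the new ones):

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet modprobe.blacklist=nouveau intel_iommu=on iommu=pt"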
Regenerate grub.cfg:

[root@overcloud-compute-6 grub2]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.2.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120
Found initrd image: /boot/initramfs-0-rescue-e1622fe8eb7d44d0a2d57ce6991b2120.img
done
[root@overcloud-compute-6 grub2]# reboot
PolicyKit daemon disconnected from the bus.
We are no longer a registered authentication agent.
Connection to 192.0.2.20 closed by remote host.
Connection to 192.0.2.20 closed.

After the reboot is complete, SSH to the node to verify that the Nouveau module is no longer active for the GPUs:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.20
Last login: Tue May 30 07:39:42 2017 from 192.0.2.1
[heat-admin@overcloud-compute-6 ~]$ lspci -nnk -d 10de:13f2
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Tesla M60] [10de:13f2] (rev a1)
	Subsystem: NVIDIA Corporation Device [10de:115e]
	Kernel modules: nouveau

Note that the "Kernel driver in use" line is gone: the Nouveau module is still installed but no longer bound to the GPUs.
Next, configure PCI pass-through in nova.conf on the GPU compute node. From the Undercloud, SSH to the compute node and become root with sudo:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.20
[heat-admin@overcloud-compute-6 ~]$ sudo su -
Last login: Tue May 30 07:40:13 UTC 2017 on pts/0
Back up nova.conf before editing it:

[root@overcloud-compute-6 ~]# cd /etc/nova
[root@overcloud-compute-6 nova]# cp -p nova.conf nova.conf.orig.`date +%Y-%m-%d_%H-%M`
[root@overcloud-compute-6 nova]# vi nova.conf
Add the following lines, using the vendor and product IDs (10de:13f2) from the lspci output above:

pci_alias = { "vendor_id":"10de", "product_id":"13f2", "device_type":"type-PCI", "name":"m60" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
The value for “name” is arbitrary, but it will be used to filter on the GPU type later, so a brief, descriptive name is best practice. A value of “m60” is used in this example.
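If the cloud offers more than one GPU model, each can be given its own alias and whitelist entry so flavors can target a specific card. A sketch adding a second, hypothetical card type (the vendor and product IDs must match the actual hardware, as reported by lspci -nn):

pci_alias = { "vendor_id":"10de", "product_id":"13f2", "device_type":"type-PCI", "name":"m60" }
pci_alias = { "vendor_id":"10de", "product_id":"15f8", "device_type":"type-PCI", "name":"p100" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "15f8" }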
Restart the Nova compute service:
[root@overcloud-compute-6 nova]# systemctl restart openstack-nova-compute.service
On each of the Nova Controller nodes, perform the following steps:
From the Undercloud, SSH to each controller node and become root with sudo:
[stack@toksc-osp10b-dir-01 ~]$ ssh heat-admin@192.0.2.22
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Tue May 30 07:40:13 UTC 2017 on pts/0
Back up nova.conf before editing it:

[root@overcloud-controller-0 ~]# cd /etc/nova
[root@overcloud-controller-0 nova]# cp -p nova.conf nova.conf.orig.`date +%Y-%m-%d_%H-%M`
[root@overcloud-controller-0 nova]# vi nova.conf
Add the PCI alias and whitelist as on the compute node, and append PciPassthroughFilter to the scheduler filters:

pci_alias = { "vendor_id":"10de", "product_id":"13f2", "device_type":"type-PCI", "name":"m60" }
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
Note: Also change “scheduler_use_baremetal_filters” from “False” to “True”.
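In nova.conf this is a single line:

scheduler_use_baremetal_filters = True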
Restart the nova-scheduler service:
[root@overcloud-controller-0 ~]# systemctl restart openstack-nova-scheduler.service
The only step remaining is to create a flavor that requests the GPU. The flavor will be given a PCI passthrough alias property matching the “name” value set in the nova.conf files.
Create the base flavor, without the PCI passthrough alias:
[stack@toksc-osp10b-dir-01 ~]$ openstack flavor create gpu-mid-01 --ram 4096 --disk 15 --vcpus 4
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 15                                   |
| id                         | 04447428-3944-4909-99d5-d5eaf6e83191 |
| name                       | gpu-mid-01                           |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 4096                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+
[stack@toksc-osp10b-dir-01 ~]$ openstack flavor list
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| ID                                   | Name       | RAM  | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| 04447428-3944-4909-99d5-d5eaf6e83191 | gpu-mid-01 | 4096 | 15   | 0         | 4     | True      |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
Add the PCI passthrough alias to the flavor; “m60:1” requests one M60 GPU per instance:

[stack@toksc-osp10b-dir-01 ~]$ openstack flavor set gpu-mid-01 --property "pci_passthrough:alias"="m60:1"
Verify that the flavor has been modified correctly:
[stack@toksc-osp10b-dir-01 ~]$ nova flavor-show gpu-mid-01
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 15                                   |
| extra_specs                | {"pci_passthrough:alias": "m60:1"}   |
| id                         | 04447428-3944-4909-99d5-d5eaf6e83191 |
| name                       | gpu-mid-01                           |
| os-flavor-access:is_public | True                                 |
| ram                        | 4096                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+
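With the flavor in place, instances can be booted as usual and the Nova scheduler will place them on a compute node with a free M60. A sketch, where the image and network names (rhel7, tenant-net) are placeholders for whatever exists in your cloud:

[stack@toksc-osp10b-dir-01 ~]$ openstack server create --flavor gpu-mid-01 --image rhel7 --network tenant-net gpu-instance-01

Once the instance is up, running lspci -nn inside the guest should list the passed-through Tesla M60 (10de:13f2).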