Nova live migration fails with “Migration pre-check error: CPU doesn’t have compatibility.”
This week I’m hosting a hands-on OpenStack training for some clients. Live migration of running instances between hosts is one of the features they want to see, and I had set up the environment to support it.
Live migrations had been working fine for over a week, until this morning, when they suddenly started throwing errors.
The error on the command line when trying to do a live migration:
ERROR (BadRequest): Migration pre-check error: CPU doesn’t have compatibility.
internal error: Unknown CPU model Haswell-noTSX
Refer to https://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult (HTTP 400) (Request-ID: req-227fd8fb-eba4-4f40-b707-bb31569ed14f)
Normally this would happen if the hosts running nova-compute had different CPU types, but in this case they are all identical (Dell C6320 nodes).
I checked the CPU map in /usr/share/libvirt/cpu_map.xml and confirmed that the CPU model is listed there.
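As a quick sanity check, something like the following confirms whether a model is defined in the map. The stand-in file below just mimics the real map’s structure so the snippet is self-contained; on an actual host you’d grep /usr/share/libvirt/cpu_map.xml directly:

```shell
# Stand-in for /usr/share/libvirt/cpu_map.xml, mimicking its structure.
cpu_map=$(mktemp)
cat > "$cpu_map" <<'EOF'
<cpus>
  <arch name='x86'>
    <model name='Haswell'/>
    <model name='Haswell-noTSX'/>
  </arch>
</cpus>
EOF

model='Haswell-noTSX'
found=no
# libvirt writes the model name as name='...', so a simple grep suffices.
grep -q "name='${model}'" "$cpu_map" && found=yes
echo "model ${model}: ${found}"

rm -f "$cpu_map"
```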
Since the CPUs are identical on all nodes, it’s evidently the lookup of that CPU model that fails. First I tried to disable the check by editing /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py. That made the error disappear, but my instances stayed put on whatever host they were originally running on. Not much of an improvement.
Finally, I started modifying the /etc/nova/nova.conf files on both the controller node and the nova-compute nodes. The changes that fixed it were as follows:
Old setting:
#cpu_mode=
New setting:
cpu_mode=custom
Old setting:
#cpu_model=
New setting:
cpu_model=kvm64
I also have the following settings which may or may not matter in this case:
virt_type=kvm
limit_cpu_features=false
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
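For reference, here is what the relevant block looks like with the settings above combined. I’m assuming they sit under the [libvirt] section of nova.conf, as they do in Nova releases of this era (on some installs they live under [DEFAULT] instead, so treat the section name as an assumption; limit_cpu_features is omitted here since its placement differs):

```
[libvirt]
virt_type=kvm
cpu_mode=custom
cpu_model=kvm64
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
```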
After restarting the Nova services on both the controller and all three compute nodes, live migrations are working fine again. I’m still not sure why they stopped in the first place, but at least this seems to have done the job.
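For completeness, the restart step sketched as commands. The service names below are assumptions for an RDO/Packstack-style install (adjust for your distribution), and the loops only echo the commands so you can review them before running anything:

```shell
# Assumed service names for an RDO/Packstack-style install; adjust as needed.
controller_services="openstack-nova-api openstack-nova-scheduler openstack-nova-conductor"
compute_services="openstack-nova-compute"

# Dry run: print the restart commands instead of executing them.
# Drop the leading 'echo' to actually restart the services.
for svc in $controller_services; do
  echo systemctl restart "$svc"
done
for svc in $compute_services; do
  echo systemctl restart "$svc"
done
```

Run the first loop on the controller and the second on each compute node.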
Checking instances for each node:
[root@c6320-n1 ~(keystone_admin)]# for i in {2..4}; do nova hypervisor-servers c6320-n$i; done
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| aaac652f-65d9-49e4-aea2-603fc2db26c3 | instance-0000009c | 1             | c6320-n2            |
+--------------------------------------+-------------------+---------------+---------------------+
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
Performing the migration:
[root@c6320-n1 ~(keystone_admin)]# nova live-migration aaac652f-65d9-49e4-aea2-603fc2db26c3 c6320-n4
Verifying that the instance has moved from node2 to node4:
[root@c6320-n1 ~(keystone_admin)]# for i in {2..4}; do nova hypervisor-servers c6320-n$i; done
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| aaac652f-65d9-49e4-aea2-603fc2db26c3 | instance-0000009c | 3             | c6320-n4            |
+--------------------------------------+-------------------+---------------+---------------------+