Nova live migration fails with “Migration pre-check error: CPU doesn’t have compatibility.”

This week I’m hosting a hands-on OpenStack training for some clients. The ability to perform live migrations of running instances between hosts is one of the things they want to see, and I had set up the environment to support this.

Live migrations had been working fine for over a week when they suddenly started throwing errors this morning.

The error on the command line when trying to do a live migration:
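It looked roughly like this (the instance and host names here are stand-ins):

    $ nova live-migration test-instance node4
    ERROR (BadRequest): Migration pre-check error: CPU doesn't have compatibility. (HTTP 400)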

Normally this would happen if the hosts running nova-compute had different CPU types, but in this case they are all identical (Dell C6320 nodes).

I checked the CPU map in /usr/share/libvirt/cpu_map.xml, and the CPU model is listed there.
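A quick way to check, assuming Haswell-generation Xeons in these C6320 nodes (substitute your own model name):

    $ grep -i haswell /usr/share/libvirt/cpu_map.xml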

Since the CPUs are the same on all nodes, it’s clearly the lookup or comparison of that CPU type that fails. So I tried to disable the check by editing /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py. That made the error disappear, but my instances stayed put on whatever host they were originally running on. Not much of an improvement.
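For the curious: in Kilo the pre-check lives in the _compare_cpu() method of that driver, and disabling it amounts to returning before the incompatibility exception is raised. A sketch only (the exact code differs between releases), and clearly not a real fix:

    # nova/virt/libvirt/driver.py (Kilo-era sketch, not recommended)
    def _compare_cpu(self, cpu_info):
        return None  # bail out instead of raising InvalidCPUInfo
        # ... the original CPU comparison below is now dead code ...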

Finally I started modifying the /etc/nova/nova.conf files on both the controller node and the nova-compute nodes. The changes that fixed it were as follows:

Old setting:

New setting:

Old setting:

New setting:
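If you are chasing the same error, the [libvirt] options most often involved in this pre-check are the CPU mode settings. As an illustration of the kind of change involved (pick a cpu_model from cpu_map.xml that every node supports):

    [libvirt]
    # Old:
    #cpu_mode=host-model
    # New:
    cpu_mode=custom
    cpu_model=kvm64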

I also have the following settings which may or may not matter in this case:
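One example of the kind of setting I mean is the VNC listen address that the live migration guides tell you to open up:

    [DEFAULT]
    vncserver_listen=0.0.0.0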

After restarting the Nova services on both the controller and all three compute nodes, live migrations are working fine again. I’m not sure why they stopped in the first place, but at least this seems to have done the job.
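For reference, the restarts on RHEL 7 look like this (unit names as packaged by RDO; adjust to the services you actually run on each node):

    # On the controller:
    $ sudo systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor
    # On each compute node:
    $ sudo systemctl restart openstack-nova-compute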

Checking instances for each node:
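With admin credentials loaded, nova hypervisor-servers lists the instances running on each hypervisor:

    $ nova hypervisor-servers node2
    $ nova hypervisor-servers node4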

Performing the migration:
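The instance name below is a stand-in; node4 is the target host (this assumes shared storage, otherwise add --block-migrate):

    $ nova live-migration test-instance node4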

Verifying that the instance has moved from node2 to node4:
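The OS-EXT-SRV-ATTR:hypervisor_hostname field in the instance details shows its new home:

    $ nova show test-instance | grep hypervisor_hostname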

The keystone CLI is deprecated in favor of python-openstackclient.

UPDATE: It turns out that installing the new client can cause issues with Keystone. I found this out the hard way yesterday when it failed during a demo, preventing authentication from the command line. After a few hours of troubleshooting it turned out that Apache (httpd.service) and Keystone (openstack-keystone.service) were clashing. I was unable to fix this even after updating the config files of both services to separate them. Finally I guessed that the last package I had installed might be the cause. After removing python-openstackclient and rebooting the controller node, the issue was fixed.

Original post
===============
In OpenStack Kilo the deprecation message for the Keystone CLI is displayed whenever you invoke the keystone command:
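    DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.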

To move to the new python-openstackclient, simply install it. On RHEL7.1:
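    $ sudo yum install python-openstackclient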

After that it will be available as the command “openstack”. It can be invoked in interactive mode just by typing “openstack” or directly from the command line to get information. For example, to list users:
Old Keystone CLI:
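    $ keystone user-list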


New OpenStack CLI:
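    $ openstack user list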

To get output more similar to the old command’s, issue “openstack user list --long” to get the extra fields.

You may also want to update the script “openstack-status” so it uses the new client. To do so:
1. Edit /usr/bin/openstack-status (where the openstack-utils package installs it) with your favorite editor
2. Replace the old command with the new one (around line 227) like so:
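    # Old:
    keystone user-list
    # New:
    openstack user list

(The exact line and surrounding context vary with the openstack-utils version.)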

The new CLI can do a lot more, of course. For a full list of available commands, run “openstack help”, or type “help” in interactive mode.