Raspberry Pi as Amazon S3 file uploader

Putting the Raspberry Pi to work uploading files to the Amazon S3 backup vault. Much more energy efficient compared to keeping the PC running for the same job.

Amazing how many uses this little SoC has. I’m ending up with a pile of microSD cards for all its identities 🙂


Corsair Carbide Air 540 mod

The gaming PC at home had started getting a bit old and it was time to start overclocking. That way it’d be possible to squeeze a few more months out of the machine before the inevitable upgrade. Of course with a stock Intel CPU cooler it quickly overheated. It reached 98 degrees before I had a chance to power it down 🙂

So, a trip to Akihabara got me a fairly cheap Lepa AquaChanger240 but in my hurry to get a cooler I had underestimated the size of this monster. The thickness of the cooler with fans mounted is about 5.5cm – no way it would fit in the PC case, even though it’s a full tower.

As a result I found myself looking for a new case. Overclocking clearly has its consequences. Yet another trip to Akihabara resulted in the beautiful and spacious Corsair Carbide Air 540. It’s awesome to look at from both the outside and the inside, thanks to the ease with which cables can be kept hidden in the second chamber.

Since I wanted to light it up with green LEDs, it made sense to give it a paint job at the same time. The normally black grilles on top and front are now a bright green. The case is extremely easy to disassemble, which helped a lot in removing the parts for painting.

Corsair Air 540 mod - The Hulk - 01

Corsair Air 540 mod - The Hulk - 02    Corsair Air 540 mod - The Hulk - 03

The Lepa CPU cooler works fantastically well. The processor, an Intel Core i7-2600K, has been clocked from its base 3.4 GHz up to its current 4.6 GHz. Even under severe stress testing the temperature stays in the 50s.

Cloud storage for photo backups

At home we have a 4-disk QNAP box as a file server, hosting photos dating back to the 1990s. Until recently it was backed up over eSATA to external drives, but that was never a good solution. The QNAP box does offer cloud backup, but I don’t want to be dependent on somebody else’s proprietary way of copying data to the cloud. So, yesterday I finally got hacking on a Python script to back the whole thing up to a cloud provider.

After looking at Amazon S3, Google Cloud Platform and BLOB storage in Microsoft Azure (which I use frequently at work), I finally went with S3, as it has the option to automatically shift data to the ultra-low-cost Glacier service after a set time. For those who are interested, there’s a good tutorial to get started here: http://boto.cloudhackers.com/en/latest/s3_tut.html

Amazon recommends splitting files larger than 100 MB prior to upload, and Boto can be used with file splitting as well.
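As a rough sketch of how that could look with boto’s multipart-upload API (the same one the tutorial above covers): the bucket name, key name and the use of the third-party FileChunkIO helper are illustrative assumptions, not lifted from my actual script.

```python
import os

# 100 MB split threshold, per Amazon's recommendation above
CHUNK_SIZE = 100 * 1024 * 1024


def plan_parts(file_size, chunk_size=CHUNK_SIZE):
    """Split file_size bytes into (part_number, offset, length) tuples."""
    count = max(1, (file_size + chunk_size - 1) // chunk_size)
    return [(i + 1, i * chunk_size, min(chunk_size, file_size - i * chunk_size))
            for i in range(count)]


def upload_multipart(path, bucket_name, key_name):
    """Upload one file to S3 in parts via boto's multipart API.

    boto and filechunkio (the helper used in the boto tutorial) are
    imported lazily so the chunking logic above works without them.
    """
    import boto
    from filechunkio import FileChunkIO

    bucket = boto.connect_s3().get_bucket(bucket_name)
    mp = bucket.initiate_multipart_upload(key_name)
    try:
        for num, offset, length in plan_parts(os.path.getsize(path)):
            with FileChunkIO(path, 'r', offset=offset, bytes=length) as fp:
                mp.upload_part_from_file(fp, part_num=num)
        mp.complete_upload()
    except Exception:
        mp.cancel_upload()  # don't leave half-finished uploads accruing cost
        raise
```

Cancelling on failure matters with S3: abandoned multipart uploads keep their parts around (and billed) until explicitly aborted.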

Prior to uploading it was necessary to encrypt all the data to ensure it wasn’t easily accessible to any third party. Not that I expect anyone to have an interest in some family photos, but anyway. To make sure even my wife would be able to decrypt the data, I went with 7za, since it simply creates zip files encrypted with AES. Encrypting is as easy as:

7za a EncryptedFile.zip FileToEncrypt -tzip -mem=AES256 -mx9 -pSomePassword
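From the backup script’s side, the same command can be driven with subprocess; a minimal sketch, with hypothetical file names:

```python
import subprocess


def encrypt_command(src, dest, password):
    """Build the 7za invocation shown above: AES-256 zip, max compression."""
    return ['7za', 'a', dest, src, '-tzip', '-mem=AES256', '-mx9', '-p' + password]


def encrypt(src, dest, password):
    """Run 7za; raises CalledProcessError if archiving fails."""
    subprocess.check_call(encrypt_command(src, dest, password))
```

Passing the arguments as a list (rather than one shell string) avoids any quoting trouble with file names containing spaces.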

I may post the actual backup script here as well once it’s been through a few revisions, but it’s too rough for publication right now.

Nova live migration fails with “Migration pre-check error: CPU doesn’t have compatibility.”

This week I’m hosting a hands-on OpenStack training for some clients. The ability to perform Live migrations of running instances between hosts is one of the things they want to see and I had setup the environment to support this.

Live migrations had been working fine for over a week when it finally decided to throw errors this morning.

The error on the command line when trying to do a live migration:

ERROR (BadRequest): Migration pre-check error: CPU doesn't have compatibility.
internal error: Unknown CPU model Haswell-noTSX
Refer to http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult (HTTP 400) (Request-ID: req-227fd8fb-eba4-4f40-b707-bb31569ed14f)

Normally this would happen if the hosts running nova-compute had different CPU types, but in this case they are all identical (Dell C6320 nodes).

Checked the CPU map in /usr/share/libvirt/cpu_map.xml and the CPU is listed (excerpt):

    <model name='Haswell'>
      <model name='Haswell-noTSX'/>
      <feature name='hle'/>
      <feature name='rtm'/>
    </model>

Since the CPUs are the same on all nodes, it’s obviously the lookup of that CPU type that fails. So, I tried to disable the check by editing /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py. That made the error disappear, but my instances stayed put on whatever host they were originally running on. Not much better.

Finally I started modifying the /etc/nova/nova.conf files on both the controller node and the nova-compute nodes. The changes that fixed it were as follows:

Old setting:

New setting:

Old setting:

New setting:

I also have the following settings which may or may not matter in this case:


After restarting nova on both the controller and all three compute nodes, live migrations are working fine again. Not sure why they stopped in the first place, but at least this seems to have done the job.

Checking instances for each node:

[root@c6320-n1 ~(keystone_admin)]# for i in {2..4}; do nova hypervisor-servers c6320-n$i; done
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| aaac652f-65d9-49e4-aea2-603fc2db26c3 | instance-0000009c | 1             | c6320-n2            |
+--------------------------------------+-------------------+---------------+---------------------+
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+

Performing the migration:
[root@c6320-n1 ~(keystone_admin)]# nova live-migration aaac652f-65d9-49e4-aea2-603fc2db26c3 c6320-n4

Verifying that the instance has moved from node2 to node4:

[root@c6320-n1 ~(keystone_admin)]# for i in {2..4}; do nova hypervisor-servers c6320-n$i; done
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| aaac652f-65d9-49e4-aea2-603fc2db26c3 | instance-0000009c | 3             | c6320-n4            |
+--------------------------------------+-------------------+---------------+---------------------+

The keystone CLI is deprecated in favor of python-openstackclient.

UPDATE: It turns out that installing the new client can cause issues with Keystone. I found this out the hard way yesterday when it failed during a demo, preventing authentication from the command line. After a few hours of troubleshooting it turned out Apache (httpd.service) and Keystone (openstack-keystone.service) were clashing. I was unable to fix this even after updating each service’s config files to separate them. Finally I guessed that the last package I had installed might be the cause. After removing python-openstackclient and rebooting the controller node the issue was fixed.

Original post
In OpenStack Kilo a deprecation message will be displayed whenever invoking the keystone command: “DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.”

To move to the new python-openstackclient, simply install it. On RHEL7.1:
yum install -y python-openstackclient.noarch

After that it will be available as the command “openstack”. It can be invoked in interactive mode just by typing “openstack”, or run directly from the command line. For example, to list users:
Old Keystone CLI: “keystone user-list”
New OpenStack CLI: “openstack user list”

To get output more similar to the old command’s, issue “openstack user list --long” to include the extra fields.

You may also want to update the script “openstack-status” so it uses the new client. To do so:
1. Edit /usr/bin/openstack-status with your favorite editor
2. Replace the old command with the new one (around line 227) like so:

#keystone user-list
openstack user list --long
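If you’d rather script that edit than open an editor, a sed one-liner should do it; the path is the same one referenced above, and keeping a backup copy first means the substitution can be reverted:

```shell
# Keep a backup, then swap the deprecated keystone call for the new CLI
cp /usr/bin/openstack-status /usr/bin/openstack-status.bak
sed -i 's/keystone user-list/openstack user list --long/' /usr/bin/openstack-status
```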

The new CLI can do a lot more of course. For a full list of commands please refer to the below (executed with “openstack” + command):

aggregate add host      ip fixed remove               server rescue
aggregate create        ip floating add               server resize
aggregate delete        ip floating create            server resume
aggregate list          ip floating delete            server set
aggregate remove host   ip floating list              server show
aggregate set           ip floating pool list         server ssh
aggregate show          ip floating remove            server suspend
availability zone list  keypair create                server unlock
backup create           keypair delete                server unpause
backup delete           keypair list                  server unrescue
backup list             keypair show                  server unset
backup restore          limits show                   service create
backup show             module list                   service delete
catalog list            network create                service list
catalog show            network delete                service show
command list            network list                  snapshot create
complete                network set                   snapshot delete
compute agent create    network show                  snapshot list
compute agent delete    object create                 snapshot set
compute agent list      object delete                 snapshot show
compute agent set       object list                   snapshot unset
compute service list    object save                   token issue
compute service set     object show                   token revoke
console log show        project create                usage list
console url show        project delete                usage show
container create        project list                  user create
container delete        project set                   user delete
container list          project show                  user list
container save          project usage list            user role list
container show          quota set                     user set
ec2 credentials create  quota show                    user show
ec2 credentials delete  role add                      volume create
ec2 credentials list    role create                   volume delete
ec2 credentials show    role delete                   volume list
endpoint create         role list                     volume set
endpoint delete         role remove                   volume show
endpoint list           role show                     volume type create
endpoint show           security group create         volume type delete
extension list          security group delete         volume type list
flavor create           security group list           volume type set
flavor delete           security group rule create    volume type unset
flavor list             security group rule delete    volume unset
flavor set              security group rule list
flavor show             security group set
flavor unset            security group show
help                    server add security group
host list               server add volume
host show               server create
hypervisor list         server delete
hypervisor show         server image create
hypervisor stats show   server list
image create            server lock
image delete            server migrate
image list              server pause
image save              server reboot
image set               server rebuild
image show              server remove security group
ip fixed add            server remove volume

RHEL / Red Hat – Package does not match intended download.

Currently installing a few C6320 servers with RHEL7.1 to create an OpenStack demo cluster. Since all servers need almost identical setups I wrote some Expect scripts but unfortunately didn’t put the script runtime timeout high enough. This resulted in the connection to one of the servers being interrupted in the middle of a “yum update -y”.

When trying to run the update again it failed with: “[Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=rhel-7-server-rpms clean metadata” “Trying other mirror.”

Unfortunately, running the suggested “clean metadata” didn’t fix the problem. Instead, the fix turned out to be a simple “yum clean all” 🙂