Archive for Tech

Restore XenServer VM snapshots from the CLI / command line

Taking a snapshot from the command line is trivial. Restoring the snapshot is not. From the XenCenter GUI application it’s easy of course, but sometimes you need to automate things. In this case I have an environment for testing which needs to be reverted to the same state after each test.

Each VM has only a single snapshot. Matching the snapshot UUID with the UUID of the virtual machine isn’t easy, but can be done by extracting the parameter called “children” from the snapshot.

To list the snapshots with their respective VM UUIDs, I use the following (the i and j variables hold the snapshot UUID and VM UUID respectively):

[root@XenServer42 ~]# for i in `xe snapshot-list  | grep uuid | awk '{print$5}'`; do export j=$(xe snapshot-param-get uuid=$i  param-name=children); echo "VM UUID: $j - Snapshot UUID: $i"; done
VM UUID: 9211f2a2-4624-4254-543f-b6a99cce7760 - Snapshot UUID: 89ed788c-987e-75ad-9d72-84b2d06486de
VM UUID: 92bbf92c-57df-a6c3-fab2-366573ea3f29 - Snapshot UUID: 23da7e91-19d6-07d3-5fb3-818b834d6883
VM UUID: 3dd5209b-77cc-923a-d0da-4fb7aa013498 - Snapshot UUID: c91e63c6-6fd7-e49e-1a80-6215fecdda10
VM UUID: 29a28e86-dd60-0727-487c-12743b833a6b - Snapshot UUID: 4d21112b-b92d-4b1e-16c2-d1bcfb2d1a0b
VM UUID: 6bc1fd07-cfa7-6b00-dc90-dfe2f228db9c - Snapshot UUID: 44abca3c-eadc-c74f-2095-9c6198681305
VM UUID: c4348bab-cf34-217c-85be-3661a0e5cb60 - Snapshot UUID: af1859ae-675b-6a3d-f688-a0af65baba13

To restore the snapshots:
for i in `xe snapshot-list  | grep uuid | awk '{print$5}'`; do export j=$(xe snapshot-param-get uuid=$i  param-name=children); xe snapshot-revert uuid=$j snapshot-uuid=$i; done

NOTE: This works if there’s only ONE snapshot per VM and you want to restore them all. Otherwise slightly more complex scripting is needed to filter out the ones you want – see the sketch below. It’s more than enough for our test systems though.
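
If only snapshots with a specific name need restoring, the same loop can be narrowed by name-label. A minimal sketch, assuming one matching snapshot per VM and using "BASE vGPU IMAGE" as an example label (adjust to your own):

#!/bin/sh

# Restore only snapshots carrying a given name-label.
# The label below is an example/assumption - change it to match your snapshots.
LABEL="BASE vGPU IMAGE"

for i in `xe snapshot-list name-label="$LABEL" --minimal | sed 's/,/ /g'`
do
    j=$(xe snapshot-param-get uuid=$i param-name=children)
    echo "Reverting VM $j to snapshot $i"
    xe snapshot-revert uuid=$j snapshot-uuid=$i
done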

Some basic XenServer VM management from the CLI / command line

List the VMs we care about (grep na picks out the name-label lines and grep -v Xen excludes the control domain):
================================================
xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n

Take a snapshot of the VMs
================================================
for i in `xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n`; do echo "Snapshotting $i"; xe vm-snapshot new-name-label="BASE vGPU IMAGE" vm=$i; done

Start the VMs
================================================
for i in `xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n`; do echo $i; xe vm-start vm=$i; done

Stop the VMs
================================================
for i in `xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n`; do echo $i; xe vm-shutdown vm=$i; done

Shutdown – Take snapshot – Start sequence:
================================================
On one line:
for i in `xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n`; do xe vm-shutdown vm=$i; echo "Snapshotting $i"; xe vm-snapshot new-name-label="BASE vGPU IMAGE" vm=$i; xe vm-start vm=$i; done

For use in a short shell script (prettier formatting):

#!/bin/sh

for i in `xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n`
do 
    xe vm-shutdown vm=$i
    echo "Snapshotting $i"
    xe vm-snapshot new-name-label="BASE vGPU IMAGE" vm=$i
    xe vm-start vm=$i
done

Import VMs from the current directory:
================================================
for i in *.bkp; do xe vm-import filename=$i sr-uuid=`pvscan | grep Local | awk '{print $4}' | sed 's/-/ /' | awk '{print $2}'` preserve=true; done
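
The pvscan pipeline above digs the SR UUID out of the volume group name. An arguably simpler variant is to ask xe directly – a sketch, assuming the target SR still has the default name-label "Local storage" (adjust if yours differs):

# Look up the SR UUID once, then import everything into it.
SR_UUID=`xe sr-list name-label="Local storage" --minimal`
for i in *.bkp; do xe vm-import filename=$i sr-uuid=$SR_UUID preserve=true; done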

Export VMs to the current directory:
================================================
for i in `xe vm-list | grep na | grep -v Xen | awk '{print $4}' | sort -n`;do echo $i; xe vm-export filename=$i.bkp vm=$i; done

Error when installing the vSphere 6.0 appliance: The file D:\vcsa-setup.html is not in a folder shared with the host and cannot be opened by the host.

When trying to launch the vSphere 6.0 appliance installer I got the following error: “The file D:\vcsa-setup.html is not in a folder shared with the host and cannot be opened by the host.”

[Screenshot: vcsa-setup.html error]

After trying the same thing with the ISO copied locally, its contents extracted into a folder, and the image mounted to my VM via vCenter 5.5 – and getting the same error every time – I simply dragged and dropped the file onto Firefox. That was it – it worked.

Funny how these simple things can end up wasting time …

Mount encrypted QNAP disk (crypto_LUKS) on an external computer

If one attempts to mount a QNAP disk encrypted with LUKS without unlocking it first, the following error will be shown:
mount: unknown filesystem type 'crypto_LUKS'

To mount and read data from a disk encrypted with crypto_LUKS (for example from a QNAP backup), use cryptsetup as per the below:

Install cryptsetup if not already installed:
sudo apt-get install cryptsetup

Unlock the disk (in this case /dev/sdb1 – adjust based on the device you wish to unlock). Select a good name for the unlocked disk. In this case we use “cryptodisk” but any name will work:

sudo cryptsetup luksOpen /dev/sdb1 cryptodisk
Enter passphrase for /dev/sdb1: 

The disk will be listed under /dev/mapper/. In this case /dev/mapper/cryptodisk:
sudo mount /dev/mapper/cryptodisk /home/user/mount/usb/

Now data can be accessed as normal via the mount point /home/user/mount/usb/

To unmount, do the following:
sudo umount /home/user/mount/usb

Finally lock the disk:
sudo cryptsetup luksClose cryptodisk
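
For repeated use, the whole unlock/mount/unmount/lock cycle can be wrapped in a small helper script. A sketch – the device, mapper name and mount point below are the example values from above, adjust them to your setup:

#!/bin/sh

# Unlock and mount a LUKS-encrypted disk, or unmount and lock it again.
# DEVICE, NAME and MOUNTPOINT are example values - adjust to your environment.
DEVICE=/dev/sdb1
NAME=cryptodisk
MOUNTPOINT=/home/user/mount/usb

case "$1" in
    mount)
        sudo cryptsetup luksOpen $DEVICE $NAME    # prompts for the passphrase
        sudo mount /dev/mapper/$NAME $MOUNTPOINT
        ;;
    umount)
        sudo umount $MOUNTPOINT
        sudo cryptsetup luksClose $NAME
        ;;
    *)
        echo "Usage: $0 mount|umount"
        ;;
esac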

Stopping X windows / dropping to single-user mode

Installing the NVIDIA driver on Linux requires dropping to a command prompt without X windows running in the background.

Switch to root user:

su -
Password:

Stop the display manager:
/etc/init.d/gdm3 stop

After this X will stop and a console login prompt will be displayed. Log in as root (enter the root password when prompted) and then install the driver. To install the driver, do one of the following:

Option 1:
sh ./NVIDIA-Linux-x86_64-346.72.run

Option 2:

chmod 755 NVIDIA-Linux-x86_64-346.72.run
./NVIDIA-Linux-x86_64-346.72.run

When done, start X windows again with:
/etc/init.d/gdm3 start
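
On distributions that use systemd instead of SysV init scripts, the equivalent steps would look like the following (a sketch, assuming gdm3 is the display manager – substitute lightdm, sddm, etc. as appropriate):

# Stop the display manager, install the driver, then start it again:
systemctl stop gdm3
sh ./NVIDIA-Linux-x86_64-346.72.run
systemctl start gdm3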

Connect ISO file to server using RACADM

Using VNC to connect to the iDRAC of a server is a great way to avoid Oracle’s eternal Java upgrades, security flaws, etc. On the other hand, the Java viewer allows mapping of ISO files when installing the server OS. That’s not an option with VNC, so what to do? Easy: launch the VNC session and map the ISO file separately from the command line:

Check status of virtual media:

C:\Users\jonas>racadm -r 192.168.0.120 -u root -p calvin remoteimage -s
Remote File Share is Disabled
UserName
Password
ShareName

Mount the image:

C:\Users\jonas>racadm -r 192.168.0.120 -u root -p calvin remoteimage -c -u user@domain.local -p demo -l //192.168.0.121/ISO/MSDN/Win2012R2-JP-EVAL.ISO
Remote Image is now Configured

Verify that the image is connected:

C:\Users\jonas>racadm -r 192.168.0.120 -u root -p calvin remoteimage -s
Remote File Share is Enabled
UserName
Password
ShareName //192.168.0.121/ISO/MSDN/Win2012R2-JP-EVAL.ISO

Disconnect the image:

C:\Users\jonas>racadm -r 192.168.0.120 -u root -p calvin remoteimage -d
Disable Remote File Started. Please check status using -s
option to know Remote File Share is ENABLED or DISABLED.
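
If racadm is installed locally, the steps above can be wrapped in a short script so the mount can be repeated for different servers. A sketch – the addresses, credentials and share path are the example values from this post, replace them with your own:

#!/bin/sh

# Attach an ISO to an iDRAC via remote file share and show the resulting status.
# All values below are examples from this post - adjust to your environment.
IDRAC=192.168.0.120
ISO=//192.168.0.121/ISO/MSDN/Win2012R2-JP-EVAL.ISO

racadm -r $IDRAC -u root -p calvin remoteimage -c -u user@domain.local -p demo -l $ISO
racadm -r $IDRAC -u root -p calvin remoteimage -s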

Power control of VMs on ESXi using a script on the command line

Automating VM power control from the command line can be very useful, especially to simplify test scenarios where tens or hundreds of VMs are used. A couple of simple examples below.

Unfortunately the commands for powering on and off are completely different. This means we need a separate script for each task:

Powering on:
List all VMs so we can find the ones we want:

[root@vGPU-ESXi60-01:~] vim-cmd vmsvc/getallvms 
Vmid         Name                                  File                                 Guest OS          Version             Annotation           
1      vcenter60           [datastore1] vcenter60/vcenter60.vmx                   sles11_64Guest          vmx-08    VMware vCenter Server Appliance
13     VMW-vGPU-02         [datastore1] VMW-vGPU-02/VMW-vGPU-02.vmx               windows7_64Guest        vmx-11                                   
14     VMW-vGPU-01         [datastore1] VMW-vGPU-01/VMW-vGPU-01.vmx               windows7_64Guest        vmx-11                                   
15     Win2012R2-vGPU      [datastore1] Win2012R2-vGPU/Win2012R2-vGPU.vmx         windows8Server64Guest   vmx-11                                   
3      HorizonView6.1_CS   [datastore1] HorizonView6.1_CS/HorizonView6.1_CS.vmx   windows8Server64Guest   vmx-11                                   
[root@vGPU-ESXi60-01:~] 

Grep for the ones we are interested in:

[root@vGPU-ESXi60-01:~] vim-cmd vmsvc/getallvms | grep vGPU
13     VMW-vGPU-02         [datastore1] VMW-vGPU-02/VMW-vGPU-02.vmx               windows7_64Guest        vmx-11                                   
14     VMW-vGPU-01         [datastore1] VMW-vGPU-01/VMW-vGPU-01.vmx               windows7_64Guest        vmx-11                                   
15     Win2012R2-vGPU      [datastore1] Win2012R2-vGPU/Win2012R2-vGPU.vmx         windows8Server64Guest   vmx-11                                   
[root@vGPU-ESXi60-01:~] 

Print only their IDs:

[root@vGPU-ESXi60-01:~] vim-cmd vmsvc/getallvms | grep vGPU | awk '{print $1}'
13
14
15
[root@vGPU-ESXi60-01:~] 

Because we’re pedantic – sort them in the correct order – just in case:

[root@vGPU-ESXi60-01:~] vim-cmd vmsvc/getallvms | grep vGPU | awk '{print $1}' | sort -n
13
14
15
[root@vGPU-ESXi60-01:~] 

Run a loop to power them on:

[root@vGPU-ESXi60-01:~] for i in `vim-cmd vmsvc/getallvms | grep vGPU | awk '{print $1}' | sort -n`
> do
> echo "Powering on number $i"
> vim-cmd vmsvc/power.on $i
> done
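
The same loop in script form, taking the grep pattern as an argument (a sketch, assuming it is run directly on the ESXi host):

#!/bin/sh

# Power on all VMs whose name matches the given pattern, e.g. ./poweron.sh vGPU
PATTERN=${1:-vGPU}

for i in `vim-cmd vmsvc/getallvms | grep $PATTERN | awk '{print $1}' | sort -n`
do
    echo "Powering on number $i"
    vim-cmd vmsvc/power.on $i
done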

Powering off:
List all the processes:
By default esxcli lists the information on separate lines, which makes scripting close to impossible. Therefore we use the --formatter option to get the output in CSV format.

[root@vGPU-ESXi60-01:~] esxcli --formatter=csv vm process list
ConfigFile,DisplayName,ProcessID,UUID,VMXCartelID,WorldID,
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/vcenter60/vcenter60.vmx,vcenter60,0,56 4d eb 29 84 8f 47 7d-4a 9f fe 80 3e b7 3d 59,1380839,1380840,
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/VMW-vGPU-01/VMW-vGPU-01.vmx,VMW-vGPU-01,0,42 22 22 66 b1 22 29 73-c2 e6 ce b6 44 74 b4 24,1382270,1382271,
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/HorizonView6.1_CS/HorizonView6.1_CS.vmx,HorizonView6.1_CS,0,42 22 43 6b 80 77 c1 56-ac 2c 07 42 d3 74 bb e3,1382369,1382370,
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/Win2012R2-vGPU/Win2012R2-vGPU.vmx,Win2012R2-vGPU,0,42 22 c2 1b f2 b1 2c f2-d8 55 da df 9a 26 15 71,1382460,1382461,
[root@vGPU-ESXi60-01:~]

To make the output awk-friendly we replace spaces with % (for example) and commas with spaces:

[root@vGPU-ESXi60-01:~] esxcli --formatter=csv vm process list | sed 's/ /%/g' | sed 's/,/ /g'
ConfigFile DisplayName ProcessID UUID VMXCartelID WorldID 
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/vcenter60/vcenter60.vmx vcenter60 0 56%4d%eb%29%84%8f%47%7d-4a%9f%fe%80%3e%b7%3d%59 1380839 1380840 
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/VMW-vGPU-01/VMW-vGPU-01.vmx VMW-vGPU-01 0 42%22%22%66%b1%22%29%73-c2%e6%ce%b6%44%74%b4%24 1382270 1382271 
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/HorizonView6.1_CS/HorizonView6.1_CS.vmx HorizonView6.1_CS 0 42%22%43%6b%80%77%c1%56-ac%2c%07%42%d3%74%bb%e3 1382369 1382370 
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/Win2012R2-vGPU/Win2012R2-vGPU.vmx Win2012R2-vGPU 0 42%22%c2%1b%f2%b1%2c%f2-d8%55%da%df%9a%26%15%71 1382460 1382461 
[root@vGPU-ESXi60-01:~]

Grep for the VMs we want to power off:

[root@vGPU-ESXi60-01:~] esxcli --formatter=csv vm process list | sed 's/ /%/g' | sed 's/,/ /g' | grep vGPU
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/VMW-vGPU-01/VMW-vGPU-01.vmx VMW-vGPU-01 0 42%22%22%66%b1%22%29%73-c2%e6%ce%b6%44%74%b4%24 1382270 1382271 
/vmfs/volumes/55015477-1d4dd911-786f-b083feca800f/Win2012R2-vGPU/Win2012R2-vGPU.vmx Win2012R2-vGPU 0 42%22%c2%1b%f2%b1%2c%f2-d8%55%da%df%9a%26%15%71 1382460 1382461 
[root@vGPU-ESXi60-01:~]

Print only the world IDs:

[root@vGPU-ESXi60-01:~] esxcli --formatter=csv vm process list | sed 's/ /%/g' | sed 's/,/ /g' | grep vGPU | awk '{print $6}'
1382271
1382461
[root@vGPU-ESXi60-01:~]

Run it in a loop:

[root@vGPU-ESXi60-01:~] for i in `esxcli --formatter=csv vm process list | sed 's/ /%/g' | sed 's/,/ /g' | grep vGPU | awk '{print $6}'`
> do
> esxcli vm process kill --type=soft --world-id=$i
> done
[root@vGPU-ESXi60-01:~]
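
And the power-off counterpart as a script, again taking the grep pattern as an argument (a sketch, assuming it is run directly on the ESXi host):

#!/bin/sh

# Soft power-off of all VMs whose name matches the given pattern, e.g. ./poweroff.sh vGPU
PATTERN=${1:-vGPU}

for i in `esxcli --formatter=csv vm process list | sed 's/ /%/g' | sed 's/,/ /g' | grep $PATTERN | awk '{print $6}'`
do
    echo "Powering off world-id $i"
    esxcli vm process kill --type=soft --world-id=$i
done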

Windows 2012R2 – Extend disk: “There is not enough space available on the disk(s) to complete this operation.”

I needed to extend the main storage of my fileserver this morning. While VMware happily extended the storage volume for the VM when I asked it to, Windows 2012 R2 was not so helpful. Luckily this is easily fixed.

In Disk Manager (diskmgmt.msc), make sure the disk to be extended is set to “Dynamic”. If it is, simply re-scan the disks. Now it can be extended just fine. Screenshots below:

Error when extending disk:

[Screenshot: Windows-2012R2-disk-extension-01]

Rescan disks:

[Screenshot: Windows-2012R2-disk-extension-02]

Disk extended to use the extra space:

[Screenshot: Windows-2012R2-disk-extension-03]
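
The same fix also works from the command line with diskpart – a sketch, where the volume number is an assumption (check it with “list volume” first):

C:\> diskpart
DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend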

How to SSH using a public key instead of password

When accessing remote systems using SSH it can be handy to use RSA keys rather than having to enter a password every time. This is especially handy when doing automation with Ansible or similar tools. Here’s how to do it:

Generate the key pair
One public and one private key will be created. The private key is kept securely on the client system. The public key is copied to the target server. The passphrase is optional; it helps protect the key if the private key file is ever compromised. In this example we skip entering a passphrase.


jonas@nyx:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jonas/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/jonas/.ssh/id_rsa.
Your public key has been saved in /home/jonas/.ssh/id_rsa.pub.
The key fingerprint is:
68:1f:bd:d2:80:3e:ad:fa:f0:eb:c0:2f:a2:23:8d:5a jonas@nyx
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|       o .       |
|      + S .      |
|   . o o + .     |
| oE + o + o      |
|+.o .= o .       |
|=o ..=B.         |
+-----------------+
jonas@nyx:~$ 

Copy the key to the remote system
We now copy the public key over to the remote system. Note that we still need to enter the password to get the key copied – that password prompt is exactly what we’re getting rid of. Also note that we copy the .pub public key to a new name, “authorized_keys”, in the .ssh directory for your user on the remote system. For example “/home/jonas/.ssh/authorized_keys”.


jonas@nyx:~$ scp ~/.ssh/id_rsa.pub 192.168.56.102:.ssh/authorized_keys
jonas@192.168.56.102's password: 
id_rsa.pub                                                 100%  391     0.4KB/s   00:00    
jonas@nyx:~$ 
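
Note that copying straight to authorized_keys like this overwrites any keys already present on the remote system. If ssh-copy-id is available on the client it appends the key instead, which is usually the safer option:

jonas@nyx:~$ ssh-copy-id jonas@192.168.56.102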

Verify the solution
Repeat the SCP command but this time copy the public key to a random name to verify that SSH/SCP can be done without entering a password:


jonas@nyx:~$ scp ~/.ssh/id_rsa.pub 192.168.56.102:.ssh/test_file
id_rsa.pub                                                 100%  391     0.4KB/s   00:00    
jonas@nyx:~$ 

…and with Ansible
Below we finally compare pinging a host with RSA key authentication enabled against a server that only allows password login. Predictably, one succeeds and one fails.


jonas@nyx:~$ ansible -m ping all 
192.168.56.102 | success >> {
    "changed": false, 
    "ping": "pong"
}

192.168.56.101 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
jonas@nyx:~$ 
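
For reference, the example above assumes the two hosts are listed in the Ansible inventory, along these lines (a sketch – the group name is arbitrary):

# /etc/ansible/hosts (or an inventory file passed with -i)
[testhosts]
192.168.56.101
192.168.56.102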

View, Create, Delete virtual RAID volumes with RACADM on an FC630 server (Dell 13G)

SSH to the iDRAC of the machine:

jonas@nyx:~$ ssh root@192.168.0.2
root@192.168.0.2's password:

Enter RACADM:
/admin1-> racadm

Check for existing RAID volumes:

racadm>>storage get vdisks
racadm storage get vdisks
Disk.Virtual.0:RAID.Integrated.1-1

Check the IDs of the physical disks and the controller:

racadm>>storage get pdisks
racadm storage get pdisks
Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1
Disk.Bay.7:Enclosure.Internal.0-1:RAID.Integrated.1-1

racadm>>storage get controllers
racadm storage get controllers
RAID.Integrated.1-1

Create the RAID volume:
In this case RAID6, with read-ahead and write-back switched on:


racadm>>racadm storage createvd:RAID.Integrated.1-1 -rl r6 -wp wb -rp ra -name SSDVOL2 -pdkey:Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1,Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1,Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1,Disk.Bay.7:Enclosure.Internal.0-1:RAID.Integrated.1-1

racadm storage createvd:RAID.Integrated.1-1 -rl r6 -wp wb -rp ra -name SSDVOL2 -pdkey:Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1,Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1,Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1,Disk.Bay.7:Enclosure.Internal.0-1:RAID.Integrated.1-1
RAC1040 : Successfully accepted the storage configuration operation.
To apply the configuration operation, create a configuration job, and then restart the server.
To create the required commit and reboot jobs, run the jobqueue command.
For more information about the jobqueue command, enter the RACADM command "racadm help jobqueue".

Schedule the job:

racadm>>jobqueue create RAID.Integrated.1-1
racadm jobqueue create RAID.Integrated.1-1
RAC1024: Successfully scheduled a job.
Verify the job status using "racadm jobqueue view -i JID_xxxxx" command.
Commit JID = JID_222873363294
racadm>>

Execute the job by powercycling the server:
racadm>>serveraction powercycle

racadm serveraction powercycle
Server power operation successful

Verify RAID volume creation after the job has completed:
racadm>>storage get vdisks

racadm storage get vdisks
Disk.Virtual.0:RAID.Integrated.1-1
Disk.Virtual.1:RAID.Integrated.1-1
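
The post title also promises deletion. Removing a virtual disk follows the same create-job-reboot pattern as creation – a sketch based on the syntax above, so verify the exact virtual disk FQDD with “storage get vdisks” first:

racadm>>storage deletevd:Disk.Virtual.1:RAID.Integrated.1-1
racadm>>jobqueue create RAID.Integrated.1-1
racadm>>serveraction powercycle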