Tutorial for deploying and configuring VMware HCX both on-premises and in VMware Cloud on AWS, with service mesh creation and L2 extension

Deploying HCX (VMware Hybrid Cloud Extension) is considered complex and difficult by many. It doesn’t help that it’s usually one of those things you only do once, so spending a lot of effort learning it rarely pays off. However, as with most things, it isn’t hard once you know how to do it. This video aims to show how to deploy HCX both in VMC (VMware Cloud on AWS) and in an on-premises DC or lab.

It covers both creating the service mesh over the internet and creating it over a private connection, such as DX (AWS Direct Connect) or a VPN.

Note that a VPN cannot be used for L2 Extension if it is terminated on the VMC SDDC itself. In this tutorial I’ll therefore use a VPN terminated on an AWS TGW, which is in turn peered with a VTGW connected to the SDDC we’re attaching to.

Video chapters

  1. Switching vCenter to private IP and deploying HCX Cloud in VMC: https://youtu.be/ho2DY-TP-SA?t=43
  2. Initial SDDC firewall configuration: https://youtu.be/ho2DY-TP-SA?t=97
  3. Switching HCX to private IP and adding HCX firewall rules: https://youtu.be/ho2DY-TP-SA?t=405
  4. Downloading and deploying HCX for the on-prem DC side: https://youtu.be/ho2DY-TP-SA?t=585
  5. Adding HCX license, linking on-prem HCX with vCenter: https://youtu.be/ho2DY-TP-SA?t=740
  6. HCX site pairing between HCX Connector and HCX Cloud: https://youtu.be/ho2DY-TP-SA?t=959
  7. Creating HCX Network and Compute profiles: https://youtu.be/ho2DY-TP-SA?t=1011
  8. Choice: Deploy service mesh over public IP or private IP: https://youtu.be/ho2DY-TP-SA?t=1374
  9. Deploy service mesh over public IP: https://youtu.be/ho2DY-TP-SA?t=1399
  10. Live migrating a VM to AWS: https://youtu.be/ho2DY-TP-SA?t=1679
  11. Deploy service mesh over private IP (DX, VPN to TGW): https://youtu.be/ho2DY-TP-SA?t=1789

Some architecture diagrams for reference

  • Connecting all over the public internet is one method
  • The best performance may be had over a dedicated DX Private VIF to the SDDC
  • Separating the management traffic over a VPN while doing the L2 Extension over the internet is a bit of a hybrid
  • For the setup used in the tutorial I use a VPN to a TGW which is peered with a VTGW

Build log for a Dactyl Manuform split ergonomic keyboard

There are many cool custom keyboard builds out there and I wanted to give one a go. This post documents the process, parts and steps.

Part list

Some parts can be purchased and some are 3D printed. I printed mine at home, but there are places to order 3D-printed parts from as well

  • 3D printed keyboard body (downloaded from Thingiverse or, for the brave, generated)
  • Arduino Pro Micro x2
  • Keycaps
  • Key switches
  • 3.5mm audio jack part x2
  • 3.5mm audio cable
  • Micro-USB to USB-A cable
  • Diodes (one per key), model 1N4148
  • Copper wire (I used 22AWG with a few different colors for easy separation)
  • XH connectors (to avoid soldering directly onto the pins of the Arduino)
  • Screws or M3 self-tapping inserts to attach the under plate
  • Model paint

Tools used

  • Soldering iron
  • Solder
  • Flux (really important for clean soldering joints)
  • Mechanical helping arms
  • Razor knife
  • Wire stripper
  • Black marker
  • XH connector crimping tool (I use an IWISS SN-2549)
  • 3D printer (if you want to print yourself)
  • Breadboard (for testing and as a jig for soldering the Arduino pins)

3D printing the keyboard body

The STL files for 3D printing the body and covers underneath can be found on Thingiverse here: https://www.thingiverse.com/thing:2666676

I printed in white PLA using the 0.2mm quality preset on a Prusa i3 MK3s

Trying out the fit of a few silver Cherry-compatible switches after painting the body purple using Tamiya TS-24 model paint

Soldering the diodes

For the wiring I used the diagram by Nick Green shown here.

For each row I use the diodes’ own wires as connectors between the keys. Solder the brown side of the diode to the key and use the wire on the black side to hook up to the next key.

Soldering the vertical connectors

To connect the keys on the vertical side I use 22AWG copper wire in different colors to keep them easier to tell apart. 24AWG might have been better but this is what I had available at home.

I start by laying out the wire over the keys and then use a permanent marker to mark where the insulation should be removed

Then I use a wire stripper to remove the insulation where the wire is marked. That way the same wire can connect to multiple keys without having to be cut. The exposed part of the wire can also be pushed down over the key pin to hold it in place while it is soldered

Soldering largely done! Having some helping “hands” is highly recommended

Soldering pins to the Arduino Pro Micro

To make soldering of the pins easier I simply push them into a breadboard for support

Attaching the wires to the Arduino

To avoid the hassle of soldering each Arduino pin to the keyboard wires, and also to make it easy to replace the Arduino / wires if required, I use a crimping tool and some XH connectors.

Once the wires are attached to the XH connectors they can easily be connected to the pins of the Arduino. Some velcro keeps everything nice and tidy.

Adding the 3.5mm audio jacks

I solder VCC and GND wires to the black 3.5mm audio module and attach them to the corresponding pins on the Arduino using another XH connector. The data pin attaches to D3.

The two halves can now be connected using the 3.5mm audio cable (gray in the picture)

Programming the keyboard

QMK is used for the programming. The getting started guide can be found here: https://docs.qmk.fm/#/newbs_getting_started

The QMK firmware can be cloned from GitHub here: https://github.com/qmk/qmk_firmware

Keyboard layout: This is a handwired Dactyl Manuform 5x6, so the files for modifying the key layout and function can be found here: https://github.com/qmk/qmk_firmware/tree/master/keyboards/handwired/dactyl_manuform/5x6

Please adjust to the model of keyboard you are building if different from this.
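
If you want to start from a copy of the default layout, the QMK CLI can scaffold a keymap directory for you. A minimal sketch, assuming the QMK CLI is installed and using the jwr keymap name from the flashing example further down:

qmk setup
qmk new-keymap -kb handwired/dactyl_manuform/5x6 -km jwr
qmk compile -kb handwired/dactyl_manuform/5x6 -km jwr

Then edit keymap.c in the generated keyboards/handwired/dactyl_manuform/5x6/keymaps/jwr/ directory to taste.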

After having created a custom layout (or if you just use one of the pre-existing ones), connect the keyboard to the computer over the micro-USB to USB-A cable and program it with:

qmk flash -kb <path-to-your-kbd-type> -km <your-kbd-layout>

For example

qmk flash -kb handwired/dactyl_manuform/5x6 -km jwr

QMK will compile and then flash your Arduino. When prompted, reset the Arduino by shorting the RST and GND pins.

After programming the left side of the keyboard, just attach the other side and repeat the process.

LED lighting: Adding background lighting is likely the next thing I’ll do. There is good documentation for this here: https://github.com/samhocevar-forks/qmk-firmware/blob/master/docs/feature_rgblight.md
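
As a teaser, enabling it is mostly a matter of switching the feature on in the keymap’s rules.mk and defining the LED pin and count in its config.h. A sketch, assuming the jwr keymap from earlier (check the linked docs for the exact defines your QMK version expects):

echo 'RGBLIGHT_ENABLE = yes' >> keyboards/handwired/dactyl_manuform/5x6/keymaps/jwr/rules.mk
# then set RGB_DI_PIN and RGBLED_NUM in the keymap's config.h per the documentation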

Conclusion

All done! Hope that was helpful and will assist with your own build!

Mikrotik VPN to AWS VPC

Quick (?) steps for connecting a Mikrotik router in an on-premises lab or DC to an AWS VPC using a VPN. All commands are run via the AWS CLI and the Mikrotik CLI.

Note: The values for tunnel IP addresses and secrets etc. can be found in your VPN configuration file (downloaded later). Please don’t use the ones in this guide or an IT fairy will jump to her death from a VAX system in some remote DC. The values used here are already invalid as the resources have been deleted by the time of writing. Do think of the fairies though.

Architecture diagram

In this case the Mikrotik is not directly attached to the internet; it goes via an ISP router. If your setup is the same, please configure port forwarding for ESP, UDP port 500 and UDP port 4500 from the ISP router’s public interface to the Mikrotik router as per the diagram.

If the Mikrotik is directly attached to the internet please open the firewall ports accordingly for ESP and UDP 500 / 4500.
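
For reference, a minimal sketch of what those input rules could look like on the Mikrotik (an assumption on my part; place them above any drop rules in the input chain):

[admin@MikroTik] > ip firewall filter add chain=input protocol=ipsec-esp action=accept comment="AWS VPN ESP"
[admin@MikroTik] > ip firewall filter add chain=input protocol=udp dst-port=500,4500 action=accept comment="AWS VPN IKE/NAT-T"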

AWS-side configuration

Create the VGW (Virtual Private Gateway, called vpn-gateway on the CLI). I used 65011 here for the AWS-side ASN but feel free to use something different as long as it is supported

jonas@frantic-aerobics:~$ aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn 65011 | jq
{
  "VpnGateway": {
    "State": "available",
    "Type": "ipsec.1",
    "VpcAttachments": [],
    "VpnGatewayId": "<your-vgw-id>",
    "AmazonSideAsn": 65011
  }
}
jonas@frantic-aerobics:~$

Verify the ID of the AWS VPC you want to connect to

jonas@frantic-aerobics:~$ aws ec2 describe-vpcs | jq
{
  "Vpcs": [
    {
      "CidrBlock": "172.31.0.0/16",
      "DhcpOptionsId": "dopt-d9bcfeb0",
      "State": "available",
      "VpcId": "<your-vpc-id>",
      "OwnerId": "111222333444555",
      "InstanceTenancy": "default",
      "CidrBlockAssociationSet": [
        {
          "AssociationId": "vpc-cidr-assoc-fdf9af94",
          "CidrBlock": "172.31.0.0/16",
          "CidrBlockState": {
            "State": "associated"
          }
        }
      ],
      "IsDefault": true
    }
  ]
}
jonas@frantic-aerobics:~$

Attach VGW to VPC

jonas@frantic-aerobics:~$ aws ec2 attach-vpn-gateway --vpn-gateway-id <your-vgw-id> --vpc-id <your-vpc-id> | jq
{
  "VpcAttachment": {
    "State": "attaching",
    "VpcId": "<your-vpc-id>"
  }
}

Verify that attachment is successful

jonas@frantic-aerobics:~$ aws ec2 describe-vpn-gateways --vpn-gateway-id <your-vgw-id> | jq
{
  "VpnGateways": [
    {
      "State": "available",
      "Type": "ipsec.1",
      "VpcAttachments": [
        {
          "State": "attached",
          "VpcId": "<your-vpc-id>"
        }
      ],
      "VpnGatewayId": "<your-vgw-id>",
      "AmazonSideAsn": 65011,
      "Tags": []
    }
  ]
}
jonas@frantic-aerobics:~$

Create the CGW (Customer Gateway; this basically registers your public IP with AWS). I used 65010 here for the on-prem ASN but feel free to use something different as long as it is supported

jonas@frantic-aerobics:~$ curl icanhazip.com
<your-onprem-public-ip>
jonas@frantic-aerobics:~$
jonas@frantic-aerobics:~$ aws ec2 create-customer-gateway --type ipsec.1 --public-ip <your-onprem-public-ip> --bgp-asn 65010 | jq
{
  "CustomerGateway": {
    "BgpAsn": "65010",
    "CustomerGatewayId": "<your-cgw-id>",
    "IpAddress": "<your-onprem-public-ip>",
    "State": "available",
    "Type": "ipsec.1",
    "Tags": []
  }
}
jonas@frantic-aerobics:~$

Create the VPN connection

jonas@frantic-aerobics:~$ aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id <your-cgw-id> --vpn-gateway-id <your-vgw-id>
{
    "VpnConnection": {
        "CustomerGatewayConfiguration": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"<your-vpn-connection-id>\">\n  <cus
..... <shortened for brevity>
                    "OutsideIpAddress": "15.152.99.137",
                    "TunnelInsideCidr": "169.254.19.152/30",
                    "PreSharedKey": "<tunnel-1-secret-or-key>"
                }
            ]
        },
        "Routes": [],
        "Tags": []
    }
}
jonas@frantic-aerobics:~$

Download the router configuration from the AWS console. Navigate to VPC, select Site-to-Site VPN Connections in the left-hand menu, pick the connection we just created and download the configuration as a text file
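
Alternatively, the same configuration text can be pulled straight from the AWS CLI (the --query expression extracts the XML blob returned earlier by create-vpn-connection):

jonas@frantic-aerobics:~$ aws ec2 describe-vpn-connections --vpn-connection-ids <your-vpn-connection-id> --query 'VpnConnections[0].CustomerGatewayConfiguration' --output text > vpn-config.xml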

That’s it. The AWS side is done for now. We’ll need to add return routes from the VPC to the on-prem networks later but for now we can continue on to the Mikrotik configuration

Mikrotik configuration

Open the downloaded router configuration text file and SSH to the Mikrotik router. I use RouterOS 6.49.6 for this guide (the latest at the time of writing). An AWS VPN uses two tunnels; we have to configure both but will disable one of them later, as Mikrotik doesn’t support dual active tunnels to AWS.

Create the IP addresses for the VPN tunnels. Search from the top of the file for “Customer gateway Inside Address”. The first 169.254.x.x IP is for Tunnel 0; a second IP is listed further down for Tunnel 1. We use a /30 subnet mask for the tunnel IPs.

Use your router outside interface. Mine is “sfp-sfpplus1” for this example

[admin@MikroTik] > ip address add address=169.254.88.206/30 interface=sfp-sfpplus1
[admin@MikroTik] > ip address add address=169.254.19.154/30 interface=sfp-sfpplus1
[admin@MikroTik] >
[admin@MikroTik] > ip address print
Flags: X - disabled, I - invalid, D - dynamic
 #   ADDRESS            NETWORK         INTERFACE
 0   ;;; defconf
     192.168.2.254/24   192.168.2.0     bridge
 1   10.42.0.254/24     10.42.0.0       vl420
 2   10.70.1.254/24     10.70.1.0       vl701
 3   10.70.2.254/24     10.70.2.0       vl702
 4   10.80.0.254/24     10.80.0.0       vl800
 5   10.70.3.254/24     10.70.3.0       vl703
 6 D 192.168.0.3/24     192.168.0.0     sfp-sfpplus1
 7   169.254.88.206/30  169.254.88.204  sfp-sfpplus1
 8   169.254.19.154/30  169.254.19.152  sfp-sfpplus1
[admin@MikroTik] >

Add the IPsec peers

[admin@MikroTik] > ip ipsec peer add address=15.152.91.202 local-address=192.168.0.3 name=AWS-VPN-Peer-0
[admin@MikroTik] > ip ipsec peer add address=15.152.99.137 local-address=192.168.0.3 name=AWS-VPN-Peer-1

Add the IPsec identities (secrets for the two tunnels)

[admin@MikroTik] > ip ipsec identity add peer=AWS-VPN-Peer-0 secret=<tunnel-0-secret-or-key>
[admin@MikroTik] > ip ipsec identity add peer=AWS-VPN-Peer-1 secret=<tunnel-1-secret-or-key>

Update the default IPsec profile and proposal

[admin@MikroTik] > ip ipsec profile set [ find default=yes ] dh-group=modp1024 dpd-interval=10s dpd-maximum-failures=3 enc-algorithm=aes-128 lifetime=8h
[admin@MikroTik] >
[admin@MikroTik] > ip ipsec proposal set [ find default=yes ] enc-algorithm=aes-128 lifetime=1h
[admin@MikroTik] >

Update the BGP instance settings

[admin@MikroTik] > routing bgp instance set default as=65010 redistribute-connected=yes redistribute-static=yes router-id=<your-onprem-public-ip>

Add the VPN tunnel BGP Peers (one will be disabled later)

[admin@MikroTik] > routing bgp peer add hold-time=30s keepalive-time=10s name=BGP-AWS-VPN-Peer-0 remote-address=169.254.88.205 remote-as=65011
[admin@MikroTik] > routing bgp peer add hold-time=30s keepalive-time=10s name=BGP-AWS-VPN-Peer-1 remote-address=169.254.19.153 remote-as=65011
[admin@MikroTik] >

Add any networks you wish to advertise to the VPC over the VPN

[admin@MikroTik] > routing bgp network add network=192.168.2.0/24
[admin@MikroTik] > routing bgp network add network=10.70.1.0/24
[admin@MikroTik] > routing bgp network add network=10.70.2.0/24
[admin@MikroTik] > routing bgp network add network=10.70.3.0/24
[admin@MikroTik] >

Set the firewall rules. One for the VPN tunnel CIDR range and one for the VPC CIDR (172.31.0.0/16 in this example)

[admin@MikroTik] > ip firewall nat add action=accept chain=srcnat dst-address=169.254.0.0/16
[admin@MikroTik] > ip firewall nat add action=accept chain=srcnat dst-address=172.31.0.0/16

View the NAT rules

[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
 0    chain=srcnat action=masquerade out-interface-list=WAN

 1    chain=srcnat action=accept dst-address=169.254.0.0/16

 2    chain=srcnat action=accept dst-address=172.31.0.0/16
[admin@MikroTik] >

This won’t do. The WAN masquerade rule needs to come last. Change the order using the “move” command

[admin@MikroTik] > ip firewall nat move 1 0
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
 0    chain=srcnat action=accept dst-address=169.254.0.0/16

 1    chain=srcnat action=masquerade out-interface-list=WAN

 2    chain=srcnat action=accept dst-address=172.31.0.0/16
[admin@MikroTik] > ip firewall nat move 2 1
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
 0    chain=srcnat action=accept dst-address=169.254.0.0/16

 1    chain=srcnat action=accept dst-address=172.31.0.0/16

 2    chain=srcnat action=masquerade out-interface-list=WAN
[admin@MikroTik] >

Create IPsec policies for the two VPN tunnels

[admin@MikroTik] > ip ipsec policy add dst-address=169.254.88.205 src-address=169.254.88.206 proposal=default peer=AWS-VPN-Peer-0 tunnel=yes
[admin@MikroTik] > ip ipsec policy add dst-address=169.254.19.153 src-address=169.254.19.154 proposal=default peer=AWS-VPN-Peer-1 tunnel=yes
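
Before checking the AWS side, the Mikrotik itself can confirm whether the IPsec peers and security associations came up (a quick sanity check; output will vary with your setup):

[admin@MikroTik] > ip ipsec active-peers print
[admin@MikroTik] > ip ipsec installed-sa print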

Now the tunnel status should have changed to up. Verify from the AWS CLI

jonas@frantic-aerobics:~$ aws ec2 describe-vpn-connections | jq
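
To cut the output down to just the tunnel states, a jq filter over the VgwTelemetry field works well:

jonas@frantic-aerobics:~$ aws ec2 describe-vpn-connections | jq '.VpnConnections[].VgwTelemetry[] | {OutsideIpAddress, Status}'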

Disable one of the tunnels

[admin@MikroTik] > routing bgp peer print
Flags: X - disabled, E - established
 #  INSTANCE     REMOTE-ADDRESS     REMOTE-AS
 0 E default     169.254.88.205     65011
 1 E default     169.254.19.153     65011
[admin@MikroTik] >
[admin@MikroTik] > routing bgp peer disable numbers=1
[admin@MikroTik] >
[admin@MikroTik] > routing bgp peer print
Flags: X - disabled, E - established
 #  INSTANCE     REMOTE-ADDRESS     REMOTE-AS
 0 E default     169.254.88.205     65011
 1 X default     169.254.19.153     65011
[admin@MikroTik] >

Add the final IPsec policy for the VPC network CIDR. Be sure to pick the tunnel peer (0 or 1) that is still up.

[admin@MikroTik] > ip ipsec policy add dst-address=172.31.0.0/16 src-address=0.0.0.0/0 proposal=default peer=AWS-VPN-Peer-0 tunnel=yes
[admin@MikroTik] >

That’s it. Good job. The Mikrotik is now fully configured. All that is left is to add a return route to the on-premises networks from the VPC

Access the route table for your VPC subnet and add return routes pointing to your VGW, as shown below
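
On the CLI this can be done either by adding static routes or, more conveniently, by enabling route propagation from the VGW. A sketch (the route table ID is a placeholder):

jonas@frantic-aerobics:~$ aws ec2 describe-route-tables --filters Name=vpc-id,Values=<your-vpc-id> | jq '.RouteTables[].RouteTableId'
jonas@frantic-aerobics:~$ aws ec2 enable-vgw-route-propagation --route-table-id <your-rtb-id> --gateway-id <your-vgw-id>

With propagation enabled, the on-prem networks advertised over BGP show up in the route table automatically.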

Configuration complete. Time to test with a ping (be sure the security groups for your EC2 instances have the correct ports open, of course)
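
For example, to allow ICMP from one of the advertised on-prem networks (the security group ID is a placeholder; adjust the CIDR to your own network):

jonas@frantic-aerobics:~$ aws ec2 authorize-security-group-ingress --group-id <your-sg-id> --protocol icmp --port -1 --cidr 192.168.2.0/24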

All works perfectly fine. Enjoy your new VPN!

Migrate VMware VMs from an on-prem DC to VMware Cloud on AWS (VMC) using Veeam Backup and Replication

When migrating from an on-premises DC to VMware Cloud on AWS it is usually recommended to use Hybrid Cloud Extension (HCX) from VMware. However, in some cases the IT team managing the on-prem DC is already using Veeam for backup and wants to use that solution for the migration as well.

They may also prefer Veeam over HCX, as HCX often requires professional services assistance for setup and migration planning. In addition, since HCX is primarily a migration tool, the customer is unlikely to have experience setting it up, and while it is an excellent tool there is a learning curve to get started.

Migrating with Veeam vs. Migrating with HCX

Veeam Backup & Replication | VMware Hybrid Cloud Extension (HCX)
Licensed (non-free) solution | Free with VMware Cloud on AWS
Arguably easy to set up and configure | Arguably challenging to set up and configure
Can do offline migrations of VMs, single or in bulk | Can do online migrations (no downtime), offline migrations, bulk migrations and online migrations in bulk (RAV), etc.
Cannot do L2 extension | Can do L2 extension of VLANs if they are connected to a vDS
Can be used for backup of VMs after they have been migrated | Is primarily used for migration; does not have backup functionality
Support for migrating from older on-prem vSphere environments | At time of writing, full support for on-prem vSphere 6.5 or newer; limited support for vSphere 6.0 until March 12th, 2023

What we are building

This guide covers installing and configuring a single Veeam Backup & Replication installation in the on-prem VMware environment and linking it to both the on-prem vCenter and the one in VMware Cloud on AWS. Finally we do an offline migration of a VM to the cloud to prove that it works.

Prerequisites

The guide assumes the following is already set up and available

  • On-premises vSphere environment with admin access (7.0 used in this example)
  • Windows Server VM to be used for Veeam install
    • Min spec here
    • Windows Server 2019 was used for this guide
    • Note: I initially used 2 vCPU, 4 GB RAM and a 60 GB HDD for my Veeam VM, but during the first migration the entire thing stalled and wouldn’t finish. After changing to 4 vCPU, 32 GB RAM and a 170 GB HDD the migration finished quickly and with no errors. I recommend assigning as many resources as is practical to the Veeam VM to facilitate and speed up the migration
  • One VMware Cloud on AWS (VMC) Software Defined Datacenter (SDDC)
  • Private IP connectivity to the VMC SDDC
    • Use Direct Connect (DX) or VPN but it must be private IP connectivity or it won’t work
    • For this setup I used a VPN to a TGW, then a peering to a VMware Transit Connect (VTGW) which had an attachment to the SDDC, but any private connectivity setup will be OK
  • A test VM to use for migration

Downloading and installing Veeam

Unless you already have a licensed copy, sign up for a trial license and then download Veeam Backup & Replication from here. Version 11.0.1.1216 was used in this guide.

In your on-premises vSphere environment, create or select a Windows Server VM to use for the Veeam installation. The VM spec used for this install is as follows:

Run the install with default settings (next, next, next, etc.)

Register the on-prem vCenter in Veeam

Navigate to “Inventory” at the bottom left, then “Virtual Infrastructure” and click “Add Server” to register the on-prem vCenter server

Listing VMs in the on-prem vSphere environment after the vCenter server has been registered in the Veeam Backup & Replication console

Switching on-prem connectivity to VMware Cloud on AWS SDDC to use private IP addresses

For this setup there is a VPN from the on-premises DC to the SDDC (via a TGW and VTGW in this case), but the SDDC FQDN is still configured to return the public IP address. Let’s verify by pinging the FQDN
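
A quick check from any on-prem machine (the FQDN below is a placeholder for your own SDDC vCenter address):

nslookup <your-sddc-vcenter-fqdn>
ping <your-sddc-vcenter-fqdn>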

Switching the SDDC to return the private IP is easy. In the VMware Cloud on AWS web console, navigate to “Settings” and flip the vCenter FQDN resolution from public to private

Ping the vCenter FQDN again to verify that private IP is returned by DNS and that we can ping it successfully over the VPN

All looks good. The private IP is returned. Time to register the VMware Cloud on AWS vCenter instance in the Veeam console

Registering the VMC vCenter instance with Veeam

Use the same method as when adding the on-premises vCenter server: navigate to “Inventory” at the bottom left, then “Virtual Infrastructure”, and click “Add Server” to register the VMware Cloud on AWS vCenter with Veeam

Note: If the SDDC vCenter had not been switched to a private IP there would be an error when listing the datastores. Subsequently, when migrating a VM, the target datastore wouldn’t be listed and the migration couldn’t be started

After adding the VMware Cloud on AWS SDDC vCenter the resource pools will be visible in the Veeam console

Now both vSphere environments are registered. Time to migrate a VM to the cloud!

Migrating a VM to VMware Cloud on AWS

Below are both a video and a series of screenshots describing the creation of the migration / replication job for the VM.

Creating some test files on the source VM to be migrated

Navigate to “Inventory” using the bottom left menu, click the on-premises vCenter server / Cluster and locate a VM to migrate in the on-premises DC VM inventory. Right-click the VM to migrate and create a replication job

When selecting the target for the replication, be sure to expand the VMware Cloud on AWS cluster and select one of the ESXi hosts. Selecting the cluster itself is not enough to list the required resources, like storage volumes

Since VMC is a managed environment we want to select the customer side of the storage, folder and resource pools

If you checked the box for network remapping, it is even possible to select a target network for the VM to be connected to on the cloud side!

Check “Run the job when I click Finish”, then move to the “Home” tab to view the “Running jobs”

The migration of the test VM finished in less than 9 minutes

In the vCenter client for VMware Cloud on AWS we can verify that the replicated VM is present

After logging in and listing the files we can verify that the VM is not only working but also has the test files present in the home directory

Thank you for reading! Hopefully this has provided an easy-to-understand summary of the steps required for a successful migration / replication of VMs to VMC using Veeam

Creating an Amazon Linux 2 VM in vSphere for use as a golden image in Terraform deployments

With CentOS being less attractive now that Red Hat has changed how it is updated, the Amazon Linux 2 distribution can be an excellent alternative.

However, when deploying Amazon Linux 2 on vSphere for the first time there are a few hoops to jump through. This video shows how to create a golden image and deploy it with Terraform in less than 15 minutes.
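
One of those hoops is that the Amazon Linux 2 VM image for VMware expects a cloud-init “seed” ISO on first boot to set the hostname, a user and an SSH key. A minimal sketch of building one on Linux (the hostname and user name are examples of my own; the ISO volume label must be cidata):

cat > meta-data <<EOF
local-hostname: amzn2-golden
EOF
cat > user-data <<EOF
#cloud-config
users:
  - default
  - name: jonas              # example user
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA...      # your public key here
EOF
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

Attach seed.iso to the VM’s CD drive before the first power-on and cloud-init will pick up the configuration.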