Quickly generate the on-prem commands to connect a Mikrotik switch / router running RouterOS to an existing AWS S2S VPN.

Problem statement

For infrequent VPN connectivity between on-prem labs / data centers and AWS, it doesn’t make sense to keep a permanent VPN connection up 24/7. However, configuring the on-premises Mikrotik router manually each time is time-consuming and error-prone.

Functionality

This Python script connects to AWS using boto3, reads the details of the first VPN connection it can find and then generates the commands required to set up the following (a minimal sketch of the lookup is shown after the list):

  • Inside IP addresses for the VPN tunnel
  • IPsec proposal settings
  • IPsec profile settings
  • IPsec peers
  • IPsec secrets
  • BGP peers
  • BGP networks to advertise
  • Firewall settings
  • etc.
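
For reference, a minimal sketch of the lookup the script performs could look like the code below. It is illustrative only: the element names assume the generic layout of the CustomerGatewayConfiguration XML (ipsec_tunnel, tunnel_outside_address, pre_shared_key, etc.), and the interface / local-IP placeholders are not part of the actual script.

import xml.etree.ElementTree as ET

import boto3

ec2 = boto3.client("ec2")

# Read the first VPN connection in the account, as the script does
vpn = ec2.describe_vpn_connections()["VpnConnections"][0]

# The per-tunnel details (peer IPs, inside addresses, pre-shared keys) live in
# the XML customer gateway configuration returned by the API
root = ET.fromstring(vpn["CustomerGatewayConfiguration"])

for i, tunnel in enumerate(root.findall("ipsec_tunnel")):
    aws_outside = tunnel.find("vpn_gateway/tunnel_outside_address/ip_address").text
    cgw_inside = tunnel.find("customer_gateway/tunnel_inside_address/ip_address").text
    psk = tunnel.find("ike/pre_shared_key").text

    # Emit RouterOS commands in the same style as the manual steps further down
    print(f"ip address add address={cgw_inside}/30 interface=<outside-interface>")
    print(f"ip ipsec peer add address={aws_outside} local-address=<local-ip> name=AWS-VPN-Peer-{i}")
    print(f"ip ipsec identity add peer=AWS-VPN-Peer-{i} secret={psk}")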

After the commands are generated, simply copy and paste them into a Mikrotik CLI session over SSH or similar and the connection will come up in a couple of minutes.

Prerequisites

This script only handles the on-prem side of the connectivity. It assumes the following is already in place at the AWS side:

  • VPC with subnets
  • CGW (Customer Gateway)
  • VGW (Virtual Private Gateway) which is attached to the VPC
  • VPN connection configured to use the VGW and CGW

If you require information on how to set up the AWS-side components, please refer to this blog post: https://jonamiki.com/2022/05/04/mikrotik-vpn-to-aws-vpc/

Script download

Please refer to this GitHub page for the script itself:

https://github.com/jonas-werner/aws-vpn-mikrotik-config-generator/tree/main

Example of running the script

The AWS side has been configured but IPsec and BGP are both down

Running the script generates the commands required to connect the Mikrotik to AWS

Copy and paste the generated commands into the Mikrotik CLI

After a couple of minutes, IPsec is up and routes are dynamically shared over BGP

More information

For more information, including how to set up the AWS VPN configuration and a more detailed explanation of the manual steps to configure the Mikrotik router, please refer to this blog post: https://jonamiki.com/2022/05/04/mikrotik-vpn-to-aws-vpc/

Mikrotik VPN to AWS VPC

Quick (?) steps for connecting a Mikrotik router in an on-premises lab or DC to an AWS VPC using a VPN. All commands are run via the AWS CLI and the Mikrotik CLI.

Note: The values for tunnel IP addresses and secrets etc. can be found in your VPN configuration file (downloaded later). Please don’t use the ones in this guide or an IT fairy will jump to her death from a VAX system in some remote DC. The values used here are already invalid as the resources have been deleted by the time of writing. Do think of the fairies though.

Architecture diagram

In this case the Mikrotik is not directly attached to the internet. It goes via an ISP router. If your setup is the same, please configure port forwarding for ESP, UDP port 500 and UDP port 4500 from the ISP public interface to the Mikrotik router as per the diagram.

If the Mikrotik is directly attached to the internet please open the firewall ports accordingly for ESP and UDP 500 / 4500.

AWS-side configuration

Creating the VGW (Virtual Private Gateway but called vpn-gateway on the CLI). I used 65011 here for the AWS-side ASN but feel free to use something different as long as it is supported

jonas@frantic-aerobics:~$ aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn 65011 | jq
{
  "VpnGateway": {
    "State": "available",
    "Type": "ipsec.1",
    "VpcAttachments": [],
    "VpnGatewayId": "<your-vgw-id>",
    "AmazonSideAsn": 65011
  }
}
jonas@frantic-aerobics:~$

Verify the ID of the AWS VPC you want to connect to

jonas@frantic-aerobics:~$ aws ec2 describe-vpcs | jq
{
  "Vpcs": [
    {
      "CidrBlock": "172.31.0.0/16",
      "DhcpOptionsId": "dopt-d9bcfeb0",
      "State": "available",
      "VpcId": "<your-vpc-id>",
      "OwnerId": "111222333444555",
      "InstanceTenancy": "default",
      "CidrBlockAssociationSet": [
        {
          "AssociationId": "vpc-cidr-assoc-fdf9af94",
          "CidrBlock": "172.31.0.0/16",
          "CidrBlockState": {
            "State": "associated"
          }
        }
      ],
      "IsDefault": true
    }
  ]
}
jonas@frantic-aerobics:~$

Attach VGW to VPC

jonas@frantic-aerobics:~$ aws ec2 attach-vpn-gateway --vpn-gateway-id <your-vgw-id> --vpc-id <your-vpc-id> | jq
{
  "VpcAttachment": {
    "State": "attaching",
    "VpcId": "<your-vpc-id>"
  }
}

Verify that attachment is successful

jonas@frantic-aerobics:~$ aws ec2 describe-vpn-gateways --vpn-gateway-id <your-vgw-id> | jq
{
  "VpnGateways": [
    {
      "State": "available",
      "Type": "ipsec.1",
      "VpcAttachments": [
        {
          "State": "attached",
          "VpcId": "<your-vpc-id>"
        }
      ],
      "VpnGatewayId": "<your-vgw-id>",
      "AmazonSideAsn": 65011,
      "Tags": []
    }
  ]
}
jonas@frantic-aerobics:~$

Create the CGW (register your public IP in AWS basically). I used 65010 here for the on-prem ASN but feel free to use something different as long as it is supported

jonas@frantic-aerobics:~$ curl icanhazip.com
<your-onprem-public-ip>
jonas@frantic-aerobics:~$
jonas@frantic-aerobics:~$ aws ec2 create-customer-gateway --type ipsec.1 --public-ip <your-onprem-public-ip> --bgp-asn 65010 | jq
{
  "CustomerGateway": {
    "BgpAsn": "65010",
    "CustomerGatewayId": "<your-cgw-id>",
    "IpAddress": "<your-onprem-public-ip>",
    "State": "available",
    "Type": "ipsec.1",
    "Tags": []
  }
}
jonas@frantic-aerobics:~$

Create the VPN connection

jonas@frantic-aerobics:~$ aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id <your-cgw-id> --vpn-gateway-id <your-vgw-id>
{
    "VpnConnection": {
        "CustomerGatewayConfiguration": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"<your-vpn-connection-id>\">\n  <cus
..... <shortened for brevity>
                    "OutsideIpAddress": "15.152.99.137",
                    "TunnelInsideCidr": "169.254.19.152/30",
                    "PreSharedKey": "<tunnel-1-secret-or-key>"
                }
            ]
        },
        "Routes": [],
        "Tags": []
    }
}
jonas@frantic-aerobics:~$

Download the router configuration from the AWS console. Navigate to the VPC console and select Site-to-Site VPN Connections in the left-hand menu. Pick the connection we just created and download the configuration as a text file

That’s it. The AWS side is done for now. We’ll need to add return routes from the VPC to the on-prem networks later but for now we can continue on to the Mikrotik configuration

Mikrotik configuration

Open the downloaded router configuration text file and SSH to the Mikrotik router. I use RouterOS 6.49.6 for this guide (latest at time of writing). An AWS VPN uses two tunnels. We have to configure both but will disable one of them later. Mikrotik doesn’t support dual active tunnels to AWS.

Create the IP addresses for the VPN tunnels. Search from the top of the file and look for “Customer gateway Inside Address”. The first 169.254.x.x IP will be for Tunnel 0. A second IP will be listed further down for Tunnel 1. We use a /30 subnet mask for the tunnel IPs.

Use your router’s outside interface. Mine is “sfp-sfpplus1” for this example

[admin@MikroTik] > ip address add address=169.254.88.206/30 interface=sfp-sfpplus1
[admin@MikroTik] > ip address add address=169.254.19.154/30 interface=sfp-sfpplus1
[admin@MikroTik] >
[admin@MikroTik] > ip address print
Flags: X - disabled, I - invalid, D - dynamic
 #   ADDRESS            NETWORK         INTERFACE
 0   ;;; defconf
     192.168.2.254/24   192.168.2.0     bridge
 1   10.42.0.254/24     10.42.0.0       vl420
 2   10.70.1.254/24     10.70.1.0       vl701
 3   10.70.2.254/24     10.70.2.0       vl702
 4   10.80.0.254/24     10.80.0.0       vl800
 5   10.70.3.254/24     10.70.3.0       vl703
 6 D 192.168.0.3/24     192.168.0.0     sfp-sfpplus1
 7   169.254.88.206/30  169.254.88.204  sfp-sfpplus1
 8   169.254.19.154/30  169.254.19.152  sfp-sfpplus1
[admin@MikroTik] >

Add the IPsec peers

[admin@MikroTik] > ip ipsec peer add address=15.152.91.202 local-address=192.168.0.3 name=AWS-VPN-Peer-0
[admin@MikroTik] > ip ipsec peer add address=15.152.99.137 local-address=192.168.0.3 name=AWS-VPN-Peer-1

Add the IPsec identities (secrets for the two tunnels)

[admin@MikroTik] > ip ipsec identity add peer=AWS-VPN-Peer-0 secret=<tunnel-0-secret-or-key>
[admin@MikroTik] > ip ipsec identity add peer=AWS-VPN-Peer-1 secret=<tunnel-1-secret-or-key>

Add new or update the default IPsec profile and proposal

[admin@MikroTik] > ip ipsec profile set [ find default=yes ] dh-group=modp1024 dpd-interval=10s dpd-maximum-failures=3 enc-algorithm=aes-128 lifetime=8h
[admin@MikroTik] >
[admin@MikroTik] > ip ipsec proposal set [ find default=yes ] enc-algorithm=aes-128 lifetime=1h
[admin@MikroTik] >

Update the BGP instance settings

[admin@MikroTik] > routing bgp instance set default as=65010 redistribute-connected=yes redistribute-static=yes router-id=<your-onprem-public-ip>

Add the VPN tunnel BGP Peers (one will be disabled later)

[admin@MikroTik] > routing bgp peer add hold-time=30s keepalive-time=10s name=BGP-AWS-VPN-Peer-0 remote-address=169.254.88.205 remote-as=65011
[admin@MikroTik] > routing bgp peer add hold-time=30s keepalive-time=10s name=BGP-AWS-VPN-Peer-1 remote-address=169.254.19.153 remote-as=65011
[admin@MikroTik] >

Add any networks you wish to advertise to the VPC over the VPN

[admin@MikroTik] > routing bgp network add network=192.168.2.0/24
[admin@MikroTik] > routing bgp network add network=10.70.1.0/24
[admin@MikroTik] > routing bgp network add network=10.70.2.0/24
[admin@MikroTik] > routing bgp network add network=10.70.3.0/24
[admin@MikroTik] >

Set the firewall rules. One for the VPN tunnel CIDR range and one for the VPC CIDR (172.31.0.0/16 in this example)

[admin@MikroTik] > ip firewall nat add action=accept chain=srcnat dst-address=169.254.0.0/16
[admin@MikroTik] > ip firewall nat add action=accept chain=srcnat dst-address=172.31.0.0/16

View the NAT rules

[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
 0    chain=srcnat action=masquerade out-interface-list=WAN

 1    chain=srcnat action=accept dst-address=169.254.0.0/16

 2    chain=srcnat action=accept dst-address=172.31.0.0/16
[admin@MikroTik] >

This won’t do. The WAN rule needs to come last. Change the order using the “move” command

[admin@MikroTik] > ip firewall nat move 1 0
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
 0    chain=srcnat action=accept dst-address=169.254.0.0/16

 1    chain=srcnat action=masquerade out-interface-list=WAN

 2    chain=srcnat action=accept dst-address=172.31.0.0/16
[admin@MikroTik] > ip firewall nat move 2 1
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
 0    chain=srcnat action=accept dst-address=169.254.0.0/16

 1    chain=srcnat action=accept dst-address=172.31.0.0/16

 2    chain=srcnat action=masquerade out-interface-list=WAN
[admin@MikroTik] >

Create IPsec policies for the two VPN tunnels

[admin@MikroTik] > ip ipsec policy add dst-address=169.254.88.205 src-address=169.254.88.206 proposal=default peer=AWS-VPN-Peer-0 tunnel=yes
[admin@MikroTik] > ip ipsec policy add dst-address=169.254.19.153 src-address=169.254.19.154 proposal=default peer=AWS-VPN-Peer-1 tunnel=yes

Now the tunnel status should have changed to up. Verify from the AWS CLI

jonas@frantic-aerobics:~$ aws ec2 describe-vpn-connections | jq
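
If you prefer checking from Python rather than scanning the raw jq output, a small boto3 sketch like the one below prints the telemetry status per tunnel (UP / DOWN). This is just an illustration, not one of the guide’s original steps.

import boto3

ec2 = boto3.client("ec2")
for conn in ec2.describe_vpn_connections()["VpnConnections"]:
    for tunnel in conn["VgwTelemetry"]:
        # Each tunnel reports its status together with its outside IP address
        print(conn["VpnConnectionId"], tunnel["OutsideIpAddress"], tunnel["Status"])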

Disable one of the tunnels

[admin@MikroTik] > routing bgp peer print
Flags: X - disabled, E - established
 #  INSTANCE     REMOTE-ADDRESS     REMOTE-AS
 0 E default     169.254.88.205     65011
 1 E default     169.254.19.153     65011
[admin@MikroTik] >
[admin@MikroTik] > routing bgp peer disable numbers=1
[admin@MikroTik] >
[admin@MikroTik] > routing bgp peer print
Flags: X - disabled, E - established
 #  INSTANCE     REMOTE-ADDRESS     REMOTE-AS
 0 E default     169.254.88.205     65011
 1 X default     169.254.19.153     65011
[admin@MikroTik] >

Add the final IPsec policy for the VPC network CIDR. Be sure to pick the tunnel Peer (0 or 1) which is still up.

[admin@MikroTik] > ip ipsec policy add dst-address=172.31.0.0/16 src-address=0.0.0.0/0 proposal=default peer=AWS-VPN-Peer-0 tunnel=yes
[admin@MikroTik] >

That’s it. Good job. The Mikrotik is now fully configured. All that is left is to add a return route to the on-premises networks from the VPC

Access the routing table for your VPC subnet and add return routes pointing to your VGW
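
This can be done with route propagation (the BGP-learned routes then show up automatically) or with static routes per on-prem network. The boto3 sketch below shows both options; the route table ID is a placeholder and the CIDRs match the networks advertised earlier, so adjust to your environment.

import boto3

ec2 = boto3.client("ec2")

ROUTE_TABLE_ID = "<your-vpc-route-table-id>"  # route table used by the VPC subnets
VGW_ID = "<your-vgw-id>"

# Option 1: let the VGW propagate the BGP-learned on-prem routes into the route table
ec2.enable_vgw_route_propagation(GatewayId=VGW_ID, RouteTableId=ROUTE_TABLE_ID)

# Option 2: add static return routes for each advertised on-prem network instead
for cidr in ["192.168.2.0/24", "10.70.1.0/24", "10.70.2.0/24", "10.70.3.0/24"]:
    ec2.create_route(RouteTableId=ROUTE_TABLE_ID, DestinationCidrBlock=cidr, GatewayId=VGW_ID)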

Configuration complete. Time to test with a ping (be sure the security groups for your EC2 instances have the correct ports open, of course)

All works perfectly fine. Enjoy your new VPN!

VMware home lab: 6 months with the new setup

In spring of 2021 I wanted a proper VMware lab setup at home. The primary reason was, and still is, having an environment in which to learn and experiment with the latest VMware and AWS solutions. I strongly believe that actual hands-on experience is the gateway to real knowledge, no matter how well the documentation may be written.

To that end I went about listing what would be needed to make this dream of a home lab come true. The lack of space meant that the setup would end up in my bedroom and therefore needed to be quiet. That removed most 2nd-hand enterprise servers from the list, with the possible exception of the Dell VRTX chassis, which I would still REALLY want for a home lab, but it’s way too expensive, even 2nd hand.

Requirements:

  • As compatible with the VMware HCL as possible (as-is or via Flings)
  • Quiet (no enterprise servers)
  • Energy efficient
  • Not too big (another nail in the coffin for full-depth 19″ servers)
  • Reasonable performance
  • Ability to run vSAN
  • 10Gbps networking

Server hardware

Initially I considered the Intel NUCs and Skull / Ghost Canyon mini-PCs as these are very popular among home-lab enthusiasts. However, the 10Gbps requirement necessitated a PCIe slot and the Intel models that offer one are very expensive.

The SuperMicro E300-9D was also on the list but they too tend to get expensive and a bit hard to get on short notice where I live.

Therefore, going with a custom build sounded more and more in line with what would work for this setup. In the end I settled on the parts below. The list contains all the parts used for the ESXi nodes, minus the network cards, which are listed separately in the networking section below.

Part          Brand / model                               Cost (JPY)
Mobo          ASRock Intel H410M-ITX/ac (I219V NIC)       12,980
CPU           Intel Core i5 10400 BOX (6c w. graphics)    20,290
RAM           TEAM DDR4 2666MHz PC4-21300 (2×32GB)        33,780
M.2 cache     WD Black 500GB SSD M.2-2280 SN750           9,580
2.5″ drive    SanDisk 2.5″ SSD Ultra 3D 1TB               13,110
PSU           Thermaltake Smart 500W -STANDARD            4,756
Case          Cooler Master H100 Mini Tower               7,023
Total                                                     101,519

Mainboard and case

The choice of mainboard came down to the onboard network chipset: the ESXi installer has to find a supported NIC or the installation won’t proceed, and initially I only had the onboard NIC and no 10Gbps cards. Unfortunately the release of vSphere 7.x restricted hardware support significantly. I had planned to do an AMD build this time, but most AMD mainboards come with Realtek onboard NICs, which are no longer recognized by the ESXi installer. Another consideration was size and expansion options. An ITX form factor meant that the size of the PC case could be reduced while still having a PCIe slot for a 10Gbps NIC.

The Cooler Master H100 case has a single big fan which makes it pretty quiet. Its small size also makes it an ideal case for this small-footprint lab environment. It even comes with LEDs in the fan which are hooked up to the reset button on the case to switch between colors (or to turn it off completely).

CPU

Due to the onboard NIC requirement the build was restricted to an Intel CPU. Gen 11 had been released but Gen 10 CPUs were still perfectly fine and could be had for less money. Obviously, there was no plan to add a discrete GPU, so the CPU also had to come with built-in graphics. The Core i5 10400 seemed to meet all criteria while having a good cost / performance balance.

Memory

The little ASRock H410M-ITX/ac mainboard supports up to 64GB of RAM and I filled it up from the start. One can never have too much RAM. With three nodes we get a total of 192GB, which will be sufficient for most tasks. Likely there will come a day when a single workload (looking at you NSX-T!!) requires more. This is the only area which I feel could become a limitation soon. For that day I’ll likely have to add a box with more memory specifically to cover that workload.

Storage

A vSAN environment was one of the goals for the lab, and with an NVMe PCIe SSD as the cache tier and a 2.5″ drive as the capacity tier this was accomplished. It was a bit scary ordering these parts without knowing whether they would be recognized in vCenter as usable for vSAN, but in the end there was no issue at all. They were all recognized immediately and could be assigned to the vSAN storage pool.

For the actual ESXi install I was initially going to use a USB disk, but ended up re-using some old 2.5″ and 3.5″ spinning-rust drives for the hypervisor install. These are not part of the cost calculation above as I just used whatever was lying around at home. The cost of these is negligible though.

Performance of the vSAN cluster isn’t too bad for using consumer hardware 🙂

Network hardware

To ensure vSAN performance and to support the 10Gbps internet router uplink, a 10Gbps managed switch was required. Copper 10Gbps ports get very expensive, so SFP+ was the way to go. Mikrotik has a good 8+1 port switch / router in their CRS309-1G-8S+IN model. In the end this was a good fit for the home lab because not only does it have 8x 10Gbps SFP+ ports, it is also fanless and the software supports several advanced features, like BGP.

I’m still happy with the choice 6 months later. It’s a great switch but it took a while to get used to it. Most of us probably come from a Cisco or Juniper background. The configuration for the Mikrotik is completely different and won’t be intuitive for the majority of users.

CRS309-1G-8S+IN

On the server side I wanted something which would be guaranteed to work with ESXi, so a 10Gbps card which is on the HCL was a must. Intel has a lot of cards on the list and their X520 series can be found pretty easily. In the end I got three X520-DP2 (dual port) cards and they have worked perfectly so far.

There is also a 1Gbps managed Dell x1026p switch to allow for additional networking options with NSX-T. With the Mikrotik 10Gbps switch in place, the Dell switch is more of an addition for corner cases. It does help when attaching other devices which don’t support 10Gbps though.

The Mikrotik has a permanent VPN connection to an AWS Transit Gateway and from there to various VPCs and sometimes the odd VMware Cloud on AWS SDDC.

Installation media etc.

These servers still require custom installation media to be created for the installation to work, primarily for the onboard Intel networking and the USB network Fling. An explanation of how to create custom media can be found here.

vCenter is hosted on an NFS share from a separate server. This is done so it can sit on shared storage for the cluster while remaining separate from the vSAN datastore while the environment is being built.

ESXi is installed over PXE to allow for fully automated installations.

Conclusion

That’s it – a fully functional VMware lab. Quiet and with reasonably high performance. Also, RGB LEDs add at least 20% extra performance – a bit like red paint on a sports car 😉