A recently published blog post covers the pros and cons of migrating virtual machines using SnapMirror, along with some ideas on how to use a similar methodology for disaster recovery purposes.
Access the blog on the official AWS web page here:

Note: This blog post is part of the 2022 edition of the vExpert Japan Advent Calendar series for the 9th of December.
Migration from an on-premises environment to VMware Cloud on AWS can be done in a variety of ways. The most commonly used (and also recommended) method is VMware HCX (Hybrid Cloud Extension). However, if VMs are stored on a NetApp ONTAP appliance in the on-prem environment, the volume the VMs reside on can easily be copied to the cloud using SnapMirror. Once copied, the volume can be mounted to VMware Cloud on AWS and the VMs imported. This can be a useful migration method, provided some downtime is acceptable.
Tip: If you are just testing things out, NetApp offers a downloadable virtual ONTAP appliance which can be deployed with all features enabled for 60 days.
The peering relationship between the on-prem NetApp ONTAP array and FSxN requires private connectivity. The diagram shows Direct Connect, but a VPN terminating at the TGW (Transit Gateway) can also be used.
This video shows all the steps outlined previously, with the exception of creating the FSxN file system – although that is a very simple process and hardly worth covering in detail anyway.
Open SSH sessions to both the on-premises ONTAP array and FSxN. The FSxN username will be “fsxadmin”. If not known, the password can be (re)set through the “Actions” menu under “Update file system” after selecting the FSxN file system in the AWS Console.
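For example, to reach FSxN (the file system ID and region below are placeholders – the actual management endpoint DNS name is shown on the file system's detail page in the AWS Console):
ssh fsxadmin@management.fs-xxxxxxxxxxxxxxxxx.fsx.<region>.amazonaws.com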
Step 1: [FSxN] Create the file system in AWS
The steps for this are straightforward and already covered in detail here: https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started-step1.html
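For reference, the file system can also be created via the AWS CLI. A minimal sketch, assuming a Multi-AZ deployment – the subnet IDs are placeholders, and 1024 GiB is the minimum storage capacity for FSxN:
aws fsx create-file-system \
    --file-system-type ONTAP \
    --storage-capacity 1024 \
    --subnet-ids <subnet-a> <subnet-b> \
    --ontap-configuration "DeploymentType=MULTI_AZ_1,ThroughputCapacity=128,PreferredSubnetId=<subnet-a>"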
Step 2: [FSxN] Create the target volume
Note that the volume is listed as “DP” for Data Protection. This is required for SnapMirror.
FsxId0e4a2ca9c02326f50::> vol create -vserver svm-fsxn-multi-az-2 -volume snapmirrorDest -aggregate aggr1 -size 200g -type DP -tiering-policy all
[Job 1097] Job succeeded: Successful
FsxId0e4a2ca9c02326f50::>
FsxId0e4a2ca9c02326f50::>
FsxId0e4a2ca9c02326f50::> vol show
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm-fsxn-multi-az-2
onprem_vm_volume_clone
aggr1 online RW 40GB 36.64GB 3%
svm-fsxn-multi-az-2
snapmirrorDest
aggr1 online DP 200GB 200.0GB 0%
svm-fsxn-multi-az-2
svm_fsxn_multi_az_2_root
aggr1 online RW 1GB 972.1MB 0%
8 entries were displayed.
FsxId0e4a2ca9c02326f50::>
FsxId0e4a2ca9c02326f50::>
Step 3a: [On-prem] Create the cluster peering relationship
Get the intercluster IP addresses from the on-prem environment
JWR-ONTAP::> network interface show -role intercluster
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
JWR-ONTAP
Intercluster-IF-1
up/up 10.70.1.121/24 JWR-ONTAP-01 e0a true
Intercluster-IF-2
up/up 10.70.1.122/24 JWR-ONTAP-01 e0b true
2 entries were displayed.
Step 3b: [FSxN] Create the cluster peering relationship
FsxId0e4a2ca9c02326f50::> cluster peer create -address-family ipv4 -peer-addrs 10.70.1.121, 10.70.1.122
Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of characters that would be hard to guess.
Enter the passphrase:
Confirm the passphrase:
Notice: Now use the same passphrase in the "cluster peer create" command in the other cluster.
FsxId0e4a2ca9c02326f50::> cluster peer show
Peer Cluster Name Cluster Serial Number Availability Authentication
------------------------- --------------------- -------------- --------------
JWR-ONTAP 1-80-000011 Available ok
Step 3c: [FSxN] Create the cluster peering relationship
Get the intercluster IP addresses from the FSxN environment
FsxId0e4a2ca9c02326f50::> network interface show -role intercluster
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId0e4a2ca9c02326f50
inter_1 up/up 172.16.0.163/24 FsxId0e4a2ca9c02326f50-01
e0e true
inter_2 up/up 172.16.1.169/24 FsxId0e4a2ca9c02326f50-02
e0e true
2 entries were displayed.
Step 3d: [On-prem] Create the cluster peering relationship
Use the same passphrase as when using the cluster peer create command on the FSxN side in Step 3b
JWR-ONTAP::> cluster peer create -address-family ipv4 -peer-addrs 172.16.0.163, 172.16.1.169
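When prompted, enter the same passphrase as in Step 3b. The peering can then be verified from either side – both clusters should show as Available, as in the Step 3b output:
JWR-ONTAP::> cluster peer show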
Step 4a: [FSxN] Create the Storage VM (SVM) peering relationship
FsxId0e4a2ca9c02326f50::> vserver peer create -vserver svm-fsxn-multi-az-2 -peer-vserver svm0 -peer-cluster JWR-ONTAP -applications snapmirror -local-name onprem
Info: [Job 145] 'vserver peer create' job queued
Step 4b: [On-prem] Create the Storage VM (SVM) peering relationship
After the peer accept command completes, verify the relationship using “vserver peer show-all”.
JWR-ONTAP::> vserver peer accept -vserver svm0 -peer-vserver svm-fsxn-multi-az-2 -local-name fsxn-peer
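For example, from the on-prem side (the FSxN side works equally well) – the peer state should show as “peered” before moving on:
JWR-ONTAP::> vserver peer show-all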
Step 5a: [FSxN] Create the SnapMirror relationship
FsxId0e4a2ca9c02326f50::> snapmirror create -source-path onprem:vmware -destination-path svm-fsxn-multi-az-2:snapmirrorDest -vserver svm-fsxn-multi-az-2 -throttle unlimited
Operation succeeded: snapmirror create for the relationship with destination "svm-fsxn-multi-az-2:snapmirrorDest".
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Uninitialized
Idle - true -
Step 5b: [FSxN] Initialize the SnapMirror relationship
This will start the data copy from on-prem to AWS
FsxId0e4a2ca9c02326f50::> snapmirror initialize -destination-path svm-fsxn-multi-az-2:snapmirrorDest -source-path onprem:vmware
Operation is queued: snapmirror initialize of destination "svm-fsxn-multi-az-2:snapmirrorDest".
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Uninitialized
Transferring 0B true 09/20 08:55:05
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Snapmirrored
Finalizing 0B true 09/20 08:58:46
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Snapmirrored
Idle - true -
FsxId0e4a2ca9c02326f50::>
Step 6: [FSxN] Break the mirror
FsxId0e4a2ca9c02326f50::> snapmirror break -destination-path svm-fsxn-multi-az-2:snapmirrorDest
Operation succeeded: snapmirror break for destination "svm-fsxn-multi-az-2:snapmirrorDest".
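Optionally, verify that the relationship is now in a Broken-off state, which makes the destination volume writable:
FsxId0e4a2ca9c02326f50::> snapmirror show -destination-path svm-fsxn-multi-az-2:snapmirrorDest -fields state,status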
Step 7: [FSxN] Add an NFS mount point for the FSxN volume
FsxId0e4a2ca9c02326f50::> volume mount -volume snapmirrorDest -junction-path /fsxn-snapmirror-volume
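If the SVM's export policy does not already allow the ESXi hosts to mount the volume, add a rule along these lines (the policy name and client CIDR are placeholders – adjust for your environment):
FsxId0e4a2ca9c02326f50::> vserver export-policy rule create -vserver svm-fsxn-multi-az-2 -policyname default -clientmatch <vmc-esxi-cidr> -rorule sys -rwrule sys -superuser sys -protocol nfs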
Step 8: [VMC] Mount the FSxN volume in VMware Cloud on AWS
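There is no ONTAP-side command for this step: NFS datastores are attached through the VMC Console (select the SDDC, open the Storage tab, and attach a new datastore using the SVM's NFS LIF IP and the junction path /fsxn-snapmirror-volume). The exact workflow may vary slightly between console versions.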
Step 9: [VMC] Import the VMs into vCenter in VMware Cloud on AWS
This can be done manually as per the screenshot below, or automated with a script
Importing using a Python script (initial release – may have rough edges): https://github.com/jonas-werner/vmware-vm-import-from-datastore/blob/main/registerVm.py
A video on how to use the script can be found here:
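If you prefer not to use the script, individual VMX files can also be registered with PowerCLI. A minimal sketch (the datastore path and resource pool are placeholders; Compute-ResourcePool is the usual workload pool in VMware Cloud on AWS):
New-VM -VMFilePath "[fsxn-snapmirror-volume] vm01/vm01.vmx" -ResourcePool "Compute-ResourcePool"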
Step 10: [VMC] Configure the VM network prior to powering on
That’s all there is to migrating VMs using SnapMirror between on-prem VMware and VMware Cloud on AWS environments. Hopefully this has been useful. Thank you for reading!
Quick (?) steps for connecting a Mikrotik router in an on-premises lab or DC to an AWS VPC using a VPN. All commands are run via the AWS CLI and the Mikrotik CLI.
Note: The values for tunnel IP addresses and secrets etc. can be found in your VPN configuration file (downloaded later). Please don’t use the ones in this guide or an IT fairy will jump to her death from a VAX system in some remote DC. The values used here are already invalid as the resources have been deleted by the time of writing. Do think of the fairies though.
In this case the Mikrotik is not directly attached to the internet. It goes via an ISP router. If your setup is the same, please configure port forwarding for ESP, UDP port 500 and UDP port 4500 from the ISP public interface to the Mikrotik router as per the diagram.
If the Mikrotik is directly attached to the internet please open the firewall ports accordingly for ESP and UDP 500 / 4500.
Create the VGW (Virtual Private Gateway, called vpn-gateway on the CLI). I used 65011 here for the AWS-side ASN, but feel free to use something different as long as it is supported.
jonas@frantic-aerobics:~$ aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn 65011 | jq
{
"VpnGateway": {
"State": "available",
"Type": "ipsec.1",
"VpcAttachments": [],
"VpnGatewayId": "<your-vgw-id>",
"AmazonSideAsn": 65011
}
}
jonas@frantic-aerobics:~$
Verify the ID of the AWS VPC you want to connect to
jonas@frantic-aerobics:~$ aws ec2 describe-vpcs | jq
{
"Vpcs": [
{
"CidrBlock": "172.31.0.0/16",
"DhcpOptionsId": "dopt-d9bcfeb0",
"State": "available",
"VpcId": "<your-vpc-id>",
"OwnerId": "111222333444555",
"InstanceTenancy": "default",
"CidrBlockAssociationSet": [
{
"AssociationId": "vpc-cidr-assoc-fdf9af94",
"CidrBlock": "172.31.0.0/16",
"CidrBlockState": {
"State": "associated"
}
}
],
"IsDefault": true
}
]
}
jonas@frantic-aerobics:~$
Attach VGW to VPC
jonas@frantic-aerobics:~$ aws ec2 attach-vpn-gateway --vpn-gateway-id <your-vgw-id> --vpc-id <your-vpc-id> | jq
{
"VpcAttachment": {
"State": "attaching",
"VpcId": "<your-vpc-id>"
}
}
Verify that attachment is successful
jonas@frantic-aerobics:~$ aws ec2 describe-vpn-gateways --vpn-gateway-id <your-vgw-id> | jq
{
"VpnGateways": [
{
"State": "available",
"Type": "ipsec.1",
"VpcAttachments": [
{
"State": "attached",
"VpcId": "<your-vpc-id>"
}
],
"VpnGatewayId": "<your-vgw-id>",
"AmazonSideAsn": 65011,
"Tags": []
}
]
}
jonas@frantic-aerobics:~$
Create the CGW (this basically registers your on-prem public IP in AWS). I used 65010 here for the on-prem ASN, but feel free to use something different as long as it is supported.
jonas@frantic-aerobics:~$ curl icanhazip.com
<your-onprem-public-ip>
jonas@frantic-aerobics:~$
jonas@frantic-aerobics:~$ aws ec2 create-customer-gateway --type ipsec.1 --public-ip <your-onprem-public-ip> --bgp-asn 65010 | jq
{
"CustomerGateway": {
"BgpAsn": "65010",
"CustomerGatewayId": "<your-cgw-id>",
"IpAddress": "<your-onprem-public-ip>",
"State": "available",
"Type": "ipsec.1",
"Tags": []
}
}
jonas@frantic-aerobics:~$
Create the VPN connection
jonas@frantic-aerobics:~$ aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id <your-cgw-id> --vpn-gateway-id <your-vgw-id>
{
"VpnConnection": {
"CustomerGatewayConfiguration": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"<your-vpn-connection-id>\">\n <cus
..... <shortened for brevity>
"OutsideIpAddress": "15.152.99.137",
"TunnelInsideCidr": "169.254.19.152/30",
"PreSharedKey": "<tunnel-1-secret-or-key>"
}
]
},
"Routes": [],
"Tags": []
}
}
jonas@frantic-aerobics:~$
Download the router configuration from the AWS console. Navigate to VPC, select Site-to-Site VPN Connections in the left-hand menu, pick the connection we just created, and download the config as a text file.
That’s it. The AWS side is done for now. We’ll need to add return routes from the VPC to the on-prem networks later but for now we can continue on to the Mikrotik configuration
Open the downloaded router configuration text file and SSH to the Mikrotik router. I use RouterOS 6.49.6 for this guide (the latest at the time of writing). An AWS VPN uses two tunnels. We have to configure both, but will disable one of them later since Mikrotik doesn't support dual active tunnels to AWS.
Create the IP addresses for the VPN tunnels. Search from the top of the file and look for “Customer gateway Inside Address”. The first 169.254.x.x IP will be for Tunnel 0. A second IP will be listed further down for Tunnel 1. We use a /30 subnet mask for the tunnel IPs.
Use your router's outside interface. Mine is “sfp-sfpplus1” for this example.
[admin@MikroTik] > ip address add address=169.254.88.206/30 interface=sfp-sfpplus1
[admin@MikroTik] > ip address add address=169.254.19.154/30 interface=sfp-sfpplus1
[admin@MikroTik] >
[admin@MikroTik] > ip address print
Flags: X - disabled, I - invalid, D - dynamic
# ADDRESS NETWORK INTERFACE
0 ;;; defconf
192.168.2.254/24 192.168.2.0 bridge
1 10.42.0.254/24 10.42.0.0 vl420
2 10.70.1.254/24 10.70.1.0 vl701
3 10.70.2.254/24 10.70.2.0 vl702
4 10.80.0.254/24 10.80.0.0 vl800
5 10.70.3.254/24 10.70.3.0 vl703
6 D 192.168.0.3/24 192.168.0.0 sfp-sfpplus1
7 169.254.88.206/30 169.254.88.204 sfp-sfpplus1
8 169.254.19.154/30 169.254.19.152 sfp-sfpplus1
[admin@MikroTik] >
Add the IPsec peers
[admin@MikroTik] > ip ipsec peer add address=15.152.91.202 local-address=192.168.0.3 name=AWS-VPN-Peer-0
[admin@MikroTik] > ip ipsec peer add address=15.152.99.137 local-address=192.168.0.3 name=AWS-VPN-Peer-1
Add the IPsec identities (secrets for the two tunnels)
[admin@MikroTik] > ip ipsec identity add peer=AWS-VPN-Peer-0 secret=<tunnel-0-secret-or-key>
[admin@MikroTik] > ip ipsec identity add peer=AWS-VPN-Peer-1 secret=<tunnel-1-secret-or-key>
Update the default IPsec profile and proposal (or add new ones)
[admin@MikroTik] > ip ipsec profile set [ find default=yes ] dh-group=modp1024 dpd-interval=10s dpd-maximum-failures=3 enc-algorithm=aes-128 lifetime=8h
[admin@MikroTik] >
[admin@MikroTik] > ip ipsec proposal set [ find default=yes ] enc-algorithm=aes-128 lifetime=1h
[admin@MikroTik] >
Update the BGP instance settings
[admin@MikroTik] > routing bgp instance set default as=65010 redistribute-connected=yes redistribute-static=yes router-id=<your-onprem-public-ip>
Add the VPN tunnel BGP Peers (one will be disabled later)
[admin@MikroTik] > routing bgp peer add hold-time=30s keepalive-time=10s name=BGP-AWS-VPN-Peer-0 remote-address=169.254.88.205 remote-as=65011
[admin@MikroTik] > routing bgp peer add hold-time=30s keepalive-time=10s name=BGP-AWS-VPN-Peer-1 remote-address=169.254.19.153 remote-as=65011
[admin@MikroTik] >
Add any networks you wish to advertise to the VPC over the VPN
[admin@MikroTik] > routing bgp network add network=192.168.2.0/24
[admin@MikroTik] > routing bgp network add network=10.70.1.0/24
[admin@MikroTik] > routing bgp network add network=10.70.2.0/24
[admin@MikroTik] > routing bgp network add network=10.70.3.0/24
[admin@MikroTik] >
Set the firewall rules. One for the VPN tunnel CIDR range and one for the VPC CIDR (172.31.0.0/16 in this example)
[admin@MikroTik] > ip firewall nat add action=accept chain=srcnat dst-address=169.254.0.0/16
[admin@MikroTik] > ip firewall nat add action=accept chain=srcnat dst-address=172.31.0.0/16
View the NAT rules
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
0 chain=srcnat action=masquerade out-interface-list=WAN
1 chain=srcnat action=accept dst-address=169.254.0.0/16
2 chain=srcnat action=accept dst-address=172.31.0.0/16
[admin@MikroTik] >
This won’t do. The WAN rule needs to come last. Change the order using the “move” command
[admin@MikroTik] > ip firewall nat move 1 0
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
0 chain=srcnat action=accept dst-address=169.254.0.0/16
1 chain=srcnat action=masquerade out-interface-list=WAN
2 chain=srcnat action=accept dst-address=172.31.0.0/16
[admin@MikroTik] > ip firewall nat move 2 1
[admin@MikroTik] > ip firewall nat print
Flags: X - disabled, I - invalid, D - dynamic
0 chain=srcnat action=accept dst-address=169.254.0.0/16
1 chain=srcnat action=accept dst-address=172.31.0.0/16
2 chain=srcnat action=masquerade out-interface-list=WAN
[admin@MikroTik] >
Create IPsec policies for the two VPN tunnels
[admin@MikroTik] > ip ipsec policy add dst-address=169.254.88.205 src-address=169.254.88.206 proposal=default peer=AWS-VPN-Peer-0 tunnel=yes
[admin@MikroTik] > ip ipsec policy add dst-address=169.254.19.153 src-address=169.254.19.154 proposal=default peer=AWS-VPN-Peer-1 tunnel=yes
Now the tunnel status should have changed to up. Verify from the AWS CLI
jonas@frantic-aerobics:~$ aws ec2 describe-vpn-connections | jq
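For example, to show just the tunnel telemetry (the connection ID is a placeholder):
jonas@frantic-aerobics:~$ aws ec2 describe-vpn-connections --vpn-connection-ids <your-vpn-connection-id> | jq '.VpnConnections[].VgwTelemetry[] | {OutsideIpAddress, Status}'
Both tunnels should report a Status of UP once the IPsec policies are in place.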
Disable one of the tunnels
[admin@MikroTik] > routing bgp peer print
Flags: X - disabled, E - established
# INSTANCE REMOTE-ADDRESS REMOTE-AS
0 E default 169.254.88.205 65011
1 E default 169.254.19.153 65011
[admin@MikroTik] >
[admin@MikroTik] > routing bgp peer disable numbers=1
[admin@MikroTik] >
[admin@MikroTik] > routing bgp peer print
Flags: X - disabled, E - established
# INSTANCE REMOTE-ADDRESS REMOTE-AS
0 E default 169.254.88.205 65011
1 X default 169.254.19.153 65011
[admin@MikroTik] >
Add the final IPsec policy for the VPC network CIDR. Be sure to pick the tunnel peer (0 or 1) that is still up.
[admin@MikroTik] > ip ipsec policy add dst-address=172.31.0.0/16 src-address=0.0.0.0/0 proposal=default peer=AWS-VPN-Peer-0 tunnel=yes
[admin@MikroTik] >
That’s it. Good job. The Mikrotik is now fully configured. All that is left is to add return routes from the VPC to the on-premises networks.
Access the route table for your VPC subnet and add return routes pointing to your VGW
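Since BGP is running over the tunnel, the simplest option is to enable route propagation on the route table so the advertised on-prem prefixes appear automatically; static routes work too. Both shown below with placeholder IDs:
jonas@frantic-aerobics:~$ aws ec2 enable-vgw-route-propagation --route-table-id <your-rtb-id> --gateway-id <your-vgw-id>
jonas@frantic-aerobics:~$ aws ec2 create-route --route-table-id <your-rtb-id> --destination-cidr-block 192.168.2.0/24 --gateway-id <your-vgw-id>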
Configuration complete. Time to test with a ping (be sure the security groups for your EC2 instances have the correct ports open, of course)
All works perfectly fine. Enjoy your new VPN!
With CentOS being less attractive to use now that Red Hat has changed how it is updated, the Amazon Linux 2 distribution can be an excellent alternative.
However, when deploying Amazon Linux 2 on vSphere for the first time there are a few hoops to jump through. This video shows how to create a golden image and deploy it with Terraform in less than 15 minutes.