In this video we cover the steps for migrating from VMC to NC2 over a VMware Transit Connect (VTGW) and a Transit Gateway (TGW). We also briefly cover the routing through these entities, as well as the actual VM migration using Nutanix Move.
Migrating VMs from VMware Cloud on AWS (VMC) to Nutanix Cloud Clusters on AWS (NC2)
Summary
In this blog post we explore two ways to use Nutanix Move 5.3 to migrate Virtual Machines from an existing VMware Cloud on AWS (VMC) environment to Nutanix Cloud Clusters on AWS (NC2). This is done while preserving both IP and MAC addresses of the VMs being migrated.
The most straightforward method is to deploy NC2 into the Connected VPC. This is a customer-owned VPC which is attached at the time of deployment of the VMware Cloud on AWS environment. Alternatively, we can deploy NC2 into a completely separate VPC and connect to the VMware Cloud on AWS cluster through a VMware Transit Connect (VTGW).
Architecture overview
The two methods are illustrated below. Method 1 is recommended due to the ease of setup, simple networking and no data transfer charges. However, care needs to be taken to ensure there is no overlap with any existing resources deployed into the Connected VPC, for example by creating new private subnets in the Connected VPC specifically for the NC2 deployment.
Method 2 covers migrating via a VMware Transit Connect (VTGW). Although it has additional routing considerations, this is also a fully viable option. In this example we peer the VTGW with a normal customer-controlled AWS Transit Gateway (TGW). Note that with Method 2 the VTGW can also connect directly to a VPC without the need for a TGW, but this will limit the routing options for the customer.
It’s important to keep in mind that both options can migrate VMs from VMC without changing IP or MAC addresses. Neither option requires L2 extension of user VM networks. This underlines the ease with which a migration like this can be done. There are of course some caveats. Refer to the VM networking section below for more detail.
VM networking
The whole migration can be done without L2 extension of VM networks. On the VMware Cloud on AWS side, virtual machines are connected to overlay networks created with NSX-T through the VMC management console. These are represented by the “10.3.0.0/24” network in this example. The same CIDR ranges can be created as overlay networks using Nutanix Flow on the NC2 cluster. As a result, when VMs are migrated from VMC to NC2, they don’t need to change their IP or MAC addresses.
Note that if L2 Extension is not used, there is no communication between the overlay networks in NC2 and the overlay networks in VMC. Therefore, plan the migration so that VMs which need to communicate are moved together.
Also note that Flow does not advertise the overlay networks into the VPC route table. As long as VMware Cloud on AWS is attached to the connected VPC, the routes for the VMs will point to the active ENI created during the VMC cluster deployment. Destroying the VMC cluster will remove these ENIs and the corresponding routes from the Connected VPC route table.
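For reference, the routes that VMC injects into the Connected VPC route table can be checked programmatically. Below is a minimal boto3 sketch; the region and VPC ID are placeholders and need to be replaced with your own values.

```python
# Minimal sketch: list the routes in the Connected VPC's main route table and show
# which ones point at the active VMC-managed ENI. Region and VPC ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

route_tables = ec2.describe_route_tables(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},  # hypothetical Connected VPC ID
        {"Name": "association.main", "Values": ["true"]},
    ]
)["RouteTables"]

for table in route_tables:
    for route in table["Routes"]:
        # Routes injected by VMware Cloud on AWS point at the active cross-account ENI
        if route.get("NetworkInterfaceId"):
            print(route.get("DestinationCidrBlock"), "->", route["NetworkInterfaceId"])
```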
Migration tool: Move
With the recent 5.3 release, the Nutanix Move migration tool has added support for migrations from VMware Cloud on AWS. In this example, Move is deployed on the NC2 cluster. Both the VMC and NC2 environments have been registered with Move, and the inventories of both show up and are available for migration. More details can be found in the Move deployment section below.
Method 1: Deploy NC2 into the Connected VPC
If there is enough space to deploy NC2 into the already existing Connected VPC in the customer account, this is the easiest and most straightforward option. Connectivity and routing between the Connected VPC and the VMware Cloud on AWS environment are already configured as part of the VMC deployment. Do make sure that the CIDR ranges of any existing subnets are sufficient for the NC2 deployment and that there aren’t already resources deployed into those subnets which could interfere with the NC2 components. If the VPC CIDR range has space for new subnets, consider creating new private subnets to hold the NC2 deployment.
- Benefits
  - No need to create new VPC and subnets
  - VPC is already connected to VMC and routing is configured
  - Data transfer is free of charge
  - High link / data transfer speed
- Drawbacks
  - VPC may already be fully populated with resources
  - VPC may not have the correct CIDR ranges for NC2
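Before committing to Method 1, it can be worth checking whether the Connected VPC has subnets with enough free IP space for NC2. A minimal boto3 sketch (the VPC ID is a placeholder):

```python
# Minimal sketch: list subnets in the Connected VPC with their CIDR ranges and the
# number of free IP addresses. The VPC ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]
)["Subnets"]

for subnet in subnets:
    print(subnet["SubnetId"], subnet["CidrBlock"],
          "free IPs:", subnet["AvailableIpAddressCount"])
```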
Method 1: Steps to deploy
Simply sign up for NC2 or start a trial and deploy through the NC2 deployment wizard. Select the latest version (6.8) to get Flow overlay networking and centralized management through Prism Central included.
Note that NC2 can only be deployed into private AWS subnets and that internet connectivity needs to be present. If no direct internet connectivity is available, a proxy can be configured through the deployment wizard.
VMware Cloud on AWS automatically updates the default route table in the Connected VPC with the routes to vCenter, ESXi and the user networks. However, if NC2 is deployed into a subnet which doesn’t use the default route table, those routes won’t be present. Ensure the route table of the subnet NC2 is deployed into is updated with the routes to the VMware Cloud on AWS environment, particularly the management subnet which holds vCenter and the ESXi hosts. Also, if necessary, update the security group on the active VMC ENI to allow access from the NC2 subnet.
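As an illustration, adding such a route with boto3 could look like the sketch below. The route table ID, CIDR and ENI ID are placeholders; take the real ENI ID from the routes VMC already added to the default route table.

```python
# Minimal sketch: point the VMC management network at the active VMC ENI in the
# route table used by the NC2 subnet. All IDs and the CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",        # route table of the NC2 subnet
    DestinationCidrBlock="10.2.0.0/16",          # VMC management CIDR in this example
    NetworkInterfaceId="eni-0123456789abcdef0",  # active VMC cross-account ENI
)
```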
After the NC2 cluster is deployed, follow the steps further down in this article to open the VMC firewall for vCenter and ESXi, deploy Move 5.3 on top of NC2 and register both the NC2 cluster and the VMC vCenter instance.
Method 2: Deploy NC2 into a separate VPC and migrate through a VTGW
If deploying NC2 into the Connected VPC is not possible or desirable, there is another option available. VMware Cloud on AWS supports creating a VMware Transit Connect (VTGW). The VTGW is a VMware-controlled Transit Gateway (TGW) – basically a regional cloud router. The VTGW can in turn be attached either directly to another VPC or peered with a customer-controlled TGW. The TGW can then be attached to one or several VPCs of the customer’s choosing. Do keep cross-AZ and cross-region charges in mind when planning the architecture so that they can be minimized or avoided.
- Benefits
  - Once set up, migration is straightforward
  - The customer can use any VPC for the NC2 deployment, including a new one
  - High link / data transfer speed
- Drawbacks
  - Routing requires additional steps and knowledge
  - Data transfer is charged (roughly 2 cents/GB)
    - Although this example uses a TGW and a VTGW, data transfer charges do not end up being doubled. The peering attachment does not incur data transfer charges unless they go across AZs or regions.
  - (V)TGW attachments are charged (roughly 7 cents/h in ap-northeast-1)
Method 2: Steps to deploy: Create the VTGW
Unless already present, go to the AWS console and deploy a Transit Gateway (TGW) in the same region as VMC. Then, in the VMware Cloud on AWS management console, go to “SDDC groups” and deploy a new VTGW. Once deployed, navigate to the “External TGW” tab, click “Add TGW” and enter the account number and the ID of the customer TGW to connect to as well as the regions to use.
In the “Routes” box, enter the CIDR range of the VPC which NC2 is to be deployed into. In this example, “10.90.0.0/16”.
This will advertise the NC2 VPC CIDR to VMware Cloud on AWS and also create a peering attachment invitation from the VMware AWS account to the customer AWS account.
Method 2: Steps to deploy: Configure the TGW
The invitation to add the peering attachment can be accepted through “Transit Gateway Attachments” in the AWS console in the customer account.
While here, take the opportunity to add an attachment to the VPC into which NC2 will be deployed.
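If scripting is preferred over the console, the same two steps could be done with boto3 roughly as follows (all resource IDs are placeholders):

```python
# Minimal sketch: accept the VTGW peering invitation and attach the NC2 VPC to the
# customer TGW. All resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Accept the peering attachment created from the VMware-owned AWS account
ec2.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0"
)

# Attach the VPC that NC2 will be deployed into
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```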
In the customer AWS account, navigate to the Transit Gateway route table section, select the route table for the TGW peered with VMC and add the routes for the VMC networks. In this case “10.2.0.0/16”, “10.3.0.0/16” and “10.4.0.0/16”. Note that these are added as Static routes.
In addition we have the “10.90.0.0/16” network added via the VPC attachment. There is no need to add static routes for this network as it is propagated automatically.
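The static routes toward the VTGW peering attachment could also be added programmatically, for example (IDs are placeholders):

```python
# Minimal sketch: add static routes for the VMC networks toward the peering
# attachment in the TGW route table. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

for cidr in ["10.2.0.0/16", "10.3.0.0/16", "10.4.0.0/16"]:
    ec2.create_transit_gateway_route(
        DestinationCidrBlock=cidr,
        TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
        TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # peering attachment to the VTGW
    )
```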
Method 2: Steps to deploy: Configure the routes to the VMC cluster in the NC2 VPC
The final step for the routing is to add the routes for VMware Cloud on AWS into the route table(s) of the VPC which NC2 is to be deployed into. In our example, “10.2.0.0/16”, “10.3.0.0/16” and “10.4.0.0/16” are added as routes via the TGW attachment. “10.90.0.0/16” is our local network and there is a quad-zero route to the internet via a NAT GW.
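A sketch of the equivalent boto3 calls for the NC2 VPC route table (IDs are placeholders):

```python
# Minimal sketch: route the VMC networks via the TGW attachment in the NC2 VPC
# route table. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

for cidr in ["10.2.0.0/16", "10.3.0.0/16", "10.4.0.0/16"]:
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",   # route table of the NC2 VPC subnets
        DestinationCidrBlock=cidr,
        TransitGatewayId="tgw-0123456789abcdef0",
    )
```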
This concludes the setup steps specific to Method 2. Please continue with the firewall settings, NC2 cluster deployment and Move installation below.
Firewall settings: Allow Move to access the VMC vCenter and ESXi hosts
Move requires access to the VMC vCenter instance and the ESXi hosts in order to migrate virtual machines. Through the VMware Cloud on AWS console, add a Management Gateway firewall rule to allow the NC2 VPC to access these resources.
- Add the NC2 VPC CIDR range to an MGW inventory group
  - Navigate to “Networking & Security”
  - Click “Groups” under “Inventory”
  - Click “Management Groups” to edit the groups pertaining to the MGW
  - Add or modify a group and add the CIDR range of the VPC which NC2 is deployed into
    - To modify an existing group, click the 3-dot menu on the left of the group and select “Edit”
- Add the MGW inventory group to the MGW firewall rules
  - Navigate to “Networking & Security”
  - Click “Gateway Firewall” under “Security”
  - Click “Management Gateway” to edit rules for the MGW
  - Update the rules for ESXi and vCenter by adding the MGW inventory group containing the NC2 VPC CIDR range with the action “Allow”
Deploy the NC2 cluster
Sign up for NC2 or start a trial and deploy through the NC2 deployment wizard. Select the latest version (6.8) to get Flow overlay networking and centralized management through Prism Central included. When asked, select the desired VPC and private subnets. In this case, subnets in the “10.90.0.0/16” VPC.
After the NC2 cluster is deployed, deploy Move 5.3 on top of NC2 and register both the NC2 cluster and vCenter from the VMC environment.
Install Move and register NC2 and VMC
Download Move 5.3 from the download page: https://portal.nutanix.com/page/downloads?product=move
Follow the Move manual to deploy Move 5.3 on the NC2 cluster in AWS: https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Move
VDDK upload: After Move is deployed, add the NC2 and VMC environments. After adding the VMC environment, Move will prompt for a VDDK file. This file can be downloaded from the VMware support site. The version used in this example is “VMware-vix-disklib-7.0.3-19513565.x86_64”. Please use the Linux version.
Migrate the VMs to NC2
If IP retention is desired, use Flow in Prism Central to create an overlay VPC and subnet which matches the CIDR range of the NSX-T subnet in VMC from which the VMs will be migrated. In this example “10.3.0.0/24” is used.
Now the only thing remaining is to create a Migration Plan in Move with VMC as the source and NC2 as the target. Be sure to select the correct target network so that IP retention works as expected.
Wrap up
This has been an example of the steps required for migrating Virtual Machines from VMware Cloud on AWS (VMC) to Nutanix Cloud Clusters (NC2) without changing the IP or MAC addresses of the migrated VMs. For more information or for a demo, please reach out to your Nutanix representative or partner.
Additional resources
Migrating VMs from on-premises vSphere to VMware Cloud on AWS using NetApp SnapMirror
Note: This blog post is part of the 2022 edition of the vExpert Japan Advent Calendar series for the 9th of December.
Migration from an on-premises environment to VMware Cloud on AWS can be done in a variety of ways. The most commonly used (and also recommended) method is Hybrid Cloud Extensions – HCX. However, if VMs are stored on a NetApp ONTAP appliance in the on-prem environment, the volume the VMs reside on can easily be copied to the cloud using SnapMirror. Once copied, the volume can be mounted to VMware Cloud on AWS and the VMs imported. This may be a useful method of migration provided some downtime is acceptable.
Tip: If you are just testing things out, NetApp offers a downloadable virtual ONTAP appliance which can be deployed with all features enabled for 60 days.
Prerequisites
- Since SnapMirror is a licensed feature, please make sure a license is available on the on-prem environment. FSx for NetApp ONTAP includes SnapMirror functionality
- SnapMirror only works between a limited range of ONTAP versions. Verify that the on-prem array is compatible with FSxN. The version of FSxN at the time of writing is “NetApp Release 9.11.1P3”. Verify your version (“version” command from CLI) and compare with the list for “SnapMirror DR relationships” provided by NetApp here: https://docs.netapp.com/us-en/ontap/data-protection/compatible-ontap-versions-snapmirror-concept.html#snapmirror-synchronous-relationships
- Ensure the FSxN ENIs have a security group assigned allowing ICMP and TCP (in and outbound) on ports 11104 and 11105
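For reference, the last prerequisite could be scripted roughly as below; the security group ID and the on-prem intercluster CIDR are placeholders.

```python
# Minimal sketch: allow ICMP plus TCP 11104-11105 from the on-prem intercluster
# network to the FSxN ENIs. Security group ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # security group attached to the FSxN ENIs
    IpPermissions=[
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": "10.70.1.0/24"}]},
        {"IpProtocol": "tcp", "FromPort": 11104, "ToPort": 11105,
         "IpRanges": [{"CidrIp": "10.70.1.0/24"}]},
    ],
)
# Mirror the same rules outbound with authorize_security_group_egress if the
# group's egress rules are restricted.
```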
Outline of steps
- Create an FSx for NetApp ONTAP (FSxN) file system
- Create a target volume in FSxN
- Set up cluster peering between on-prem ONTAP and FSxN
- Set up Storage VM (SVM) peering between on-prem ONTAP and FSxN
- Configure SnapMirror and Initialize the data sync
- Break the mirror (we’ll deal with the 7 years of bad luck in a future blog post)
- Add an NFS mount point for the FSxN volume
- Mount the volume on VMware Cloud on AWS
- Import the VMs into vCenter
- Configure network for the VMs
Architecture diagram
The peering relationship between NetApp ONTAP on-prem and FSxN requires private connectivity. The diagram shows Direct Connect, but a VPN terminating at the TGW can also be used.
Video of the process
This video shows all the steps outlined previously, with the exception of creating the FSxN file system – although that is a very simple process and hardly worth covering in detail regardless.
Commands
Open SSH sessions to both the on-premises ONTAP array and FSxN. The FSxN username will be “fsxadmin”. If not known, the password can be (re)set through the “Actions” menu under “Update file system” after selecting the FSxN file system in the AWS Console.
Step 1: [FSxN] Create the file system in AWS
The steps for this are straight-forward and already covered in detail here: https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started-step1.html
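If you prefer doing this step outside the console, a hedged boto3 sketch is shown below; the subnet, security group and sizing values are placeholders and should match your own environment.

```python
# Minimal sketch: create a Multi-AZ FSx for NetApp ONTAP file system. Subnet,
# security group and sizing values are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="ap-northeast-1")

fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                        # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,               # MB/s
        "PreferredSubnetId": "subnet-aaaa1111",
    },
)
```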
Step 2: [FSxN] Create the target volume
Note that the volume is listed as “DP” for Data Protection. This is required for SnapMirror.
FsxId0e4a2ca9c02326f50::> vol create -vserver svm-fsxn-multi-az-2 -volume snapmirrorDest -aggregate aggr1 -size 200g -type DP -tiering-policy all
[Job 1097] Job succeeded: Successful
FsxId0e4a2ca9c02326f50::>
FsxId0e4a2ca9c02326f50::>
FsxId0e4a2ca9c02326f50::> vol show
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm-fsxn-multi-az-2
onprem_vm_volume_clone
aggr1 online RW 40GB 36.64GB 3%
svm-fsxn-multi-az-2
snapmirrorDest
aggr1 online DP 200GB 200.0GB 0%
svm-fsxn-multi-az-2
svm_fsxn_multi_az_2_root
aggr1 online RW 1GB 972.1MB 0%
8 entries were displayed.
FsxId0e4a2ca9c02326f50::>
FsxId0e4a2ca9c02326f50::>
Step 3a: [On-prem] Create the cluster peering relationship
Get the intercluster IP addresses from the on-prem environment
JWR-ONTAP::> network interface show -role intercluster
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
JWR-ONTAP
Intercluster-IF-1
up/up 10.70.1.121/24 JWR-ONTAP-01 e0a true
Intercluster-IF-2
up/up 10.70.1.122/24 JWR-ONTAP-01 e0b true
2 entries were displayed.
Step 3b: [FSxN] Create the cluster peering relationship
FsxId0e4a2ca9c02326f50::> cluster peer create -address-family ipv4 -peer-addrs 10.70.1.121, 10.70.1.122
Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of characters that would be hard to guess.
Enter the passphrase:
Confirm the passphrase:
Notice: Now use the same passphrase in the "cluster peer create" command in the other cluster.
FsxId0e4a2ca9c02326f50::> cluster peer show
Peer Cluster Name Cluster Serial Number Availability Authentication
------------------------- --------------------- -------------- --------------
JWR-ONTAP 1-80-000011 Available ok
Step 3c: [FSxN] Create the cluster peering relationship
Get the intercluster IP addresses from the FSxN environment
FsxId0e4a2ca9c02326f50::> network interface show -role intercluster
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId0e4a2ca9c02326f50
inter_1 up/up 172.16.0.163/24 FsxId0e4a2ca9c02326f50-01
e0e true
inter_2 up/up 172.16.1.169/24 FsxId0e4a2ca9c02326f50-02
e0e true
2 entries were displayed.
Step 3d: [On-prem] Create the cluster peering relationship
Use the same passphrase as when using the cluster peer create command on the FSxN side in Step 3b
JWR-ONTAP::> cluster peer create -address-family ipv4 -peer-addrs 172.16.0.163, 172.16.1.169
Step 4a: [FSxN] Create the Storage VM (SVM) peering relationship
FsxId0e4a2ca9c02326f50::> vserver peer create -vserver svm-fsxn-multi-az-2 -peer-vserver svm0 -peer-cluster JWR-ONTAP -applications snapmirror -local-name onprem
Info: [Job 145] 'vserver peer create' job queued
Step 4b: [On-prem] Create the Storage VM (SVM) peering relationship
After the peer accept command completes, verify the relationship using “vserver peer show-all”.
JWR-ONTAP::> vserver peer accept -vserver svm0 -peer-vserver svm-fsxn-multi-az-2 -local-name fsxn-peer
Step 5a: [FSxN] Create the SnapMirror relationship
FsxId0e4a2ca9c02326f50::> snapmirror create -source-path onprem:vmware -destination-path svm-fsxn-multi-az-2:snapmirrorDest -vserver svm-fsxn-multi-az-2 -throttle unlimited
Operation succeeded: snapmirror create for the relationship with destination "svm-fsxn-multi-az-2:snapmirrorDest".
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Uninitialized
Idle - true -
Step 5b: [FSxN] Initialize the SnapMirror relationship
This will start the data copy from on-prem to AWS
FsxId0e4a2ca9c02326f50::> snapmirror initialize -destination-path svm-fsxn-multi-az-2:snapmirrorDest -source-path onprem:vmware
Operation is queued: snapmirror initialize of destination "svm-fsxn-multi-az-2:snapmirrorDest".
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Uninitialized
Transferring 0B true 09/20 08:55:05
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Snapmirrored
Finalizing 0B true 09/20 08:58:46
FsxId0e4a2ca9c02326f50::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
onprem:vmware
XDP svm-fsxn-multi-az-2:snapmirrorDest
Snapmirrored
Idle - true -
FsxId0e4a2ca9c02326f50::>
Step 6: [FSxN] Break the mirror
FsxId0e4a2ca9c02326f50::> snapmirror break -destination-path svm-fsxn-multi-az-2:snapmirrorDest
Operation succeeded: snapmirror break for destination "svm-fsxn-multi-az-2:snapmirrorDest".
Step 7: [FSxN] Add an NFS mount point for the FSxN volume
FsxId0e4a2ca9c02326f50::> volume mount -volume snapmirrorDest -junction-path /fsxn-snapmirror-volume
Step 8: [VMC] Mount the FSxN volume in VMware Cloud on AWS
Step 9: [VMC] Import the VMs into vCenter in VMware Cloud on AWS
This can be done manually as per the screenshot below, or automated with a script
Importing using a Python script (initial release – may have rough edges): https://github.com/jonas-werner/vmware-vm-import-from-datastore/blob/main/registerVm.py
Video on how to use the script can be found here:
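As a rough illustration of what the script does, here is a minimal pyVmomi sketch (not the linked script itself) that registers a VMX file from the mounted FSxN datastore. The hostname, credentials, datastore path and names are placeholders, and in VMC the VM should go into the customer-writable Workloads folder and Compute-ResourcePool.

```python
# Minimal sketch, not the linked script: register a VM from the FSxN-backed NFS
# datastore into vCenter. Hostname, credentials, datastore path and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.sddc-x-x-x-x.vmwarevmc.com",
                  user="cloudadmin@vmc.local", pwd="********", sslContext=ctx)
content = si.RetrieveContent()

datacenter = content.rootFolder.childEntity[0]      # the SDDC datacenter
vm_folder = datacenter.vmFolder                     # in VMC, use the Workloads folder
compute = datacenter.hostFolder.childEntity[0]      # cluster; use Compute-ResourcePool in VMC
resource_pool = compute.resourcePool

# Register (import) the VM without copying any data
task = vm_folder.RegisterVM_Task(
    path="[fsxn-snapmirror-volume] myvm/myvm.vmx",
    name="myvm",
    asTemplate=False,
    pool=resource_pool,
)

Disconnect(si)
```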
Step 10: [VMC] Configure the VM network prior to powering on
Wrap-up
That’s all there is to migrating VMs using SnapMirror between on-prem VMware and VMware Cloud on AWS environments. Hopefully this has been useful. Thank you for reading!
Migrate VMware VMs from an on-prem DC to VMware Cloud on AWS (VMC) using Veeam Backup and Replication
When migrating from an on-premises DC to VMware Cloud on AWS it is usually recommended to use Hybrid Cloud Extension (HCX) from VMware. However, in some cases the IT team managing the on-prem DC is already using Veeam for backup and wants to use that solution for the migration as well.
They may also prefer Veeam over HCX, as HCX often requires professional services assistance for setup and migration planning. In addition, since HCX is primarily a tool for migrations, the customer is unlikely to have experience setting it up, and while it is an excellent tool, there is a learning curve to get started.
Migrating with Veeam vs. Migrating with HCX
| Veeam Backup & Replication | VMware Hybrid Cloud Extension (HCX) |
|---|---|
| Licensed (non-free) solution | Free with VMware Cloud on AWS |
| Arguably easy to set up and configure | Arguably challenging to set up and configure |
| Can do offline migrations of VMs, single or in bulk | Can do online migrations (no downtime), offline migrations, bulk migrations and online migrations in bulk (RAV), etc. |
| Cannot do L2 extension | Can do L2 extension of VLANs if they are connected to a vDS |
| Can be used for backup of VMs after they have been migrated | Is primarily used for migration. Does not have backup functionality |
| Support for migrating from older on-prem vSphere environments | At time of writing, full support for on-prem vSphere 6.5 or newer. Limited support for vSphere 6.0 up to March 12th, 2023 |
What we are building
This guide covers installing and configuring a single Veeam Backup and Replication installation in the on-prem VMware environment and linking it to both the on-prem vCenter and the vCenter in VMware Cloud on AWS. Finally, we do an offline migration of a VM to the cloud to prove that it works.
Prerequisites
The guide assumes the following is already set up and available
- On-premises vSphere environment with admin access (7.0 used in this example)
- Windows Server VM to be used for the Veeam install
  - Min spec here
  - Windows Server 2019 was used for this guide
  - Note: I initially used 2 vCPU, 4 GB RAM and 60 GB HDD for my Veeam VM, but during the first migration the entire thing stalled and wouldn’t finish. After changing to 4 vCPU, 32 GB RAM and 170 GB HDD, the migration finished quickly and with no errors. I recommend assigning as many resources as is practical to the Veeam VM to facilitate and speed up the migration
- One VMware Cloud on AWS (VMC) Software Defined Datacenter (SDDC)
- Private IP connectivity to the VMC SDDC
  - Use Direct Connect (DX) or VPN but it must be private IP connectivity or it won’t work
  - For this setup I used a VPN to a TGW, then a peering to a VMware Transit Connect (VTGW) which had an attachment to the SDDC, but any private connectivity setup will be OK
- A test VM to use for migration
Downloading and installing Veeam
Unless you already have a licensed copy, sign up for a trial license and then download Veeam Backup & Replication from here. Version 11.0.1.1216 was used in this guide.
In your on-premises vSphere environment, create or select a Windows Server VM to use for the Veeam installation. The VM specs used for this install are as follows:
Run the install with default settings (next, next, next, etc.)
Register the on-prem vCenter in Veeam
Navigate to “Inventory” at the bottom left, then “Virtual Infrastructure” and click “Add Server” to register the on-prem vCenter server
Listing VMs in the on-prem vSphere environment after the vCenter server has been registered in the Veeam Backup & Replication console
Switching on-prem connectivity to VMware Cloud on AWS SDDC to use private IP addresses
For this setup there is a VPN from the on-premises DC to the SDDC (via a TGW and VTGW in this case), but the SDDC FQDN is still configured to return the public IP address. Let’s verify by pinging the FQDN.
Switching the SDDC to return the private IP is easy. In the VMware Cloud on AWS web console, navigate to “Settings” and flip the IP to return from public to private
Ping the vCenter FQDN again to verify that the private IP is returned by DNS and that we can ping it successfully over the VPN.
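A quick way to script the same check (the FQDN is a placeholder):

```python
# Minimal sketch: check that the SDDC vCenter FQDN now resolves to a private
# (RFC 1918) address. The FQDN is a placeholder.
import ipaddress
import socket

fqdn = "vcenter.sddc-x-x-x-x.vmwarevmc.com"
ip = socket.gethostbyname(fqdn)
print(fqdn, "->", ip, "| private:", ipaddress.ip_address(ip).is_private)
```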
All looks good. The private IP is returned. Time to register the VMware Cloud on AWS vCenter instance in the Veeam console
Registering the VMC vCenter instance with Veeam
Just use the same method as when adding the on-premises vCenter server: Navigate to “Inventory” at the bottom left, then “Virtual Infrastructure” and click “Add Server” to register the VMware Cloud on AWS vCenter server with Veeam.
After adding the VMware Cloud on AWS SDDC vCenter the resource pools will be visible in the Veeam console
Now both vSphere environments are registered. Time to migrate a VM to the cloud!
Migrating a VM to VMware Cloud on AWS
Below is both a video and a series of screenshots describing the migration / replication job creation for the VM.
Creating some test files on the source VM to be migrated
Navigate to “Inventory” using the bottom left menu, click the on-premises vCenter server / Cluster and locate a VM to migrate in the on-premises DC VM inventory. Right-click the VM to migrate and create a replication job
When selecting the target for the replication, be sure to expand the VMware Cloud on AWS cluster and select one of the ESXi servers. Selecting the cluster is not enough to list the required resources, such as storage volumes.
Since VMC is a managed environment, we want to select the customer side of the storage, folders and resource pools.
If you checked the box for network remapping, it is even possible to select a target network for the VM to be connected to on the cloud side!
Select “Run the job when I click finish” and move to the “Home” tab to view the “Running jobs”.
The migration of the test VM finished in less than 9 minutes
In the vCenter client for VMware Cloud on AWS we can verify that the replicated VM is present
After logging in and listing the files, we can verify that the VM is not only working but also has the test files present in the home directory.
Thank you for reading! Hopefully this has provided an easy-to-understand summary of the steps required for a successful migration / replication of VMs to VMC using Veeam.