Migrate from EC2 to NC2 and perform DR failover between two AWS regions without changing IP addresses

Some customers require cross-region disaster recovery (DR) in AWS but often face the challenge of IP addresses changing during a failover to another region. This change can disrupt external access to services running on the instances covered by the DR policy.

Nutanix Cloud Clusters (NC2) addresses this challenge with built-in DR functionality that keeps IP addresses consistent during failovers between regions. As a bonus, NC2 allows CPU over-provisioning, so it may even be possible to reduce compute costs after migrating to NC2. However, IP addresses can only be retained during DR for workloads that already reside on a Nutanix cluster, so the EC2 instances have to be migrated first.

The free Nutanix Move migration tool can migrate workloads from Amazon EC2 to Nutanix Cloud Clusters, though it currently lacks support for IP retention. In this blog we use a workaround to maintain consistent IP addresses throughout the migration, although note that the MAC addresses will change. NC2 then retains the IPs during regional failovers as part of the Nutanix DR solution. Let's dive in!

Software versions used during testing

AOS: 6.10
Prism Central: pc.2024.2

Solution architecture

In this example we have an on-premises datacenter (DC), an AWS VPC with EC2 instances that we want covered by a DR policy (so they can fail over to another region without changing IP addresses), and two NC2 clusters, one in the Tokyo region and one in the Osaka region. Tokyo (ap-northeast-1) is used as the primary AWS region and Osaka (ap-northeast-3) as the disaster recovery location.

Connectivity to the on-premises environment is illustrated with Direct Connect, but note that all testing of this solution was done with S2S VPN attached to the TGW in each region. Connectivity between the two DR locations can be provided either by cross-region VPC peering or by peering two Transit Gateways (TGW).

Networking

To retain the IP addresses of the migrated EC2 instances we use Flow Virtual Networking (FVN) to create a no-NAT overlay network on NC2 with a CIDR range that matches the original subnet the EC2 instances are connected to. This overlay network is created on both the Tokyo and Osaka NC2 clusters so that the VMs can attach to a network with the same CIDR range after a failover.

To ensure the on-premises DC can access these VMs we modify the route tables throughout the process. That way we maintain routes that point to the migrated EC2 instances, regardless of where they are located.

Automating the VPC and TGW creation with Terraform / Open Tofu

If you would like to try this solution yourself, the Terraform / Open Tofu templates for deploying the VPCs, TGWs and the associated routing are available here:

https://github.com/jonas-werner/aws-dual-region-peered-tgw-with-vpcs-for-nc2-dr

Which peering type to use for inter-region connectivity

Generally speaking:

  • VPC peering: best suited to lower-traffic scenarios where the simplicity of a direct connection is desirable, despite slightly higher data transfer costs.
  • TGW peering: better suited to high-traffic environments or complex architectures, where centralized management and lower data transfer rates outweigh the additional cost per TGW attachment. Note that although traffic passes through two TGWs, the peering interface does not incur data transfer charges, so the data is only charged once (roughly USD 0.02/GiB in the Tokyo region).

The workaround for IP retention

As mentioned in the introduction, the Nutanix Move virtual appliance is very capable of migrating from EC2 to NC2, but at the time of writing it cannot retain the IP addresses of the workloads it migrates. To work around this we do the following:

  1. Before the migration
    Use AWS Systems Manager (SSM) to run a PowerShell or Bash script on the instances to be migrated. The script captures each EC2 instance's ID, hostname and local IP address and stores that information in a DynamoDB table.
  2. During the migration
    Use Nutanix Move to migrate the instances from EC2 to NC2. The IP addresses change at this point, but the migration target is an FVN overlay network with the same CIDR range as the original network the EC2 instances were connected to.
  3. After the migration
    Run a Python script that uses the information stored in DynamoDB as a template. The script connects to the Nutanix Prism Central API, removes the existing network interface and adds a new interface with the correct (original) IP address to each migrated instance.

Once the instances have been migrated to NC2, configuring DR between the Tokyo and Osaka regions is straightforward.

The scripts used in this blog can be downloaded from GitHub:

https://github.com/jonas-werner/EC2-to-NC2-with-IP-preservation/tree/main

Step 1: Capture the IP addresses of the EC2 instances

In this step we prepare for the migration from EC2 to NC2. The initial state of the network, the workloads and the route to the 172.30.1.0/24 network is illustrated by the red line in the diagram below.

To start with, we gather information about the EC2 instances and store it in DynamoDB. In the name of efficiency we use SSM Run Command to execute a PowerShell script, which makes it easy to cover both Windows and Linux workloads in one or two passes. In this example we test with a single Windows Server 2019 EC2 instance.

First, create a DynamoDB table to hold this information. Nothing special is required for the table as long as SSM can reach it while running the script. The IAM role used when running SSM commands does of course need access to DynamoDB, so add permissions like the following to the standard SSM role:

{
    "Sid": "AllowDynamoDBAccess",
    "Effect": "Allow",
    "Action": [
        "dynamodb:PutItem"
    ],
    "Resource": "arn:aws:dynamodb:<your-aws-region>:<your-aws-account>:table/<dynamodb-table-name>"
},

To collect the instance name from metadata, enable the "Allow tags in instance metadata" setting in the EC2 console. This is important because the "Name" tag is used as the key to look up the instance in NC2 after the migration. Other methods could be used (most obviously the instance name itself), but in this case we use the EC2 "Name" tag, since that is also how the VM shows up in NC2 after the migration.
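For reference, the same setting can also be enabled per instance from the CLI; a minimal sketch, assuming you substitute your own instance ID:

aws ec2 modify-instance-metadata-options \
  --instance-id <instance-id> \
  --instance-metadata-tags enabled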

We then execute the script on the instances using SSM Run Command, as shown in the example below.
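The PowerShell script itself is in the GitHub repository linked above. As an illustration only, a Bash equivalent of the capture logic and the Run Command invocation could look like the sketch below; the table name, attribute names, tag filter and region are assumptions for this example:

#!/bin/bash
# Capture instance details from the instance metadata service (IMDSv2)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id)
NAME_TAG=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/tags/instance/Name)
LOCAL_IP=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/local-ipv4)

# Store the details in DynamoDB (table and attribute names are placeholders)
aws dynamodb put-item \
  --table-name ec2-to-nc2-migration \
  --item "{\"InstanceId\":{\"S\":\"$INSTANCE_ID\"},\"Name\":{\"S\":\"$NAME_TAG\"},\"Hostname\":{\"S\":\"$(hostname)\"},\"IpAddress\":{\"S\":\"$LOCAL_IP\"}}" \
  --region ap-northeast-1

The script can then be pushed to the target instances with Run Command, for example by tag:

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Migrate,Values=true" \
  --parameters '{"commands":["/usr/local/bin/capture-instance-info.sh"]}' \
  --region ap-northeast-1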

After the script has run, an entry for the Windows EC2 instance appears in DynamoDB, containing its instance ID, hostname and IP address (172.30.1.34 in this example). This is the IP address we want to retain.

Next we perform the migration from EC2 to NC2.

Step 2: Migrating from EC2 to NC2

For the migration, Nutanix Move has to be deployed on the NC2 cluster. We have also created an FVN no-NAT overlay network whose CIDR matches the original subnet the EC2 instance is connected to, with the DHCP range set to avoid the IP addresses currently in use on that subnet.

Move has the NC2 cluster and the AWS environment added as migration source and target.

A "Missing Permissions" warning may appear, but this is only because the AWS IAM policy allows migration FROM EC2 and not TO EC2. Since migrating from EC2 is all we want to do, the warning can be ignored. Please refer to the Move manual for details on the required IAM policy.

After the migration the VM has a different IP address (taken from the DHCP range of the FVN subnet it is now connected to).

In the next step we use a Python script to revert the IP address to what the instance had on EC2.

Step 3: Revert to the IP address the EC2 instance had originally

Now we run a Python script that looks up the instance name stored in DynamoDB, matches it with the VM name in NC2, and then uses the Prism Central API to remove the existing network interface and add a new one. The new interface is configured with the original IP as a static address.

The script can be downloaded from GitHub. To run it, export the Prism Central username and password as environment variables, and update the Prism Central IP, the subnet name, the AWS region and the DynamoDB table name to match your environment.
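As a minimal sketch of running it (the environment variable names and script filename are illustrative; check the repository for the exact names):

# Prism Central credentials for the script
export PRISM_USERNAME="admin"
export PRISM_PASSWORD="<prism-central-password>"

# Run after updating the PC IP, subnet name, AWS region and DynamoDB table name in the script
python3 revert-ip-addresses.py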

After running the script we can verify that the VM has received its original IP address. Note that since the NIC has been replaced in this process, the IP address is the same as before but the MAC address has changed.

Routing in Tokyo after the migration from EC2 to NC2

Now that the VM runs on NC2 we need to update the routing so that traffic is directed to this VM rather than to the original EC2 instance. We do this by disconnecting the EC2 VPC from the TGW and adding a static route for the subnet to the TGW, this time pointing to the NC2 VPC. The subnet should already be present as an "Allowed prefix" on the DXGW, so that part can be left as-is.
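For reference, adding the static route to the TGW route table can also be done with the AWS CLI; a sketch with placeholder IDs:

aws ec2 create-transit-gateway-route \
  --destination-cidr-block 172.30.1.0/24 \
  --transit-gateway-route-table-id <tokyo-tgw-route-table-id> \
  --transit-gateway-attachment-id <nc2-vpc-attachment-id> \
  --region ap-northeast-1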

The attachment highlighted in red shows the active route to the 172.30.1.0/24 subnet, which now points to the NC2 VPC. Since the subnet is an FVN no-NAT subnet, it shows up in the NC2 VPC route table.

Wrapping up the migration part

The EC2 instance has now been migrated to NC2. Its IP address is intact, and since the routing between AWS and the on-premises DC has been updated, on-premises users can access the migrated instance just like they normally would. In fact, apart from the maintenance window for the migration, the VM power-on on NC2 and the routing switch, they are unlikely to notice that their former EC2 instance is now running on a different platform.

Configuring DR between the Tokyo and Osaka NC2 clusters

At this point the EC2 instance has been migrated to the NC2 cluster in Tokyo and can communicate with the on-premises environment while retaining its IP address. What remains is to set up the disaster recovery (DR) configuration between the two Nutanix clusters in Tokyo and Osaka. Since DR is built into Nutanix, this is very straightforward: we link the two Prism Central instances and create the FVN overlay network on the Osaka side as well, so that the same CIDR range is available after a failover.

After enabling Disaster Recovery we can easily create the DR plan through Prism Central.

When creating the DR plan we set the VMs on the Tokyo network to fail over to the equivalent network on the Osaka DR site.

Finally, we fail over the VM from the NC2 cluster in Tokyo to the NC2 cluster in Osaka.

After the failover we can confirm not only that the VM is powered up in Osaka, but also that it has retained its IP address as expected.

Updating the routing to point to Osaka rather than Tokyo

Once the failover from Tokyo to Osaka is complete, we update the routes pointing to the 172.30.1.0/24 network by modifying the TGW configuration so that traffic is sent to Osaka. The resulting network layout looks as follows.

On the Osaka TGW we create a static route for the 172.30.0.0/16 network pointing to the Osaka NC2 VPC.

We also update the static route on the Tokyo TGW that pointed to the local NC2 VPC, and set it to point to the peering connection to Osaka instead.
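With the AWS CLI, the two changes could look roughly like this (all IDs are placeholders):

# Osaka TGW: static route for the NC2 range towards the Osaka NC2 VPC attachment
aws ec2 create-transit-gateway-route \
  --destination-cidr-block 172.30.0.0/16 \
  --transit-gateway-route-table-id <osaka-tgw-route-table-id> \
  --transit-gateway-attachment-id <osaka-nc2-vpc-attachment-id> \
  --region ap-northeast-3

# Tokyo TGW: repoint the existing static route at the peering attachment to Osaka
aws ec2 replace-transit-gateway-route \
  --destination-cidr-block 172.30.1.0/24 \
  --transit-gateway-route-table-id <tokyo-tgw-route-table-id> \
  --transit-gateway-attachment-id <tokyo-osaka-peering-attachment-id> \
  --region ap-northeast-1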

Results and wrap-up

With these routing changes in place, users in the on-premises datacenter can continue to access the very same VMs with the very same IP addresses. This holds true even after the workloads have been migrated from EC2 to NC2 and then failed over from the Tokyo region to the Osaka region as part of the DR plan.

With this solution, workloads that used to run on EC2 now run on NC2 and can keep operating without any IP address changes, minimizing the impact on users while delivering on the DR plan.

Please give this solution a try, and reach out to your local Nutanix representative if this type of solution is of interest. Thank you for reading!


Links

Adding an AWS Elastic Load Balancer (ELB) to direct traffic between web servers on Nutanix Cloud Clusters (NC2)

One of the great things about running a virtualized infrastructure on NC2 on AWS is the close proximity to all the cloud native services. One of those highly useful services is the AWS ELB or Elastic Load Balancer.

In this post we show how to get floating IP addresses from the VPC in which NC2 is located and to assign them to a number of web servers running as VMs on NC2. Then we create a Load Balancer target group and finally we create an Application Load Balancer (ALB) and attach it to the target group.

Architecture

In this blog post we only cover the deployment of the web servers and the load balancer. However, Route 53 can also be leveraged for DNS, and AWS WAF for security and DDoS protection, as illustrated below.

Preparing some web servers

We first deploy a few test web servers. In this case we use the wonderfully named Jammy Jellyfish edition of Ubuntu Server as a cloud image. Feel free to download the image from here:

https://cloud-images.ubuntu.com/releases/22.04/release-20240319/ubuntu-22.04-server-cloudimg-amd64.img

Prism Central makes it very easy to deploy multiple VMs in one go.

When deploying, make sure to use a cloud-init script to set the password and any other settings needed to make the VM usable after first boot:

#cloud-config
password: Password1
chpasswd: { expire: False }
ssh_pwauth: True

Now we have our VMs ready. I’ve installed the Apache web server to serve pages (apt install apache2) but feel free to use whatever works best in your setup.

I used the following index.html code to show the server ID

ubuntu@ubuntu:~$ cat /var/www/html/index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Welcome</title>
    <style>
        body {
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
            margin: 0;
            font-family: Arial, sans-serif;
            background-color: #f0f0f0;
        }
        .message-container {
            max-width: 300px;
            padding: 20px;
            text-align: center;
            background-color: #ffffff;
            border: 1px solid #cccccc;
            border-radius: 10px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
        }
    </style>
</head>
<body>
    <div class="message-container">
        <script>
            // Static server identifier (change this string on each web server)
            document.write("Welcome to Server 1");
        </script>
    </div>
</body>
</html>

Configure floating IP addresses for the web servers

Next we request a few floating IP addresses from the VPC which NC2 is deployed into and then assign one IP each to our web servers. Luckily Prism Central also makes this very easy to do, in a single step! From "Compute & Storage", select "Floating IPs" under "Network & Security":

After assigning the IPs we can see that each VM has both an internal and an external IP address, where the "external" IP comes from the AWS VPC CIDR range.

Creating an AWS LB target group

Next we create a target group for the AWS ALB which we deploy in the next step. The LB target group simply contains the Floating IP addresses we just assigned, as well as a health check for the web root of these web servers.

We create an “IP address” target group and set the health check to be HTTP, port 80 and the path as “/” or the web root.

We then add the Floating IP addresses we created previously
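For those who prefer the CLI over the console, a sketch of the same steps with placeholder IDs and floating IPs:

aws elbv2 create-target-group \
  --name nc2-web-servers \
  --protocol HTTP --port 80 \
  --target-type ip \
  --vpc-id <nc2-vpc-id> \
  --health-check-protocol HTTP \
  --health-check-path /

aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=<floating-ip-1> Id=<floating-ip-2> Id=<floating-ip-3>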

Create the Application Load Balancer (ALB)

Finally we create the ALB and assign it to our target group
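Again as a CLI sketch, with the subnets, security group and ARNs as placeholders:

aws elbv2 create-load-balancer \
  --name nc2-web-alb \
  --type application \
  --scheme internet-facing \
  --subnets <public-subnet-1> <public-subnet-2> \
  --security-groups <alb-security-group-id>

aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>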

Test of the ALB

Now we’re all done and can access our ALB to see if it shows balances between the NC2 VMs as expected.

We’re getting a different web server each time we refresh the page – all good!

That’s all for now. Hope that was helpful and thank you for reading!

Quickly generate the on-prem commands to connect a Mikrotik switch / router running RouterOS to an existing AWS S2S VPN.

Problem statement

For infrequent VPN connectivity between on-prem labs / data centers and AWS it doesn’t make sense to have a permanent VPN connection up 24/7. However, configuring the on-premises Mikrotik router each time is time consuming and error-prone when done manually.

Functionality

This Python script connects to AWS using boto3, reads the details for the first VPN connection it can find and then generates the commands required to set up:

  • Inside IP addresses for the VPN tunnel
  • IPsec proposal settings
  • IPsec profile settings
  • IPsec peers
  • IPsec secrets
  • BGP peers
  • BGP networks to advertise
  • Firewall settings
  • etc.

After the commands are generated, simply copy and paste into a Mikrotik CLI window over SSH or similar and the connection will come up in a couple of minutes.

Prerequisites

This script only handles the on-prem side of the connectivity. It assumes the following is already in place at the AWS side:

  • VPC with subnets
  • CGW (Customer Gateway)
  • VGW (Virtual Private Gateway) which is attached to the VPC
  • VPN connection configured to use the VGW and CGW

If you require information on how to set up the AWS-side components, please refer to this blog post: https://jonamiki.com/2022/05/04/mikrotik-vpn-to-aws-vpc/

Script download

Please refer to this GitHub page for the script itself:

https://github.com/jonas-werner/aws-vpn-mikrotik-config-generator/tree/main

Example of running the script

The AWS side has been configured but IPsec and BGP are both down
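You can confirm the tunnel state from the AWS side before running the script, for example:

aws ec2 describe-vpn-connections \
  --query 'VpnConnections[].{Id:VpnConnectionId,State:State,Tunnels:VgwTelemetry[].Status}' \
  --output table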

Running the script generates the commands required to connect the Mikrotik to AWS

Copy and paste the generated commands into the Mikrotik CLI

After a couple of minutes, IPsec is up and routes are dynamically shared over BGP

More information

For more information, including how to set up the AWS VPN configuration and a more detailed explanation of the manual steps to configure the Mikrotik router, please refer to this blog post: https://jonamiki.com/2022/05/04/mikrotik-vpn-to-aws-vpc/

Running containerized game servers with EKS Anywhere on Nutanix

Summary

Did you ever wish you could host your own multi-player gaming servers with k8s? In that case I have great news, because in this post we’re covering how to deploy online multi-player game server containers with EKS Anywhere on Nutanix infrastructure.

Components

The game in question is Xonotic, which is a classic, fast-paced, multi-player shooting game based on the Quake engine. To deploy it we use Agones, a platform for running, scaling and orchestrating multi-player game server containers on k8s.

Agones, in turn, goes on top of EKS Anywhere which runs on a Nutanix cluster. In this case we have deployed a cluster in our Phoenix DC, which is also linked with a Nutanix NC2 cluster on AWS.

Nutanix clusters on-prem in Phoenix and in AWS are managed holistically through the Nutanix Prism Central management console. K8s management is done by registering the EKS Anywhere cluster with the EKS service in AWS. K8s node management is done through SSM (AWS Systems Manager).

Architecture

The two Nutanix clusters (on-prem and cloud) are linked via a Direct Connect line and can be managed holistically using private networking.

The gaming components are managed through the standard k8s toolset, while EKS in the AWS console is used for monitoring the cluster.

The k8s nodes run as virtual machines on Nutanix and each has the SSM (AWS Systems Manager) agent installed. This makes it possible to monitor the VMs, do patch management and even get remote connectivity through the AWS console.

Disclaimer: Inventory data from SSM can be sent to an S3 bucket, converted with Athena and then displayed in graphical form through Amazon Quicksight, as is shown to the right in the diagram. This guide doesn’t go through those particular steps, but they are well documented on the Amazon website.

Overview of the architecture both on-prem and in the cloud. Pardon the Japanese script here and there; this was created for a Japanese event, after all.

The EKS management and SSM connectivity is done to public AWS endpoints in this case, so it goes over the internet. It would also have been possible to do this over private networking through the DX connection, but I don’t have the IAM privileges to create anything new in the account NC2 is running in.

Overview of steps

The following steps will be covered while building the environment

Step | Goal | Task
1 | Holistic Nutanix cluster management | Prism Central download and configuration
2 | Building EKS Anywhere node image #1 | Download and deploy Ubuntu 22.04 image
3 | Building EKS Anywhere node image #2 | Create VM and follow image-builder steps
4 | EKS Anywhere deployment | Run EKS Anywhere installer from Ubuntu
5 | k8s cluster management | Register EKS Anywhere with AWS
6 | k8s node management | Install SSM agent on EKS Anywhere nodes
7 | Game platform orchestration | Installation of Agones platform
8 | Game server deployment | Creation of Xonotic pods through Agones
9 | Good-old fun! | Install the Xonotic client and go fragging!

Step 1: Prism Central download and configuration

If not yet deployed, download and deploy Prism Central through the Prism Element UI as per the below.

Once Prism Central has been deployed, reset the admin password over the CLI. SSH to the Prism Central IP using the user “nutanix” and the password “nutanix/4u”. Reset the password using:

ncli user reset-password user-name=admin password=yourpassword

On each cluster, register with Prism Central through the Prism Element UI. Once registered, the clusters show up in Prism Central as per the below

Step 2: Download and deploy Ubuntu 22.04 image

From the Ubuntu website, copy the URL to the Ubuntu image (Jammy Jellyfish) from here:

https://cloud-images.ubuntu.com/releases/22.04/release/

Be sure to pick the ubuntu-22.04-server-cloudimg-amd64.img and not the disk-kvm image as the kvm image won’t boot.

Add it as a DISK image from URL in the Nutanix web UI:
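If you prefer the CLI, the image can also be created from a CVM with acli; a minimal sketch, where the container name is an assumption:

acli image.create ubuntu-22.04-cloudimg \
  source_url=https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img \
  container=default-container image_type=kDiskImage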

Step 3: Create VM and follow image-builder steps

Use the image to create a new Ubuntu VM.

  • I used 2 CPUs with 2 cores each and 8 GB of RAM.
  • Delete the CD-ROM drive
  • Add a NIC.
  • Set the disk to clone from the image created above
  • Set boot mode to UEFI rather than the default BIOS.

When given the option to add a Custom Script, add a cloud init snippet as per the below to enable SSH (with password rather than key) and set your password

#cloud-config
password: vrySekrPaswd1%
chpasswd: { expire: False }
ssh_pwauth: True

Disk update: You’ll note that the disk created from the image is very small – just 2.2 GB. Changing the disk size during VM creation isn’t supported, but it can be made larger afterwards. Just accept the size for now and update the disk size once the VM shows up in the inventory. I set mine to 50 GB.

Don’t start the VM just yet. First we want to add a serial port to the VM through the Nutanix CLI. SSH to any CVM in the cluster and issue the below command:

acli vm.serial_port_create <VM_NAME> index=0 type=kNull

In my case that looks like

nutanix@NTNX-16SM6B260127-C-CVM:10.11.22.31:~$ acli vm.serial_port_create ubuntu-k8s-image-builder index=0 type=kNull
VmUpdate: pending
VmUpdate: complete
nutanix@NTNX-16SM6B260127-C-CVM:10.11.22.31:~$

Now we can power on the VM and log in over SSH using the password set through the cloud init script.

For the image creation, there are official instructions from AWS here. However, I found that some of the official steps needed to be modified to work. Please refer to the steps below, which are the ones I used when creating the image:

As user “ubuntu” on the image-builder VM:

sudo adduser image-builder
sudo usermod -aG sudo image-builder
sudo su - image-builder

Now we have switched to the “image-builder” user and continue the process:

sudo apt install python3-pip -y
pip3 install --user ansible-core==2.15.9
export PATH=$PATH:/home/image-builder/.local/bin
cd /tmp
# EKSA_RELEASE_VERSION must be set for the next step; for example, pick up the latest release:
EKSA_RELEASE_VERSION=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")
BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri")
curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder
sudo install -m 0755 ./image-builder /usr/local/bin/image-builder
cd

Now a “which image-builder” should show the binary installed in /usr/local/bin as per the below

image-builder@ubuntu:~$ which image-builder
/usr/local/bin/image-builder

Also, a “which ansible” should show it installed in the local user directory as follows:

image-builder@ubuntu:~$ which ansible
/home/image-builder/.local/bin/ansible

image-builder@ubuntu:~$ ansible --version
ansible [core 2.15.9]

Now we create the “nutanix.json” file to enable image-builder to use our Prism Central to create our k8s image

cat > nutanix.json << 'EOF'
{
  "nutanix_cluster_name": "YOUR-CLUSTER-NAME",
  "nutanix_subnet_name": "YOUR-SUBNET",
  "nutanix_endpoint": "PRISM-CENTRAL-IP",
  "nutanix_insecure": "true",
  "nutanix_port": "PRISM-CENTRAL-PORT (DEFAULT 9440)",
  "nutanix_username": "admin",
  "nutanix_password": "PRISM-CENTRAL-PASSWORD"
}
EOF

Now we are ready to execute image-builder and create the image. In this case we create an image for version 1.28. This will take around 10 minutes to complete

image-builder build --os ubuntu --hypervisor nutanix --release-channel 1-28 --nutanix-config nutanix.json

Once complete you should be greeted with the following message:

In the list of images in Prism, the new k8s image will show up as follows

Step 4: EKS Anywhere deployment

Now we have an image and are ready to start deploying EKS Anywhere …. well, almost. First we create a configuration file which then is used as the template for the deployment.

Install Docker:

Official instructions can be found here: https://docs.docker.com/engine/install/ubuntu/

After the install, add the “image-builder” user to the docker group so we can use docker without “sudo”

sudo usermod -aG docker image-builder

Log out and back in again for the group change to take effect, then test as follows

image-builder@ubuntu:~$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Install eksctl, eksctl-anywhere and kubectl

Official instructions can be found here: https://anywhere.eks.amazonaws.com/docs/getting-started/install/

Create a cluster-config.yaml file

eksctl anywhere generate clusterconfig AWESOME-CLUSTER-NAME --provider nutanix > cluster-config.yaml

Edit the cluster-config.yaml file to adjust to your local environment

Update the cluster-config.yaml file created in the previous step to point to the Prism Central environment you’d like to use. For a lab environment you may also want to disable TLS certificate check. Official instructions for how to modify the file are listed here.

The entire file is too long to upload here, but the fields I’ve modified are:

Key | Value
controlPlaneConfiguration.count | 3
controlPlaneConfiguration.endpoint.host | Floating IP to use for the control plane VM
kubernetesVersion | 1.28 (to match the image built earlier)
workerNodeGroupConfigurations.count | 3
spec.endpoint | Prism Central IP / FQDN
spec.insecure | true (spec.insecure is a new entry)
spec.cluster.name | Prism Element cluster name
spec.image.name | Name of the k8s image created earlier
spec.subnet.name | Name of the subnet to use for k8s nodes
spec.users.name.sshAuthorizedKeys | Copy and paste your RSA SSH key here

Quick screenshot showcasing the addition of the “spec.insecure” parameter for lab clusters without a valid SSL/TLS cert:

Deploying the cluster

First export the credentials to Prism Central as per the below:

image-builder@ubuntu:~$ export EKSA_NUTANIX_USERNAME="admin"
image-builder@ubuntu:~$ export EKSA_NUTANIX_PASSWORD="PRISM-CENTRAL-PWD"

Now we are ready to deploy with the cluster-config.yaml file as our template

eksctl anywhere create cluster -f cluster-config.yaml

After a successful deployment, the below will be shown:

Get the credentials for your cluster

export KUBECONFIG=${PWD}/YOUR-CLUSTER-NAME/YOUR-CLUSTER-NAME-eks-a-cluster.kubeconfig

List the k8s nodes and their IP addresses

kubectl get nodes -o wide

Step 5: Register EKS Anywhere with AWS

First we need the aws cli installed and configured with an access key and a secret access key

sudo apt install awscli

Then configure with your credentials

aws configure

Generate the EKS connector configuration files

eksctl register cluster --name YOUR-CLUSTER-NAME --provider EKS_ANYWHERE --region YOUR-REGION

This will generate three yaml files like so

image-builder@ubuntu:~/EKS-A$ ls -1
eks-connector-clusterrole.yaml
eks-connector-console-dashboard-full-access-group.yaml
eks-connector.yaml
image-builder@ubuntu:~/EKS-A$

Then apply the configuration files with kubectl

kubectl apply -f CONFIG-FILE

Access the AWS console, navigate to Amazon Elastic Kubernetes Service and verify that the cluster shows up as it should. Ensure you are in the region you selected when generating the yaml config files

Click the cluster name, go to the Resources tab and select Pods. Here you can filter to find the Xonotic game pods as per the below

Step 6: Install SSM agent on EKS Anywhere nodes

With the AWS Systems Manager agent installed, it is possible to monitor the k8s cluster nodes, get their software inventory, do patch management, get remote access and various other things.

The first thing we do is to create a managed node activation in the AWS Console. Navigate to Systems Manager and select Hybrid Activations. Be sure to pick the right region.

Save the resulting activation details as they are used when installing the SSM agent and registering the k8s nodes

For the agent installation and registration, please follow the official guide here:

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html
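On Ubuntu nodes the short version looks roughly like this; the activation code, ID and region come from the Hybrid Activation created above:

# Install the SSM agent (skip if the snap is already present on the image)
sudo snap install amazon-ssm-agent --classic

# Register the node using the hybrid activation details
sudo /snap/amazon-ssm-agent/current/amazon-ssm-agent -register \
  -code "<activation-code>" -id "<activation-id>" -region "<aws-region>"

# Restart the agent so it picks up the registration
sudo snap restart amazon-ssm-agent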

Once the agent is installed and registered, the nodes will show up in SSM as per the below

Step 7: Installation of Agones platform

First of all, install helm so we can use helm charts

sudo snap install helm --classic

Add the Agones repo, update and install

helm repo add agones https://agones.dev/chart/stable
helm repo update
helm install v1.38.0 --namespace agones-system --create-namespace agones/agones

Verify the Agones install by listing the pods

kubectl --namespace agones-system get pods -o wide

Step 8: Creation of Xonotic pods through Agones

Now when we have the Agones game orchestration platform running, all we need to do is to deploy the actual game containers into it thusly

kubectl apply -f https://raw.githubusercontent.com/googleforgames/quilkin/main/examples/agones-xonotic-sidecar/sidecar.yaml

We can now list the game servers

kubectl get gameservers

Step 9: Install the Xonotic client and go fragging!

The Xonotic game client is available for download here:

https://xonotic.org/download/

Mac users on Apple silicon can install the client with Brew

brew install xonotic

Start the client, select multi-player and add the IP and port of your game server

Play the game!

Closing

This has been a somewhat lengthy guide on configuring k8s on top of Nutanix with the intent of running containerized game servers. Hopefully it has been informative. Originally I wanted to expand the section on SSM and other features on managing the cluster through AWS but thought this blog post was long enough already. Perhaps those areas will be worth re-visiting later on if there’s interest.

Happy Fragging!

Links and References

  • https://cloud-images.ubuntu.com/releases/22.04/release/
  • https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CshJCAS
  • https://image-builder.sigs.k8s.io/capi/providers/nutanix.html
  • https://anywhere.eks.amazonaws.com/docs/osmgmt/artifacts/#building-node-images
  • https://anywhere.eks.amazonaws.com/docs/getting-started/nutanix/
  • https://anywhere.eks.amazonaws.com/docs/getting-started/nutanix/nutanix-spec/
  • https://devopscube.com/eks-anywhere-cluster/
  • https://googleforgames.github.io/quilkin/main/book/quickstarts/agones-xonotic-sidecar.html