In the previous installment, I shared insights on how to operate Apache CloudStack with KVM on a single Amazon Elastic Compute Cloud (Amazon EC2) instance. This initial configuration is ideal for testing and light workloads. In this follow-up, we will delve deeper into creating an overlay network within your Amazon Virtual Private Cloud (Amazon VPC), enabling CloudStack to expand horizontally across multiple EC2 instances. Notably, this method is applicable with various hypervisors as well.
If you haven’t read part one yet, I encourage you to do so first, as it outlines the necessity of this network configuration. The same prerequisites from the earlier post remain in effect.
Simplifying the Process
To streamline the CloudStack installation and CentOS 7 configuration, I developed some scripts that you can modify according to your requirements. Additionally, I’ve prepared some AWS CloudFormation templates that you can duplicate to establish a demo environment. For further details, refer to the README file.
The Scalable Approach
Initially, our team utilized a single EC2 instance, as detailed in my last article. While it sufficed initially, we soon found it lacked the necessary capacity, limiting us to a few dozen VMs when we required hundreds. We also needed the flexibility to scale up and down based on demand. This situation necessitated the addition and removal of CloudStack hosts, rendering the use of a Linux bridge as a virtual subnet inadequate.
To facilitate the addition of hosts, we required a subnet that spans multiple instances. The solution we identified is the Virtual Extensible LAN (VXLAN). VXLAN is lightweight, easy to configure, and integrated within the Linux kernel. It establishes a layer 2 overlay network that abstracts the complexities of the underlying network, allowing machines across different segments to communicate as if they’re connected to the same simple network switch.
An Amazon VPC serves as another example of an overlay network. It functions like a physical network but operates as a layer above other networks. With VXLAN, CloudStack can comfortably exist atop this layer, managing all your VM requirements while remaining blissfully unaware of the complexities below.
The advantages of an overlay network are significant. The primary benefit is the ability to support multiple hosts, which facilitates horizontal scaling. More hosts not only provide additional computing power but also allow for rolling maintenance. Instead of housing the database and file storage on the management server, I’ll demonstrate how to utilize Amazon Elastic File System (Amazon EFS) and Amazon Relational Database Service (Amazon RDS) for scalable and reliable storage.
EC2 Instances Setup
Let’s set up three Amazon EC2 instances. One will function as a router that connects the overlay network with your Amazon VPC, the second will serve as the CloudStack management server, and the third will operate as your VM host. You’ll also need a method to connect to your instances, such as a bastion host or a VPN endpoint.
VXLAN uses multicast to discover and reach the other tunnel endpoints, so each instance on the overlay must be able to send and receive multicast traffic. It's important to note that only Nitro instances can act as multicast senders. Therefore, when planning, review the list of Nitro instance types available.
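If it helps during planning, you can list Nitro-based instance types with the AWS CLI. This is just a convenience query against DescribeInstanceTypes, assuming you have the AWS CLI configured; note that bare-metal types may not appear because they report no hypervisor:

```bash
# List instance types whose hypervisor is Nitro (requires configured AWS CLI credentials).
aws ec2 describe-instance-types \
    --query "InstanceTypes[?Hypervisor=='nitro'].InstanceType" \
    --output text | tr '\t' '\n' | sort
```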
The router instance won’t require substantial computing power, but it must possess sufficient network bandwidth to meet your needs. Placing Amazon EFS within the same subnet as your instances will enable direct communication, reducing the load on the router. Determine the necessary network throughput and select an appropriate Nitro instance type.
Once the router instance is created, configure AWS to designate it as a router. Disable source/destination checking in the instance’s network settings, and update the relevant AWS route tables to use the router as the target for the overlay network. The security group for the router must permit ingress to the CloudStack UI (TCP port 8080) and any services you plan to offer from VMs.
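As a rough illustration, the source/destination check and the route-table change can both be made with the AWS CLI. The instance ID, route-table ID, and the 10.100.0.0/16 overlay CIDR below are placeholders, not values from this setup:

```bash
# Turn off source/destination checking on the router instance (placeholder ID).
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check

# Send traffic destined for the overlay network (placeholder CIDR) to the router instance.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.100.0.0/16 \
    --instance-id i-0123456789abcdef0
```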
The management server must also be a Nitro instance type so that it can send multicast traffic on the overlay, and it will need more CPU than your router, so size it accordingly.
In addition to being a Nitro type, the host instance should also be a metal type. Metal instances support hardware virtualization, which is essential for KVM. If you’re starting with a new AWS account with a limited on-demand vCPU cap, consider beginning with an m5zn.metal, which comes with 48 vCPUs. Alternatively, you could opt for a c5.metal, which provides 96 vCPUs at a comparable cost. Depending on your compute requirements, budget, and vCPU limitations, larger instance types are also available. If your account’s on-demand vCPU limit is too restrictive, you can submit a support ticket to have it increased.
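If you want to check your current On-Demand vCPU quota before picking an instance size, Service Quotas can report it. The quota code below is the one commonly used for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances, but verify it in your own account:

```bash
# Check the On-Demand Standard instances vCPU quota (quota code assumed; confirm in the Service Quotas console).
aws service-quotas get-service-quota \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --query "Quota.Value"
```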
Networking Configuration
All instances should reside on a dedicated subnet. Sharing a subnet with other instances can lead to communication complications. For instance, consider a scenario with an instance named TroubleMaker that is not on the overlay network. If TroubleMaker attempts to send a request to the management instance’s overlay network address, the outcome would be as follows:
- The request travels through the AWS subnet to the router.
- The router forwards the request via the overlay network.
- The CloudStack management instance, connected to the same AWS subnet as TroubleMaker, responds directly instead of routing through the router. This unexpected return path results in the response being dropped.
To resolve these communication issues, move TroubleMaker to a different subnet, ensuring that both requests and responses pass through the router.
Instances within the overlay network will utilize special interfaces that act as VXLAN tunnel endpoints (VTEPs). Each VTEP must be aware of how to reach the others via the underlying network. While you could manually configure each instance with a list of others, this approach is prone to maintenance headaches. A more efficient method is to enable VTEP discovery via multicast, which can be facilitated using AWS Transit Gateway.
To enable VXLAN multicast functionality, follow these steps:
- Enable multicast support when creating the transit gateway.
- Attach the transit gateway to your subnet.
- Create a multicast domain for the transit gateway with IGMPv2 support enabled.
- Associate the multicast domain with your subnet.
- Configure the eth0 interface on each instance to use IGMPv2 (see the sketch after this list).
- Ensure your instance security groups permit ingress for IGMP queries (protocol 2 traffic from 0.0.0.0/32) and VXLAN traffic (UDP port 4789 from other instances).
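Taken together, here is a hedged sketch of those steps using the AWS CLI, plus the sysctl change on each instance. All IDs are placeholders, and the IGMP setting assumes the underlay interface is eth0:

```bash
# Create a transit gateway with multicast support enabled (placeholder IDs throughout).
aws ec2 create-transit-gateway --options MulticastSupport=enable

# Attach the transit gateway to the VPC subnet that holds your CloudStack instances.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0

# Create a multicast domain with IGMPv2 support and associate it with the same subnet.
aws ec2 create-transit-gateway-multicast-domain \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --options Igmpv2Support=enable

aws ec2 associate-transit-gateway-multicast-domain \
    --transit-gateway-multicast-domain-id tgw-mcast-domain-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0

# On each instance, force IGMPv2 on the underlay interface (assumed to be eth0) and persist it.
echo "net.ipv4.conf.eth0.force_igmp_version=2" >> /etc/sysctl.conf
sysctl -p
```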
CloudStack VMs must connect to the same bridge as the VXLAN interface. As mentioned in the previous article, CloudStack requires specific naming conventions. I recommend naming the interface with a prefix of “eth.” This naming convention guides CloudStack in selecting the appropriate bridge, thus eliminating the need for a dummy interface as seen in the initial setup.
The following code snippet illustrates how I configured networking in CentOS 7. Be sure to input values for these variables:
- $overlay_host_ip_address, $overlay_netmask, and $overlay_gateway_ip: Use values corresponding to the overlay network you’re establishing.
- $dns_address: I suggest using the base of the VPC IPv4 network range plus two, which is where the VPC DNS resolver lives (for example, 172.31.0.2 in a 172.31.0.0/16 VPC). Avoid the address 169.254.169.253, as CloudStack reserves link-local addresses for its own purposes.
- $multicast_address: Select a multicast address for VXLAN from the multicast range that won’t conflict with existing addresses. I recommend choosing from the IPv4 local scope (239.255.0.0/16).
- $interface_name: Specify the name of the interface designated for VXLAN communication.
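Since the full script isn't reproduced here, below is a minimal sketch, assuming CentOS 7 with iproute2, of how the VXLAN interface and bridge could be wired together. The VXLAN ID (100), the interface name ethvxlan0, the bridge name cloudbr0, and all of the sample values are placeholders rather than values taken from the scripts mentioned above:

```bash
#!/bin/bash
# Hedged sketch: create a VXLAN tunnel endpoint and a bridge for CloudStack on CentOS 7.
# Fill these in with the values described in the list above (all placeholders here).
overlay_host_ip_address="10.100.0.10"
overlay_netmask="24"                    # prefix length
overlay_gateway_ip="10.100.0.1"
dns_address="172.31.0.2"                # VPC base + 2
multicast_address="239.255.0.100"       # from the IPv4 local scope
interface_name="eth0"                   # underlay interface carrying VXLAN traffic

# Create the VXLAN interface. The "eth" prefix on the name steers CloudStack toward
# the right bridge, as noted earlier. VXLAN ID 100 is arbitrary.
ip link add ethvxlan0 type vxlan id 100 dstport 4789 \
    group "$multicast_address" dev "$interface_name"

# Create the bridge, attach the VXLAN interface, and bring everything up.
ip link add cloudbr0 type bridge
ip link set ethvxlan0 master cloudbr0
ip link set ethvxlan0 up
ip link set cloudbr0 up

# Give the host its overlay address.
ip addr add "${overlay_host_ip_address}/${overlay_netmask}" dev cloudbr0

# Send traffic for the wider overlay range (placeholder CIDR) through the router instance.
ip route add 10.100.0.0/16 via "$overlay_gateway_ip" dev cloudbr0

# Use the VPC resolver for DNS.
echo "nameserver ${dns_address}" >> /etc/resolv.conf
```

These commands don't survive a reboot; for a persistent setup, the equivalent settings would go into ifcfg files under /etc/sysconfig/network-scripts on CentOS 7, which is what the scripts referenced earlier are meant to handle.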