Users often need to host their Kubernetes workloads in specific locations or geographies, or on premises, to meet data-locality or low-latency requirements. Amazon Elastic Kubernetes Service (Amazon EKS) offers a wide array of deployment options, ranging from cloud environments to on-premises deployments with Amazon EKS Anywhere. AWS Local Zones extend AWS infrastructure and APIs closer to end users in metropolitan and industry centers, making them a valuable resource for latency-sensitive workloads.
With Amazon EKS, users can run either extended or local clusters. In extended clusters, the Kubernetes control plane operates in an AWS Region while the nodes reside in a Local Zone or on an Outpost; in local clusters (available on AWS Outposts), the control plane also runs on premises.
To design resilience and Disaster Recovery (DR) for Amazon EKS workloads operating on Local Zones, there are several strategies to consider:
- Use Multiple Local Zones for DR: To enhance resilience or expand capacity, users might opt to deploy the Amazon EKS data plane across multiple Local Zones. This can be particularly useful where data center constraints may limit options.
- Leverage Geographic Redundancy: Local Zones parented to different AWS Regions can be paired to provide geographic redundancy. This setup keeps AWS services close to end users while retaining on-demand scaling and pay-as-you-go pricing.
In this post, we will explore how Local Zones can effectively function as a DR option for Amazon EKS workloads.
Solution Overview
In this approach, we establish separate Amazon EKS control planes for the Local Zone and the Outpost. This separation enables independent operation and failure isolation, enhancing overall system resilience. The Local Zone is parented to a different AWS Region than the Outpost's home Region, providing geographic redundancy and mitigating the risk that a single Regional failure could impact both sites at once. Local Zones provide a cost-effective DR site thanks to their consistent AWS APIs and operational model.
This solution employs a GitOps methodology to manage and synchronize workloads across Local Zone and Outpost environments. GitOps facilitates a declarative approach to application and infrastructure management, promoting version control, automated deployments, and consistent configurations. This streamlines operational workflows and guarantees reliable, repeatable deployments.
To facilitate failover, Amazon Route 53 health checks are employed to monitor the availability and responsiveness of workloads on Local Zones. Should a failure or performance degradation occur, Route 53 automatically redirects traffic to the failover site, maintaining service continuity.
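As a sketch of the Route 53 failover mechanism described above, the following commands create a health check against the Local Zone endpoint and a PRIMARY failover record that Route 53 evaluates before answering queries. The hosted zone ID, domain names, and ALB hostnames are placeholders for illustration; a matching SECONDARY record (without a health check) would point at the Outpost site.

```shell
# All IDs and hostnames below are illustrative placeholders.
# 1. Health check probing the workload exposed in the Local Zone.
HC_ID=$(aws route53 create-health-check \
    --caller-reference "eks-lz-dr-$(date +%s)" \
    --health-check-config Type=HTTP,FullyQualifiedDomainName=lz-alb.example.com,Port=80,ResourcePath=/healthz,RequestInterval=30,FailureThreshold=3 \
    --query 'HealthCheck.Id' --output text)

# 2. PRIMARY failover record tied to that health check. If the health
#    check fails, Route 53 serves the SECONDARY record instead.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 60,
          "SetIdentifier": "localzone-primary",
          "Failover": "PRIMARY",
          "HealthCheckId": "'"$HC_ID"'",
          "ResourceRecords": [{"Value": "lz-alb.example.com"}]
        }
      }]
    }'
```

The SECONDARY record is created the same way with Failover set to SECONDARY and no HealthCheckId.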
It’s important to note that this design primarily addresses stateless workload failover and does not consider stateful data replication or persistent storage aspects.
Walkthrough
In this guide, we will deploy a sample application via GitOps on two distinct EKS clusters: one for Local Zones and the other for Outposts. Each cluster’s worker nodes will operate within their respective environments. We will also demonstrate how to failover the application from the primary EKS cluster to the secondary one.
The high-level steps for this solution include:
- Set up an AWS CodeCommit repository.
- Configure Flux CD and deploy a sample application.
- Execute a failover from the Local Zone to the Outpost using Route 53.
Prerequisites
Before proceeding, ensure the following prerequisites are met:
- A command-line environment with git, kubectl, helm, the AWS CLI, and the Flux CLI (flux) installed. This walkthrough uses the AWS CLI, but feel free to substitute your preferred infrastructure-as-code tool.
- An Outpost configured for your primary EKS cluster.
- A Local Zone activated in the desired Region for deploying your secondary EKS cluster. You can refer to the Local Zones documentation for relevant details.
- Two EKS clusters created, both with IAM Roles for Service Accounts (IRSA) enabled: one for the Outpost deployment and another for the Local Zone, with worker nodes deployed accordingly.
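If the clusters still need to be created, a definition along these lines can be used with eksctl for the Local Zone cluster. This is an illustrative sketch: the cluster name, Region, instance type, and subnet ID are placeholders, and worker nodes for Local Zones are provisioned as self-managed node groups placed in the Local Zone subnet.

```yaml
# Illustrative eksctl ClusterConfig for the secondary (Local Zone) cluster.
# All names, the Region, and subnet IDs below are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-localzone-dr
  region: us-west-2
iam:
  withOIDC: true          # enables IRSA, as required by the prerequisites
nodeGroups:               # self-managed nodes placed in the Local Zone subnet
  - name: lz-nodes
    instanceType: t3.xlarge
    desiredCapacity: 2
    subnets:
      - subnet-0123xxxxx  # Local Zone subnet ID
```

An analogous configuration targets the Outpost for the primary cluster.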
Step A: Set Up a CodeCommit Repository and Variables
In this section, we will utilize a CodeCommit repository to store the configuration files for Flux CD, which include the application code and Kubernetes objects. Follow the guide for setting up Git credentials for CodeCommit.
Next, populate the environment variables for the Outposts deployment, specifically outposts_subnet and outposts_eks_cluster_name:
export outposts_subnet=subnet-0123xxxxx
export outposts_eks_cluster_name=<your-outposts-eks-cluster>
Run the following command to create a CodeCommit repository (be sure to replace the placeholders with your specific values):
aws codecommit create-repository \
    --repository-name eks-localzones-repo \
    --repository-description "Amazon-EKS-LocalZones-DR" \
    --region <region>
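If you need to look up the repository's clone URL later, it can be retrieved from the repository metadata:

```shell
# Returns the HTTPS clone URL for the repository created above.
aws codecommit get-repository \
    --repository-name eks-localzones-repo \
    --region <region> \
    --query 'repositoryMetadata.cloneUrlHttp' \
    --output text
```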
Afterward, clone the repository as follows:
mkdir ~/flux
cd ~/flux
git clone <code-commit-git-repo-url> eks-localzones-repo
Step B: Configure Flux CD and Deploy a Sample Application
Once your CodeCommit repository is established, deploy the Kubernetes application backend and related resources. Flux CD will automate the deployment and management of applications across both EKS clusters.
Create the necessary directory structure:
cd ~/flux/eks-localzones-repo
mkdir -p apps/{base/game-2048,localzone,outposts}
mkdir -p infrastructure/{base,localzone,outposts,base/ingress-controller}
mkdir -p clusters/{localzone/flux-system,outposts/flux-system}
Next, create the manifest for the AWS Load Balancer Controller application:
cat << EoF > ~/flux/eks-localzones-repo/infrastructure/base/ingress-controller/alb.yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
spec:
  chart:
    spec:
      chart: aws-load-balancer-controller
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: eks
        namespace: flux-system
      version: 1.5.4
  interval: 10m0s
  timeout: 10m0s
  releaseName: aws-load-balancer-controller
  values:
    serviceAccount:
      name: aws-load-balancer-controller
EoF
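The HelmRelease above references a HelmRepository named eks in the flux-system namespace. If that source does not already exist in your repository, a manifest along these lines can be added alongside it; the URL is the public eks-charts Helm repository, and the file name is illustrative.

```shell
cat << EoF > ~/flux/eks-localzones-repo/infrastructure/base/ingress-controller/helmrepo.yaml
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: eks
  namespace: flux-system
spec:
  interval: 10m0s
  url: https://aws.github.io/eks-charts
EoF
```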
Following that, set up a kustomization file for the AWS Load Balancer Controller:
cat << EoF > ~/flux/eks-localzones-repo/infrastructure/base/ingress-controller/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- alb.yaml
EoF
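With the manifests in place, commit and push them, then point Flux at the repository on each cluster. One way to do this is with flux bootstrap, as sketched below; the CodeCommit URL, branch, and Git credentials are placeholders for the values from your own setup.

```shell
# Commit the manifests to CodeCommit.
cd ~/flux/eks-localzones-repo
git add -A && git commit -m "Add ingress controller manifests"
git push origin main

# Bootstrap Flux on the Local Zone cluster, pointing it at the
# clusters/localzone path. Repeat with --path=clusters/outposts
# against the Outposts cluster's kubectl context.
flux bootstrap git \
    --url=https://git-codecommit.<region>.amazonaws.com/v1/repos/eks-localzones-repo \
    --branch=main \
    --username=<git-credentials-username> \
    --password=<git-credentials-password> \
    --token-auth=true \
    --path=clusters/localzone
```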
By employing these strategies, users can effectively manage their Kubernetes workloads across different environments.