Optimizing Cross-AZ Traffic Costs on Amazon EKS with Istio and Karpenter
In the ever-evolving realm of cloud-native technologies, businesses leveraging Amazon Elastic Kubernetes Service (Amazon EKS) often face significant hurdles that impede their quest for operational efficiency and cost savings. Among these challenges, the costs tied to Cross Availability Zone (AZ) traffic stand out, as do issues related to achieving seamless scalability, efficiently provisioning appropriately sized instances for nodes, and managing service networking through topology-aware load balancing. These obstacles not only inflate operational expenses but also jeopardize the ability to maintain high availability and effectively scale resources.
This post aims to propose a solid solution by integrating Istio, Karpenter, and Amazon EKS to directly tackle these challenges. The suggested approach offers a pathway to optimize Cross AZ traffic costs, ensuring high availability in a cost-effective manner while promoting efficient scaling of the associated compute resources.
By exploring this implementation, you will uncover how to harness the combined power of Istio, Karpenter, and Amazon EKS to overcome scalability issues, provision suitable instances, and adeptly manage service networking—all while keeping a tight rein on operational costs. Through this guidance, we aspire to equip businesses with the insights necessary to enhance their Amazon EKS environments for improved performance, scalability, and cost efficiency.
A strategic way to reduce expenses and keep operations smooth is to distribute your workloads (i.e., Pods) across multiple AZs. Essential Kubernetes features like Pod affinity, Pod anti-affinity, Node affinity, and Pod Topology Spread facilitate this, while tools like Istio and Karpenter take the strategy even further.
Kubernetes Constructs
- Pod Affinity: Pod affinity allows you to influence Pod scheduling based on the presence or absence of other Pods. By establishing rules, you can dictate whether Pods should be co-located or spread across different AZs, aiding in network cost optimization and potentially improving application performance.
- Pod Anti-affinity: Serving as the opposite of Pod affinity, Pod anti-affinity ensures that designated Pods aren’t scheduled on the same node, which is crucial for high availability and redundancy. This mechanism protects against the simultaneous loss of critical services during a node failure.
- Node Affinity: Similar to Pod affinity, Node Affinity focuses on nodes based on specific labels like instance type or availability zone. This feature enhances the management of Amazon EKS clusters by assigning Pods to appropriate nodes, potentially resulting in cost reductions or performance gains.
- Pod Topology Spread: Pod Topology Spread enables an even distribution of Pods across defined topology domains, such as nodes, racks, or AZs. Applying topology spread constraints improves load balancing and fault tolerance, resulting in a more resilient and balanced cluster (see the sketch after this list).
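To make these constructs concrete, here is a minimal sketch of a Deployment that spreads its replicas across AZs with a topology spread constraint and prefers distinct nodes through Pod anti-affinity. The application name, labels, and image are illustrative assumptions, not manifests from the repository used later in this post.

```yaml
# Illustrative sketch only: app name, labels, and image are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      # Spread replicas evenly across Availability Zones.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: sample-app
      # Prefer placing replicas on different nodes for redundancy.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: sample-app
      containers:
        - name: sample-app
          image: public.ecr.aws/nginx/nginx:latest   # placeholder image
          ports:
            - containerPort: 80
```

Using whenUnsatisfiable: ScheduleAnyway keeps the spread a soft preference, so scheduling is never blocked if one zone temporarily lacks capacity.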
Beyond these Kubernetes features, tools like Istio and Karpenter significantly refine your Amazon EKS cluster. Istio addresses service networking challenges, while Karpenter focuses on right-sized instance provisioning, both essential for scaling efficiency and cost management.
- Istio: Istio acts as a service mesh using the high-performance Envoy proxy to optimize the connection, management, and security of microservices. It simplifies traffic management, security, and observability, allowing developers to delegate these responsibilities to the service mesh while enabling comprehensive metric collection on network traffic between microservices via sidecar proxies.
- Karpenter: Karpenter is built to provide the right compute resources to match your application’s requirements within seconds, rather than minutes, by monitoring the aggregate resource requests of unschedulable pods and determining when to launch and terminate nodes to minimize scheduling latencies.
By integrating Pod affinity, Pod Anti-affinity, Node Affinity, Pod topology spread, Istio, and Karpenter, organizations can optimize their network traffic within Amazon EKS clusters, reducing cross-AZ traffic and potentially saving on related expenses.
Adopting these best practices in your Amazon EKS cluster configuration can lead to a cost-effective, optimized, and highly available Kubernetes environment on AWS. Regularly assess your cluster’s performance and modify the configuration as necessary to maintain an ideal balance between cost savings, high availability, and redundancy.
Optimizing AZ Traffic Costs
We will deploy two versions of a simple application behind a single ClusterIP service. Each version returns a different response for the same endpoint, so when we query the istio-ingress-gateway's LoadBalancer at that endpoint, the response tells us which version, and therefore which destination, served the request. This makes the effect of the load-balancing configuration easy to observe later on.
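The actual manifests are in the GitHub repo referenced later in this post; the following condensed sketch only illustrates the pattern, with names, labels, ports, and images as assumptions: two Deployments share an app label but carry different version labels, and a single ClusterIP Service selects both.

```yaml
# Condensed sketch (names, labels, ports, and images are assumptions; see the GitHub repo for the actual manifests).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
        - name: helloworld
          image: docker.io/istio/examples-helloworld-v1   # sample image; a helloworld-v2 Deployment mirrors this with version: v2
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: ClusterIP
  selector:
    app: helloworld      # matches both v1 and v2 Pods, so responses reveal which version answered
  ports:
    - name: http
      port: 5000
      targetPort: 5000
```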
Solution Overview
We will utilize Istio’s weighted distribution feature by configuring a DestinationRule object to control the traffic between AZs. The objective is to route the majority of traffic from the load balancer to the pods operating in the same AZ.
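Istio exposes this weighted distribution through the locality load-balancer settings of a DestinationRule. The sketch below is illustrative: the host, region, and zone names are assumptions, and the weights simply bias traffic originating in one AZ to stay in that AZ; the repo referenced later contains the actual configuration. Note that Istio applies locality load balancing only when outlier detection is also configured.

```yaml
# Illustrative sketch: host, region, and zone names are assumptions; adjust to your cluster.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
          # For traffic originating in us-west-2a, keep 90% in the same AZ.
          - from: us-west-2/us-west-2a/*
            to:
              "us-west-2/us-west-2a/*": 90
              "us-west-2/us-west-2b/*": 5
              "us-west-2/us-west-2c/*": 5
    outlierDetection:           # required for locality load balancing to take effect
      consecutive5xxErrors: 1
      interval: 30s
      baseEjectionTime: 60s
```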
Prerequisites
- eksdemo CLI
- kubectl
- Amazon EKS Cluster (version 1.28)
- eks-node-viewer
- Karpenter (version 0.32)
- Istio (version 1.19.1)
Provision an Amazon EKS Cluster using the eksdemo command line interface (CLI):
eksdemo create cluster istio-demo -i m5.large -N 3
Install Karpenter to manage node provisioning for the cluster:
eksdemo install autoscaling-karpenter -c istio-demo
Install eks-node-viewer for visualizing dynamic node usage within the cluster. After the installation, run eks-node-viewer in the terminal to see the three nodes of your Amazon EKS cluster provisioned during cluster creation.
Example output:
Install Istio version 1.19.1 using the istioctl CLI and the demo profile:
istioctl install --set profile=demo
Verify all the pods and services have been installed:
kubectl -n istio-system get pods
Example output:
Confirm that the Istio ingress gateway has a public external IP assigned:
kubectl -n istio-system get svc
Example output:
Let’s now proceed with deploying the solution. All the deployment configurations we will use in this post are available on the GitHub repo – amazon-eks-istio-karpenter.
Deploy Karpenter NodePool and NodeClasses
NodePools are integral in Karpenter, serving as a set of rules that dictate the characteristics of the nodes Karpenter provisions and the pods that can be scheduled on them. With NodePools, users can specify particular taints to control pod placement, set temporary startup taints, limit nodes to specific zones, instance types, and architectures, and even establish node expiration defaults. It’s essential to have at least one NodePool configured, as Karpenter depends on them to operate effectively.
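For reference, with Karpenter v0.32 (the v1beta1 API) a NodePool and its companion EC2NodeClass look roughly like the sketch below. The zone list, instance-category requirements, IAM role name, and discovery tags are assumptions and must match what exists in your account; the manifests actually used in this walkthrough are in the GitHub repo.

```yaml
# Rough sketch for Karpenter v0.32 (v1beta1); zones, role name, and discovery tags are assumptions.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["m", "c", "r"]
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["us-west-2a", "us-west-2b", "us-west-2c"]
      nodeClassRef:
        name: default
  limits:
    cpu: 100
  disruption:
    consolidationPolicy: WhenUnderutilized
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: KarpenterNodeRole-istio-demo        # assumed node IAM role name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: istio-demo  # assumed discovery tag on the cluster's subnets
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: istio-demo
```

Constraining the zone requirement is what keeps Karpenter-provisioned capacity aligned with the AZs that your workloads and locality rules target.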