From the outset of our discussions with clients, our goal has always been to ensure that Amazon Elastic Kubernetes Service (EKS) offers the premier managed Kubernetes experience in the cloud. When we initially introduced EKS, our primary focus was on delivering a managed Kubernetes control plane, but we had no intention of stopping there. Today, we are thrilled to unveil the next phase of Amazon EKS: managed node groups.
In this post, we will delve into the rationale behind the creation of EKS managed node groups, guide you through the process of creating and managing worker nodes for your clusters, and share what lies ahead in our journey to simplify Kubernetes operations on AWS.
Overview
Managed node groups simplify the addition of worker nodes (EC2 instances) that provide compute capacity for your clusters. You can create, update, scale, or terminate nodes for your cluster with a single operation using the EKS console, eksctl, the AWS CLI, the AWS API, or infrastructure-as-code tools such as CloudFormation and Terraform.
Each managed node group launches an Auto Scaling group (ASG) for your cluster, which can span multiple Availability Zones. EKS orchestrates rolling updates and drains nodes before terminating them, so your applications remain highly available. All nodes run the latest EKS-optimized AMIs in your AWS account, and there is no additional charge for using EKS managed node groups; you pay only for the AWS resources, such as EC2 instances or EBS volumes, that you provision.
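For example, if you use eksctl, you can create a cluster together with a managed node group in a single command. The following is only a minimal sketch, assuming an eksctl release that supports the --managed flag; the cluster name, region, and node counts are placeholders you would adjust for your environment.

# Create a Kubernetes 1.14 cluster with one managed node group of three nodes.
# "demo-cluster" is a placeholder name.
$ eksctl create cluster \
    --name demo-cluster \
    --version 1.14 \
    --region us-west-2 \
    --managed \
    --nodes 3 --nodes-min 2 --nodes-max 4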
Before diving deeper into the mechanics of managed node groups, let’s take a step back to understand how they conceptually transform the management of your EKS clusters.
The Extended EKS API
Managed node groups introduce new capabilities to the EKS API. Previously, the EKS API provided a highly available control plane across multiple Availability Zones (AZs), along with control plane logging and least-privilege (IAM) access at the pod level. Once your control plane was up, you would use eksctl, CloudFormation, or other tools to create and manage the EC2 instances for your cluster. Now, the EKS API also natively manages the Kubernetes data plane: node groups are first-class objects in the EKS management console, making it easier to manage and visualize the infrastructure supporting your cluster in one centralized location.
Why is this significant? It provides the foundation for delivering a fully managed data plane — managing everything from security updates to Kubernetes version upgrades, monitoring, and alerts. This progression brings us closer to our vision of offering production-ready clusters, alleviating the burden of undifferentiated heavy lifting.
New API Commands
Let’s quickly review the new EKS node group API from the perspective of the AWS CLI. To keep things simple, we’ll present command usage conceptually, omitting extra parameters like --cluster-name that you’d typically include when executing commands.
To create a managed node group, you would use:
$ aws eks create-nodegroup
You can only create a node group for your cluster that matches its current Kubernetes version, and all node groups launch with the latest AMI release for that minor Kubernetes version. Initially, managed node groups are available only for newly created Kubernetes version 1.14 clusters (platform version 3). There is also a limit of 10 managed node groups per EKS cluster, with a maximum of 100 nodes per node group, which means you can run up to 1,000 managed nodes in a given EKS cluster.
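The one-line form above omits the required parameters. As a rough sketch, a complete invocation might look like the following; the cluster name, node group name, subnet IDs, and IAM role ARN are placeholders, and the full parameter set is documented in the EKS API reference.

# Create a managed node group with two to five m5.large nodes.
# All identifiers below (cluster, subnets, role ARN) are placeholders.
$ aws eks create-nodegroup \
    --cluster-name demo-cluster \
    --nodegroup-name ng0 \
    --subnets subnet-aaaa1111 subnet-bbbb2222 \
    --node-role arn:aws:iam::111122223333:role/eks-node-role \
    --scaling-config minSize=2,maxSize=5,desiredSize=2 \
    --instance-types m5.large \
    --disk-size 20 \
    --labels environment=dev

Recent versions of the AWS CLI also include a nodegroup-active waiter (aws eks wait nodegroup-active) that you can use to block until provisioning completes.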
To view the managed node groups in operation on a cluster, you can use:
$ aws eks list-nodegroups
For additional details regarding a specific managed node group, you can execute:
$ aws eks describe-nodegroup
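For instance, to check just the status and current scaling configuration of a hypothetical node group named ng0, you can combine describe-nodegroup with a JMESPath --query expression:

# Inspect the status and scaling configuration of node group "ng0".
$ aws eks describe-nodegroup \
    --cluster-name demo-cluster \
    --nodegroup-name ng0 \
    --query 'nodegroup.{status: status, scaling: scalingConfig}'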
To modify the configuration of a node group, such as scaling parameters, the command is:
$ aws eks update-nodegroup-config
Although node groups are largely immutable once created, you can adjust three parameters (an example follows the list below):
- Add or remove Kubernetes labels. Labels modified through the Kubernetes API or kubectl are not currently reflected in the EKS API and will not persist on new nodes launched for the node group.
- Add or remove AWS tags. These tags apply to the node group object within the EKS API and can be utilized to manage IAM access. However, at launch, these tags do not transfer to the EC2 resources created by the node group.
- Change the size of your node groups (minimum, maximum, and desired number of nodes).
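As a sketch, scaling the hypothetical node group ng0 and updating its Kubernetes labels through the EKS API might look like this; the names and values are placeholders.

# Grow node group "ng0" to between three and six nodes.
$ aws eks update-nodegroup-config \
    --cluster-name demo-cluster \
    --nodegroup-name ng0 \
    --scaling-config minSize=3,maxSize=6,desiredSize=4

# Add one label and remove another; labels are passed as JSON here.
$ aws eks update-nodegroup-config \
    --cluster-name demo-cluster \
    --nodegroup-name ng0 \
    --labels '{"addOrUpdateLabels":{"environment":"staging"},"removeLabels":["team"]}'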
To delete a managed node group, use:
$ aws eks delete-nodegroup
The draining of worker nodes is handled automatically when you delete a node group or update its Kubernetes version, as well as during ASG rebalancing and scale-in. This is a significant improvement over self-managed nodes, where you would need DaemonSet deployments or Lambda functions to orchestrate graceful node termination. During version updates, EKS respects pod disruption budgets, giving you control over the node lifecycle so your critical pods keep running until they can be safely rescheduled.
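If your workloads define PodDisruptionBudgets, EKS will not violate them while draining. As a minimal illustration (the budget name and label selector here are hypothetical), you can create one imperatively with kubectl:

# Keep at least two pods matching app=web available during node drains.
$ kubectl create poddisruptionbudget web-pdb \
    --selector=app=web \
    --min-available=2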
Note that you must delete node groups linked to a cluster before you can delete the cluster itself, as detailed in the managed node groups documentation.
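With placeholder names, the teardown order looks like this; recent AWS CLI versions also include a nodegroup-deleted waiter you can use to block until the nodes are gone.

# Delete the node group first; EKS drains the nodes before terminating them.
$ aws eks delete-nodegroup --cluster-name demo-cluster --nodegroup-name ng0

# Optionally wait for the node group deletion to finish, then delete the cluster.
$ aws eks wait nodegroup-deleted --cluster-name demo-cluster --nodegroup-name ng0
$ aws eks delete-cluster --name demo-cluster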
EKS Console Walkthrough
One of the exciting new features introduced with managed node groups is the revamped EKS console. Let’s explore this further.
To get started, we first need to create a new EKS cluster. The initial cluster creation screen, which covers general configuration, VPC and security group selection, logging settings, and so on, is unchanged, so we will skip it here. After creating your cluster, you also need to create a node IAM role, as described in the EKS documentation.
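If you prefer the CLI for this step, a sketch of creating such a role might look like the following; the role name is a placeholder, and the attached managed policies are the ones the EKS documentation lists for worker nodes.

# Trust policy allowing EC2 instances to assume the role.
$ cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

$ aws iam create-role \
    --role-name eks-node-role \
    --assume-role-policy-document file://node-trust-policy.json

# Attach the managed policies that EKS worker nodes need.
$ aws iam attach-role-policy --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
$ aws iam attach-role-policy --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
$ aws iam attach-role-policy --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly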
Once the EKS cluster is operational, meaning the control plane is active, you'll land on the cluster overview screen; note the new Node Groups list below the cluster's general configuration.
Now, let’s create a node group using the default values by clicking the “Add node group” button:
In this initial step, provide the name for the node group, such as ng0, specify the role to use for the node group (the IAM role you created previously), and select the subnets for the node group.
The next step is to configure the compute settings for the nodes:
When configuring the compute settings, you can choose the AMI (with or without GPU support), the instance type, and the disk size of the attached EBS for each node in the node group.
Next, you will choose the scaling behavior:
Aside from scaling configuration, tags, and labels, the node group configuration is fixed once created. Before submitting, you have an opportunity to review and adjust any settings of the node group.
After a minute or so, the EC2 instances will be provisioned and join your cluster as worker nodes, which you can verify in the cluster overview screen.
Let’s check to see if the nodes are visible in Kubernetes as well:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-128-14.us-west-2.compute.internal Ready <none> 6m10s v1.14.7-eks-1861c5
ip-192-168-192-6.us-west-2.compute.internal Ready <none> 6m3s v1.14.7-eks-1861c5
Indeed, the two worker nodes, part of the node group ng0, are now ready for use.
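Managed node groups also label their nodes so you can target them from Kubernetes. Assuming the eks.amazonaws.com/nodegroup label that managed nodes carry, you can filter for just the nodes in ng0:

# List only the nodes that belong to the managed node group "ng0".
$ kubectl get nodes -l eks.amazonaws.com/nodegroup=ng0

From here, you can schedule workloads onto the node group just as you would onto any other EKS worker nodes.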