Three Simple Steps to Transition Your Containerized Application to AWS Lambda


In the evolving landscape of cloud computing, AWS Lambda’s support for container images presents a significant opportunity for deploying containerized web applications in a serverless environment. This approach offers automatic scaling, inherent high availability, and a billing model based on actual resource consumption—ensuring you only pay for what you use. If you’re currently leveraging containers, the integration of container image support in AWS Lambda allows you to reap these benefits without the need for extensive engineering adjustments or the adoption of unfamiliar tools and workflows. Your team can maintain its expertise with containers while enjoying the operational simplicity and cost-efficiency that serverless computing provides.

In this post, we will outline the steps necessary to transition a containerized web application to AWS Lambda with minimal modifications to your development, packaging, and deployment processes. Although we will use Ruby for our examples, the principles apply universally across programming languages.

Solution Overview

Our sample application is a containerized web service that generates PDF invoices. The goal is to migrate its business logic to a Lambda function and use Amazon API Gateway to expose a serverless RESTful web API. API Gateway is a fully managed service for creating and running APIs at scale.

Walkthrough

This article will guide you through the process of porting your containerized web application to run in a serverless environment with Lambda. The high-level steps include:

  1. Setting up the containerized application locally for testing
  2. Porting the application to run on Lambda
    • Creating a Lambda function handler
    • Modifying the container image for Lambda
    • Testing the Lambda function locally
  3. Deploying and testing on Amazon Web Services (AWS)

Prerequisites

Before we begin, ensure you have the following:

  • An AWS account
  • IAM permissions to create Lambda functions, API Gateway, IAM roles, and Amazon Elastic Container Registry (Amazon ECR)
  • Docker and AWS Command Line Interface (AWS CLI) installed
  • A basic understanding of Lambda functionality

1. Get the Containerized Application Running Locally for Testing

The sample code for this application is available on GitHub. Clone the repository to get started.

git clone https://github.com/aws-samples/aws-lambda-containerized-custom-runtime-blog.git

1.1 Build the Docker Image

Take a look at the Dockerfile in the root of the cloned repository. It uses Bitnami’s Ruby 3.0 image from the Amazon ECR Public Gallery as its base. The Dockerfile adheres to security best practices by running the application as a non-root user and exposing the invoice generator service on port 4567. In your terminal, navigate to the folder where you cloned the repository and build the Docker image with the following command:

docker build -t ruby-invoice-generator .

1.2 Test Locally

You can run the application locally by executing the Docker run command:

docker run -p 4567:4567 ruby-invoice-generator

In a real-world scenario, the order and customer details for the invoice would be provided in a POST request body or as GET query parameters. For simplicity, we use a few hard-coded values in lib/utils.rb. Open another terminal and test invoice generation with:

curl "http://localhost:4567/invoice" \
  --output invoice.pdf \
  --header 'Accept: application/pdf'

This command will create the file invoice.pdf in the folder from where you ran the curl command. Alternatively, you can test the URL directly in your browser. Press Ctrl+C to stop the container once you’ve confirmed that the application is functioning correctly and you are ready to port it to Lambda.

2. Port the Application to Run on Lambda

Lambda's operational model differs from that of a long-running web server: rather than listening on a port, your code is invoked per request through a function handler, which serves as the entry point to your application logic when you package a Lambda function as a container image. By moving the business logic into a Lambda function, you separate concerns: the web server code in the container image is replaced by an HTTP API fronted by API Gateway. This lets you focus on the business logic while API Gateway acts as a proxy that routes requests to your function.

2.1 Create the Lambda Function Handler

The code for the Lambda function is defined in function.rb; the handler is shown below. One crucial difference between the original Sinatra-powered code and the Lambda handler is that the PDF must be base64 encoded. This is required for returning binary media from API Gateway's Lambda proxy integration, which decodes it before delivering it to the client.

def self.process(event:, context:)
  self.logger.debug(JSON.generate(event))
  invoice_pdf = Base64.encode64(Invoice.new.generate)
  {
    'statusCode' => 200,
    'headers' => { 'Content-Type' => 'application/pdf' },
    'body' => invoice_pdf,
    'isBase64Encoded' => true
  }
end

For a refresher on Lambda function handlers, check the documentation on writing a Lambda handler in Ruby. This marks the new addition to the development workflow—creating a Lambda function handler to wrap the business logic.
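To sanity-check a handler like this without deploying, you can invoke it directly with a simulated API Gateway proxy event. The sketch below is illustrative, not the repository's code: it stubs the Invoice class (whose real implementation renders an actual PDF) with placeholder bytes, and the sample event and logger setup are assumptions.

```ruby
require 'base64'
require 'json'
require 'logger'

# Hypothetical stand-in for the repository's Invoice class, which really
# renders a PDF document; placeholder bytes let us exercise the handler.
class Invoice
  def generate
    '%PDF-1.4 fake invoice bytes'
  end
end

module Billing
  class InvoiceGenerator
    def self.logger
      @logger ||= Logger.new($stdout)
    end

    def self.process(event:, context:)
      self.logger.debug(JSON.generate(event))
      invoice_pdf = Base64.encode64(Invoice.new.generate)
      {
        'statusCode' => 200,
        'headers' => { 'Content-Type' => 'application/pdf' },
        'body' => invoice_pdf,
        'isBase64Encoded' => true
      }
    end
  end
end

# Simulate the shape of an API Gateway proxy event, as a unit test would.
event = { 'path' => '/invoice', 'httpMethod' => 'GET' }
response = Billing::InvoiceGenerator.process(event: event, context: nil)

puts response['statusCode']                                # 200
puts Base64.decode64(response['body']).start_with?('%PDF') # true
```

Because `isBase64Encoded` is true, API Gateway decodes `body` back to raw PDF bytes before returning it, so the client receives the same binary content the container served on port 4567.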

2.2 Modify the Container Image for Lambda

AWS offers open-source base images for Lambda, currently supporting Ruby runtime versions 2.5 and 2.7. However, you can use any version of your preferred runtime (in this instance, Ruby 3.0) by packaging it within your Docker image. We will utilize Bitnami’s Ruby 3.0 image from the Amazon ECR Public Gallery as the basis. It’s important to note that Lambda only accepts container images stored in Amazon ECR; arbitrary container images cannot be directly uploaded to Lambda.

Since the function handler is the entry point for business logic, the CMD in the Dockerfile must direct to the function handler rather than initiating the web server. Our custom image necessitates an additional step: integrating the runtime interface client to manage interactions between the Lambda service and our function code.

The runtime interface client is an open-source, lightweight interface that receives requests from Lambda, forwards them to the function handler, and sends back the results to the Lambda service. The relevant modifications to the Dockerfile are:

ENTRYPOINT ["aws_lambda_ric"]
CMD ["function.Billing::InvoiceGenerator.process"]

The Docker command executed when the container runs is: aws_lambda_ric function.Billing::InvoiceGenerator.process. You can refer to Dockerfile.lambda in the cloned repository for the complete code. This image adheres to best practices for optimizing Lambda container images by utilizing a multi-stage build, ensuring the final image is lightweight, tagged as 3.0-prod, and devoid of development dependencies.
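For orientation, a condensed sketch of a Dockerfile along these lines is shown below. The base image tag, file paths, and install steps here are illustrative assumptions; Dockerfile.lambda in the repository is the authoritative version, including the multi-stage build.

```dockerfile
# Illustrative sketch only; see Dockerfile.lambda in the repository for the
# real multi-stage build.
FROM public.ecr.aws/bitnami/ruby:3.0-prod

WORKDIR /var/task
COPY Gemfile Gemfile.lock function.rb ./
COPY lib/ ./lib/

# Install the app's gems plus the runtime interface client, which bridges
# the Lambda service and the function handler.
RUN bundle install && gem install aws_lambda_ric

# Run the runtime interface client with the handler as its argument.
ENTRYPOINT ["aws_lambda_ric"]
CMD ["function.Billing::InvoiceGenerator.process"]
```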

Create the Lambda-compatible container image with:

docker build -f Dockerfile.lambda -t lambda-ruby-invoice-generator .

This concludes our Dockerfile modifications, where we introduced a new dependency on the runtime interface client, using it as our container’s entrypoint.

