on 17 JUN 2020
in Amazon CloudWatch, Amazon DynamoDB, Amazon Simple Storage Service (S3), AWS Amplify, AWS CloudFormation, AWS CodeDeploy, AWS Lambda, AWS Well-Architected Tool, Serverless
This blog post series utilizes the AWS Well-Architected Tool along with the Serverless Lens to assist customers in developing and managing applications by adhering to best practices. Each installment tackles one of the nine serverless-specific inquiries identified by the Serverless Lens, along with recommended best practices. For a comprehensive overview, refer to the introduction post for a table of contents and an explanation of the example application.
Question OPS2: How do you approach application lifecycle management?
In this installment, we continue from part 1 of this Operational Excellence question. Previously, I discussed the importance of employing infrastructure as code with version control to facilitate repeatable application deployment.
Best Practice: Prototype New Features Using Temporary Environments
By storing application configurations as infrastructure as code, you can deploy multiple, repeatable, isolated versions of an application. Setting up temporary environments for new feature prototyping allows you to dismantle them upon completion. These environments provide enhanced feature isolation and more accurate development interactions with managed services, helping you gain confidence that your workloads integrate and operate as intended.
Additionally, these environments can exist in separate accounts, which aids in isolating limits, access to data, and improving resiliency. For more insights on multi-account deployments, check out this guide.
There are various strategies to deploy distinct environments for an application. To streamline deployment, it’s advisable to separate dynamic configuration from your infrastructure logic. If you manage an application with the AWS Serverless Application Model (AWS SAM), you can utilize an AWS SAM CLI parameter to specify a new stack-name, which deploys a new instance of the application as a separate stack.
For instance, if you have an existing AWS SAM application with a stack-name of app-test, you can deploy a new version by specifying a different stack-name like app-newtest with the following command:
sam deploy --stack-name app-newtest
This command deploys a completely new instance of the application within the same account as a distinct stack.
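To keep dynamic configuration separate from infrastructure logic, you can also pass per-environment values at deploy time. A minimal sketch, assuming the template exposes a hypothetical Stage parameter (the airline example does not define one; it is shown here only to illustrate the pattern):

```shell
# Deploy a second, isolated copy of the application as its own stack.
# "Stage" is a hypothetical template parameter used to illustrate keeping
# dynamic configuration out of the infrastructure logic.
STACK_NAME="app-newtest"

# Guarded so the sketch is a no-op where the SAM CLI is not installed.
if command -v sam >/dev/null 2>&1; then
  sam deploy \
    --stack-name "$STACK_NAME" \
    --parameter-overrides "Stage=${STACK_NAME}" \
    --no-confirm-changeset
fi

echo "Deployed stack: ${STACK_NAME}"
```

Because the stack name is the only identifier that changes, the same template and code produce any number of isolated environments.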
In the serverless airline example used in this series, you can deploy a new copy of the application by following the deployment instructions, either in the same AWS account or in a different one. This is particularly useful when each developer on the team has their own sandbox environment. In this scenario, you only need to configure payment provider credentials as environment variables and seed the database with possible flights, as these are currently manual post-installation tasks.
However, maintaining an entirely separate codebase for every environment becomes increasingly difficult to manage over time. Since the airline application code is stored in a fork in a GitHub account, you can use git branches for different environments. Typically, a development team deploys a main branch to production, maintains a dev branch as staging, and creates feature branches when developing new functionality. This allows safe prototyping in sandbox environments without affecting the main codebase, using git to merge code and resolve conflicts. Changes are automatically deployed to production once they are merged into the main (or production) branch.
Git Branching Flow
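The branching flow described above can be sketched with plain git commands. This runs against a throwaway local repository; the branch names match the flow in the text, while the file contents are illustrative:

```shell
# Sketch of the branching flow: main (production), dev (staging), and a
# short-lived feature branch, using a throwaway local repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # illustrative identity
git config user.name "Dev"
git checkout -q -b main
git commit -q --allow-empty -m "initial commit"

# Long-lived staging branch.
git checkout -q -b dev

# Short-lived feature branch for prototyping in a sandbox environment.
git checkout -q -b new-feature
echo "prototype change" > feature.txt
git add feature.txt
git commit -q -m "Add new feature"

# Once the feature is validated, merge it back toward production.
git checkout -q dev
git merge -q --no-ff new-feature -m "Merge new-feature into dev"
git checkout -q main
git merge -q --no-ff dev -m "Promote dev to main"

git log --oneline | head -n 3
```

With Amplify Console (or any CI/CD pipeline) connected to the repository, the final merge into main is what triggers the production deployment.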
Since the airline example uses AWS Amplify Console, there are multiple options for creating a new environment linked to a feature branch. One option is to create an entirely new Amplify Console app deployment, either in a separate Region or in a separate AWS account, and connect it to a feature branch by following the deployment instructions. To do this, create a new branch called new-feature in GitHub. In the Amplify Console, select Connect App, and navigate to the repository and the new-feature branch. Configure the payment provider credentials as environment variables.
Alternatively, you can connect an existing Amplify Console deployment to a git branch, deploying the new-feature branch within the same AWS account and Region.
In the Amplify Console, go to the existing app, select Connect Branch, and choose the new-feature branch. If the feature branch only involves frontend code changes, you can opt to utilize the same backend components.
Amplify Console then creates a new stack in addition to the develop branch based on the code in the feature branch. You won’t need to input the payment provider environment variables again, as these are stored per application, per Region, applicable to all branches.
By employing git and branching with Amplify Console, you can achieve automatic deployments whenever changes are pushed to the GitHub repository. In the event of deployment issues, simply revert the changes in git, triggering a redeploy to a stable version. Once satisfied with the feature, you can merge changes into the production branch, which will again prompt another deployment.
Since it is easy to set up multiple test environments, maintain good application hygiene and manage costs by identifying and deleting any temporary environments that are no longer needed. It can be useful to include stack owner contact details via CloudFormation tags. Use scheduled Amazon CloudWatch rules to notify owners and tag temporary environments for deletion, and provide a mechanism to delay deletion if necessary.
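One way to find stale environments is to query CloudFormation for stacks carrying a "temporary" tag. A hedged sketch, assuming an illustrative tagging convention (the tag key and value below are not defined by the airline example):

```shell
# Illustrative tag convention applied to temporary stacks at deploy time.
TAG_KEY="environment"
TAG_VALUE="temporary"

# List stack names carrying the temporary tag; guarded so the sketch is a
# no-op where the AWS CLI is not installed or configured.
if command -v aws >/dev/null 2>&1; then
  aws cloudformation describe-stacks \
    --query "Stacks[?Tags[?Key=='${TAG_KEY}' && Value=='${TAG_VALUE}']].StackName" \
    --output text
fi

echo "Checked for stacks tagged ${TAG_KEY}=${TAG_VALUE}"
```

A scheduled rule could run a similar query in a Lambda function, notify the stack owner listed in the tags, and delete stacks whose owners do not respond within a grace period.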
Prototyping Locally
With AWS SAM or a third-party framework, you can locally run API Gateway and invoke Lambda function code for quicker development iterations. Local debugging and testing can swiftly confirm that function code operates correctly and is also useful for certain unit tests. However, local testing cannot replicate the full functionality of the cloud. It is more suited for testing services with custom logic, such as Lambda, rather than attempting to duplicate all cloud-managed services like Amazon SNS or Amazon S3 locally. Rather than trying to bring the cloud to the test, focus on bringing the testing to the cloud.
Here’s an example of executing a function locally:
Using AWS SAM CLI, I invoke the Airline-GetLoyalty Lambda function locally to assess functionality. AWS SAM CLI employs Docker to simulate the Lambda runtime. Since the function only reads from DynamoDB, I can either use stubbed data or set up DynamoDB Local.
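If you choose DynamoDB Local, it can run as a Docker container alongside the SAM CLI. A sketch, assuming an illustrative table name and key schema (the airline example's actual schema may differ):

```shell
# Start DynamoDB Local so locally invoked functions have a table to read
# from. Guarded so the sketch is a no-op without Docker and the AWS CLI.
ENDPOINT="http://localhost:8000"

if command -v docker >/dev/null 2>&1 && command -v aws >/dev/null 2>&1; then
  docker run -d --name ddb-local -p 8000:8000 amazon/dynamodb-local

  # Illustrative table name and key; adjust to match the function's code.
  aws dynamodb create-table \
    --table-name Loyalty \
    --attribute-definitions AttributeName=customerId,AttributeType=S \
    --key-schema AttributeName=customerId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --endpoint-url "$ENDPOINT"
fi

echo "DynamoDB Local endpoint: ${ENDPOINT}"
```

The function code would then need to point its DynamoDB client at the local endpoint, typically via an environment variable, when running locally.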
- I pass a JSON event to the function to emulate the event from API Gateway, and include environment variables as JSON. Sample events can be created using sam local generate-event.
- I run sam build GetFunc to build the function dependencies, which use Node.js.
- I run sam local invoke, passing in the event payload and environment variables. This starts a Docker container, runs the function, and returns the result.
$ sam build GetFunc
Building resource 'GetFunc'
Running NodejsNpmBuilder:NpmPack
Running NodejsNpmBuilder:CopyNpmrc
Running NodejsNpmBuilder:CopySource
Running NodejsNpmBuilder:NpmInstall
Running NodejsNpmBuilder:CleanUpNpmrc
Build Succeeded
$ sam local invoke --event src/get/event
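The steps above can be run end to end as follows. The function name GetFunc and event path follow the transcript; the env.json environment-variables file is assumed to exist alongside the template:

```shell
# End-to-end local invoke of the airline GetFunc function. Guarded so the
# sketch is a no-op where the SAM CLI is not installed.
EVENT_FILE="src/get/event"   # event path used in the transcript above

if command -v sam >/dev/null 2>&1; then
  mkdir -p "$(dirname "$EVENT_FILE")"

  # 1. Generate a sample API Gateway proxy event to edit as needed.
  sam local generate-event apigateway aws-proxy > "$EVENT_FILE"

  # 2. Build the function and its Node.js dependencies.
  sam build GetFunc

  # 3. Invoke the function in a Lambda-like Docker container, passing the
  #    event payload and an environment-variables file (assumed present).
  sam local invoke GetFunc \
    --event "$EVENT_FILE" \
    --env-vars env.json
fi
```

sam local invoke prints the function's return value to stdout, so the same loop of edit, build, and invoke gives quick feedback before deploying to a cloud environment.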