Building a Cross-Account CI/CD Pipeline for Single-Tenant SaaS Solutions

With the rise in demand from enterprise clients for a pay-as-you-go consumption model, numerous independent software vendors (ISVs) are transitioning to software as a service (SaaS). Typically, such solutions are architected using a multi-tenant model, allowing infrastructure resources and applications to be shared among multiple customers while maintaining isolation between their environments. However, there are instances where sharing resources isn’t feasible due to security or compliance concerns, necessitating a single-tenant environment.

To enhance segregation between tenants, isolating environments at the AWS account level is a recommended approach. This strategy offers advantages like avoiding network overlap, eliminating shared account limits, and simplifying billing and usage tracking. Nonetheless, it introduces operational challenges. Unlike multi-tenant solutions that manage a single shared production environment, single-tenant setups require dedicated production environments for each customer, which can complicate the rapid deployment of new features as each version must be manually deployed to each tenant’s environment.

This article outlines an automated deployment process designed to deliver software efficiently, securely, and with fewer errors for each existing tenant. I will guide you through the steps to establish and configure a CI/CD pipeline using AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation. The pipeline automatically deploys the same application version across multiple tenant AWS accounts each time a new version is released.

Building a cross-account CI/CD pipeline on AWS involves several considerations. To illustrate the process clearly, I will utilize the AWS Command Line Interface (AWS CLI) to demonstrate various configuration aspects, including artifact encryption, granting cross-account permissions, and pipeline actions.

Single-Tenancy vs. Multi-Tenancy

The first critical decision when designing your SaaS solution is determining the tenancy model. Each model has unique benefits and architectural challenges. In multi-tenant environments, customers share a common set of resources like databases and applications, which maximizes server capacity and can lead to significant cost savings. However, this necessitates careful attention to security to prevent unauthorized access to sensitive customer data, and high availability becomes crucial as downtime impacts more clients.

In contrast, single-tenant environments are inherently isolated, making security, networking isolation, and data segregation simpler to manage. Customization for individual customers is more feasible, and you can tailor application versions to meet specific tenant needs. Additionally, you eliminate the noisy-neighbor effect and can better plan infrastructure according to customer scalability requirements. However, maintaining single-tenant environments can be operationally complex due to the increased number of servers and applications to oversee.

Ultimately, the choice of tenancy model hinges on your ability to meet customer demands, which may include specific governance requirements, industry regulations, or compliance criteria that dictate the appropriate model. For further insights into modeling your SaaS solutions, refer to this blog post.

Solution Overview

To illustrate this solution, let’s consider a fictional single-tenant ISV with two customers: Pegasus and Dragon. The setup includes a central account (Tooling account) where the tools are hosted, along with two other accounts dedicated to each tenant. As shown in the accompanying architecture diagram, when a developer pushes code changes to CodeCommit, Amazon CloudWatch Events triggers the CodePipeline CI/CD pipeline, automatically deploying a new version in each tenant’s AWS account. This ensures the fictional ISV is relieved from the operational burden of manually redeploying the same version for each customer.
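
To illustrate the trigger mechanism described above, the following is a minimal sketch of the CloudWatch Events rule that starts the pipeline when the repository's main branch changes. The rule name, pipeline name, Region, account ID, and IAM role are placeholders for this walkthrough, not values prescribed by the solution:

# Rule that fires when a branch in the CodeCommit repository is created or updated
aws events put-rule \
    --name sample-application-trigger \
    --event-pattern '{
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
        "detail": {
            "event": ["referenceCreated", "referenceUpdated"],
            "referenceType": ["branch"],
            "referenceName": ["main"]
        }
    }'

# Point the rule at the pipeline in the Tooling account (ARNs are illustrative)
aws events put-targets \
    --rule sample-application-trigger \
    --targets 'Id=1,Arn=arn:aws:codepipeline:us-east-1:111111111111:sample-application-pipeline,RoleArn=arn:aws:iam::111111111111:role/start-pipeline-role'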

For demonstration, I will use a sample application that comprises an AWS Lambda function returning a simple JSON object when invoked.

Prerequisites

Before diving in, ensure you have the following prerequisites:

  1. Three AWS accounts:
    • Tooling – The account hosting the CodeCommit repository, artifact store, and pipeline orchestration.
    • Tenant 1 – The dedicated account for the first tenant, named Pegasus.
    • Tenant 2 – The dedicated account for the second tenant, named Dragon.
  2. Install and configure the AWS CLI, authenticating with either an AWS Identity and Access Management (IAM) user or an AWS Security Token Service (AWS STS) token. A quick verification command is shown after this list.
  3. Install Git.
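
Before proceeding, you can confirm which account and principal your CLI credentials resolve to. The snippet below is a simple check; the optional named profiles for the tenant accounts (pegasus and dragon) are illustrative only:

# Verify the identity behind the current AWS CLI credentials (Tooling account)
aws sts get-caller-identity

# Optionally configure named profiles for each tenant account
aws configure --profile pegasus
aws configure --profile dragon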

Setting Up the Git Repository

The first step entails configuring your Git repository:

  • Create a CodeCommit repository to host the source code (a sample CLI command follows this list).
  • The CI/CD pipeline will automatically trigger every time new code is pushed to this repository.
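
As an illustration, the repository could be created from the CLI as follows; the repository name sample-application is an assumption reused in later examples, not a required value:

# Create the source repository in the Tooling account
aws codecommit create-repository \
    --repository-name sample-application \
    --repository-description "Sample single-tenant application"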

Ensure Git is set up to use IAM credentials to access AWS CodeCommit over HTTPS by executing the following commands from the terminal:

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

Clone the newly created repository locally and add two files to the root directory: index.js and application.yaml.
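
Assuming a repository named sample-application in the us-east-1 Region (both placeholders), the clone step looks like this:

# Clone the CodeCommit repository over HTTPS using the credential helper configured above
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/sample-application
cd sample-application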

The index.js file contains the JavaScript code for the Lambda function that represents the sample application. This function returns a JSON response object with statusCode: 200 and a body of "Hello!\n". Here’s the code:

exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        body: `Hello!\n`,
    };
    return response;
};

The application.yaml file defines the infrastructure using AWS CloudFormation, leveraging the AWS Serverless Application Model (AWS SAM) for simplified resource creation. Below is the sample code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Sample Application.

Parameters:
    S3Bucket:
        Type: String
    S3Key:
        Type: String
    ApplicationName:
        Type: String
        
Resources:
    SampleApplication:
        Type: 'AWS::Serverless::Function'
        Properties:
            FunctionName: !Ref ApplicationName
            Handler: index.handler
            Runtime: nodejs12.x
            CodeUri:
                Bucket: !Ref S3Bucket
                Key: !Ref S3Key
            Description: Hello Lambda.
            MemorySize: 128
            Timeout: 10 

Push both files to the remote Git repository.
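
For example, assuming the branch you push becomes the repository's default branch and is named main (adjust the branch name if yours differs):

git add index.js application.yaml
git commit -m "Add sample application and CloudFormation template"
git push origin main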

Creating the Artifact Store Encryption Key

By default, CodePipeline secures release artifacts with server-side encryption using an AWS managed key in AWS Key Management Service (AWS KMS). Because both the Pegasus and Dragon accounts need to decrypt these release artifacts, you must instead create a customer-managed customer master key (CMK) in the Tooling account.
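
A minimal sketch of creating the key from the CLI is shown below; the description and alias are illustrative, and the key policy must additionally grant the Pegasus and Dragon account principals permission to use the key (for example, kms:Decrypt and kms:DescribeKey) before cross-account deployments will succeed:

# Create a customer-managed CMK in the Tooling account
aws kms create-key --description "Artifact store key for the cross-account pipeline"

# Attach a friendly alias, using the KeyId returned by the previous command
aws kms create-alias \
    --alias-name alias/cross-account-artifact-key \
    --target-key-id <key-id-from-create-key-output>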

By taking these steps, you can streamline the deployment process and ensure that your single-tenant SaaS solution is both effective and secure.

