Enhanced AWS Support for Commercially-Supported Docker Applications


Amazon Web Services (AWS) has become an ideal companion for the flexibility provided by Docker containers, as evidenced by the growing popularity of Amazon EC2 and Amazon ECS for deploying Docker applications. As organizations increasingly transition their applications from development to testing and production environments, they are seeking improved support and additional product features while utilizing the AWS cloud for their Docker container needs. At DockerCon 2015 in San Francisco, we shared insights from both teams on enhancing Docker support within AWS, and today we are excited to announce the introduction of Docker Trusted Registry (DTR) within the AWS Marketplace. This new offering enables customers to seamlessly transition from building Docker applications locally on a developer’s machine to deploying them in their production Amazon Virtual Private Cloud (Amazon VPC) with just a few simple commands.

Much like Docker Hub, Docker Trusted Registry provides organizations with a solution for storing and managing Docker containers. The key difference is that DTR can be deployed as an EC2 instance, granting organizations complete control over accessibility and management of the registry within their environment.

Configuring Your AWS Environment for Docker Trusted Registry

Running Docker Trusted Registry allows organizations to implement customized access control for their Docker images. This access control framework includes features such as support for customer SSL certificates, LDAP integration to restrict access to specific users, and the use of Amazon VPC’s network access control features.

When setting up DTR, you must first decide whether your registry instance should be accessible from the Internet or limited to your VPC. If the instance needs to be publicly accessible, it can be launched in a public subnet. However, it is crucial to configure the security group to restrict access to trusted IP ranges for ports 22 (SSH), 80 (HTTP), and 443 (HTTPS). Remember that when launching DTR from the AWS Marketplace, the default security group is open to the world, so it is your responsibility to limit access appropriately.
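As a rough sketch of that lockdown with the AWS CLI, the following replaces the world-open rules with a trusted range on those three ports. The security group ID and CIDR block here are hypothetical placeholders; substitute your own values.

```shell
#!/bin/bash
# Hypothetical values -- substitute your own security group ID and
# trusted address range.
DTR_SG_ID="sg-0123456789abcdef0"
TRUSTED_CIDR="203.0.113.0/24"

for PORT in 22 80 443; do
  # Remove the Marketplace default world-open rule for this port...
  aws ec2 revoke-security-group-ingress \
    --group-id "$DTR_SG_ID" \
    --protocol tcp --port "$PORT" --cidr "0.0.0.0/0"
  # ...and allow only the trusted range instead.
  aws ec2 authorize-security-group-ingress \
    --group-id "$DTR_SG_ID" \
    --protocol tcp --port "$PORT" --cidr "$TRUSTED_CIDR"
done
```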

Alternatively, you can deploy your DTR instance in a private subnet to ensure that only resources within your network can access the registry. In this scenario, you will need either a bastion host or a VPN connection to manage the DTR instance via the web interface.

For added convenience, we recommend using an Amazon Route 53 private hosted zone with Docker Trusted Registry. This allows instances in your VPC to query the private hosted zone first, ensuring that access to your registry is restricted to within your VPC. You can designate a DNS record, such as dtr.mydomain.com, which will point to the IP address of your DTR instance.
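One way to create that record with the AWS CLI is sketched below. The hosted zone ID and the instance's private IP are placeholders you would replace with your own; the record name dtr.mydomain.com matches the example above.

```shell
#!/bin/bash
# Hypothetical placeholders -- use your private hosted zone's ID and
# the DTR instance's private IP address.
HOSTED_ZONE_ID="Z0EXAMPLEZONE"
DTR_RECORD="dtr.mydomain.com"
DTR_PRIVATE_IP="10.0.1.25"

# Build an UPSERT change batch for an A record pointing at the instance.
CHANGE_BATCH=$(cat <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "$DTR_RECORD",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "$DTR_PRIVATE_IP"}]
    }
  }]
}
EOF
)

aws route53 change-resource-record-sets \
  --hosted-zone-id "$HOSTED_ZONE_ID" \
  --change-batch "$CHANGE_BATCH"
```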

Since DTR utilizes Amazon S3 for backend storage of Docker images, it’s advisable to create an IAM role that enables your instance to securely communicate with S3. IAM roles are assigned to EC2 instances upon launch. You can configure the IAM policy as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_DTR_bucket",
                "arn:aws:s3:::my_DTR_bucket/*"
            ]
        }
    ]
}
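To wire that policy to an instance, you can create a role, attach the policy, and expose the role through an instance profile (the mechanism EC2 uses to hand roles to instances at launch). This is a sketch using the AWS CLI; the role, profile, and policy-file names are hypothetical, and it assumes the policy document above is saved locally as dtr-s3-policy.json.

```shell
#!/bin/bash
# Hypothetical names -- choose your own.
ROLE_NAME="dtr-s3-role"
PROFILE_NAME="dtr-s3-profile"

# Create a role that EC2 instances are allowed to assume.
aws iam create-role \
  --role-name "$ROLE_NAME" \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the S3 access policy shown above (saved as dtr-s3-policy.json).
aws iam put-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-name dtr-s3-access \
  --policy-document file://dtr-s3-policy.json

# EC2 consumes roles via instance profiles; create one and add the role,
# then select this profile when launching the DTR instance.
aws iam create-instance-profile --instance-profile-name "$PROFILE_NAME"
aws iam add-role-to-instance-profile \
  --instance-profile-name "$PROFILE_NAME" \
  --role-name "$ROLE_NAME"
```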

Once you’ve determined your VPC settings, access control, IAM role, and DNS record, you can proceed with setting up the DTR instance itself.

Setting Up Supported Docker Environments on AWS

To start, use the Docker Trusted Registry “pay as you go” AMI from the AWS Marketplace. This licensing model simplifies the deployment experience, and Docker offers a 30-day free trial of their software. You can find the details on the Docker Trusted Registry product page on the AWS Marketplace.

After launching the AMI, refer to the AWS and Docker Trusted Registry documentation to configure your DTR instance. When launching your instance, select an appropriate size; Docker recommends starting with an m3.large for initial testing deployments. As your environment scales, you can monitor resource usage through the Docker Trusted Registry web interface and adjust your instance size as needed.

Once your DTR instance is operational, you will need to launch Docker Engine instances (the commercially-supported version of Docker). AMIs for Docker Engine are also available in the AWS Marketplace.

If using a self-signed certificate, ensure your clients are configured to retrieve the certificate from the DTR instance. This can be achieved with a user data script containing the following commands:

#!/bin/bash
# Fetch the registry's certificate and add it to the system trust store.
# Paths shown are for Amazon Linux / RHEL-style systems; see the note
# below for other operating systems.
export DOMAIN_NAME=dtr.mydomain.com
openssl s_client -connect $DOMAIN_NAME:443 -showcerts </dev/null | openssl x509 -outform PEM | sudo tee /etc/pki/ca-trust/source/anchors/$DOMAIN_NAME.crt
sudo update-ca-trust
sudo service docker restart

Be sure to check the specific details for your operating system in the Docker documentation under “Installing Registry Certificates on Client Docker Daemons.”

Once your Docker Engine clients are operational, you can begin interacting with the DTR instance, enabling you to push and pull images to and from your private registry from any EC2 instance within your AWS VPC, a peer VPC, or even a remote location via VPN.

Streamlining Development to Deployment

Continuous Integration and Delivery (CI/CD) processes are crucial for many teams, and Docker supports various CI/CD tools, including AWS CodePipeline and AWS CodeDeploy. Docker Trusted Registry can serve as the backbone for these automated workflows, allowing seamless transitions from a developer’s desktop through integration testing to staging or QA, and ultimately to production deployment.

To illustrate a basic Docker image workflow with DTR, we can initiate a client machine configured to interact with the DTR instance. First, pull a public Jenkins instance using docker pull jenkins. Next, tag the image with docker tag jenkins dtr.mydomain.com/my-jenkins, and finally push the image to your local DTR instance with docker push dtr.mydomain.com/my-jenkins.
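The three steps above can be collected into a short script. The registry host dtr.mydomain.com and the image name my-jenkins are the values used in the example above; substitute your own.

```shell
#!/bin/bash
# Registry DNS name chosen earlier (see the Route 53 section).
DTR_HOST="dtr.mydomain.com"

# Pull the public Jenkins image, re-tag it for the private registry,
# and push it to DTR.
docker pull jenkins
docker tag jenkins "$DTR_HOST/my-jenkins"
docker push "$DTR_HOST/my-jenkins"
```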

With this setup, teams can build a robust and scalable CI pipeline using Docker and Jenkins on AWS, facilitating the movement of code from developer laptops straight into an integration testing cluster on AWS. Code pushed to repositories such as GitHub can automatically trigger container builds through Jenkins, streamlining the development process significantly.


