Introduction
The advent of Free Ad-supported Streaming Television (FAST) channels has invigorated the repurposing and distribution of archival content, such as classic films and television series, across contemporary platforms and devices. Much of this content is only available in lower-resolution, standard definition (SD) formats and requires enhancement to satisfy viewer expectations. Traditionally, low-complexity methods like Lanczos and bicubic algorithms are employed for upscaling; however, these techniques frequently introduce image artifacts, such as blurring and pixelation.
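For reference, a conventional upscale of this kind can be produced with FFmpeg's scale filter. The file names, the 1080p target, and the codec choice below are placeholders, not values prescribed by the solution:

```shell
# Upscale an SD source to 1080p with the Lanczos scaler
# (use flags=bicubic for bicubic). File names are placeholders.
ffmpeg -i input-sd.ts -vf "scale=1920:1080:flags=lanczos" \
  -c:v libx264 -c:a copy output-hd.ts
```

It is exactly this kind of single-filter upscale that tends to produce the blurring and pixelation artifacts mentioned above.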
Deep learning (DL) methods, including the Super-Resolution Convolutional Neural Network (SRCNN) and Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR), deliver impressive results on objective quality metrics such as VMAF, SSIM, and PSNR. Nonetheless, these methods can be computationally intensive, making them less viable for channels with constrained budgets, which is often the case for FAST offerings. Amazon Web Services (AWS) and Intel offer a cost-effective video super-resolution solution that combines AWS Batch (for processing video assets) with the Intel Library for Video Super Resolution (VSR), striking a balance between quality and performance in real-world scenarios. In this blog post, we detail a step-by-step implementation using an AWS CloudFormation template.
Solution
To implement the Intel Library for Video Super Resolution, which is based on the enhanced RAISR algorithm, specific Amazon EC2 instance types such as c5.2xlarge, c6i.2xlarge, and c7i.2xlarge are required. AWS Batch runs the compute jobs and automates the entire pipeline, eliminating the need to manage the underlying infrastructure, including instance start and stop operations.
The primary components of the solution include:
- Creating a compute environment in AWS Batch, defining CPU specifications and the allowed EC2 instance types.
- Creating a job queue associated with that compute environment. Each job submitted to this queue runs on the designated EC2 instances.
- Creating a job definition. At this stage, a container image must be registered in the Amazon Elastic Container Registry (Amazon ECR); more details on building the Docker image can be found here. This container bundles the Intel Library for VSR, the open-source FFmpeg tool, and the AWS Command Line Interface (AWS CLI) for making API calls to S3 buckets. Once the job definition is in place (with the image registered in Amazon ECR), job submissions can commence.
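For reference, an equivalent job definition could also be registered from the AWS CLI. The image URI, resource values, and job definition name below are illustrative assumptions, not the values the CloudFormation template actually uses:

```shell
# Register a Batch job definition pointing at the VSR container image in ECR.
# The image URI, name, and vCPU/memory values are placeholders.
aws batch register-job-definition \
  --job-definition-name vsr-jobDefinition \
  --type container \
  --container-properties '{
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vsr:latest",
    "vcpus": 4,
    "memory": 4000,
    "command": ["/bin/sh", "main.sh"]
  }'
```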
The diagram below illustrates the general architecture as previously described:
Implementation
A CloudFormation template is available in this GitHub repository. Follow these steps to deploy the proposed solution:
- Download the YAML file from the GitHub repository.
- Access CloudFormation in the AWS Console to create a new stack using template.yml.
The template allows for the definition of the following parameters:
- Memory: Amount of memory tied to the job definition. This value can be adjusted to the requirements of the super-resolution task. For instance, a 15-minute 1080p AVC 30 fps video might need 4000 MB of memory and 4 vCPUs.
- Subnet: AWS Batch deploys the appropriate EC2 instance types (c5.2xlarge, c6i.2xlarge, and c7i.2xlarge) in a customer-selected subnet with Internet access.
- VPCName: The existing virtual private cloud (VPC) where the selected subnet is located.
- VSRImage: While this field uses a public image, users can create their own image and input the URL here. Instructions for creating a custom image can be found here.
- VCPU: The virtual CPU (vCPU) linked to the job definition, which can also be modified.
The next step is to create a CloudFormation stack using the defined parameters.
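Equivalently, the stack can be created from the AWS CLI. The stack name and parameter values below are illustrative; CAPABILITY_IAM is an assumption based on the template creating IAM roles for Batch:

```shell
# Create the stack from the downloaded template with illustrative parameters.
# Stack name, subnet, and VPC IDs are placeholders; substitute your own.
aws cloudformation create-stack \
  --stack-name vsr-batch \
  --template-body file://template.yml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=Memory,ParameterValue=4000 \
    ParameterKey=VCPU,ParameterValue=4 \
    ParameterKey=Subnet,ParameterValue=subnet-0123456789abcdef0 \
    ParameterKey=VPCName,ParameterValue=vpc-0123456789abcdef0
```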
Once the stack is successfully created, two new Amazon S3 buckets, vsr-input and vsr-output (each with a generated suffix), should be visible.
Upload an SD file to the vsr-input-xxxx-{region-name} bucket.
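The upload can be done from the console or from the CLI; the bucket suffix and region below are placeholders for your stack's actual bucket name:

```shell
# Copy an SD source file into the input bucket created by the stack.
# Replace the xxxx suffix and region with your stack's actual values.
aws s3 cp input-low-resolution.ts s3://vsr-input-xxxx-us-east-1/
```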
Navigate to the Batch section in the AWS console, click to open the dashboard, and confirm that a new queue (queue-vsr) and compute environment (VideoSuperResolution) have been established.
Within the Batch dashboard, click on Jobs (left-side menu). Then select Submit a new job, choosing the appropriate job definition (vsr-jobDefinition-xxxx) and queue (queue-vsr).
On the next screen, click Load from job definition and modify the names of the input and output files. For example, if a user uploads a file named input-low-resolution.ts and wants the output super-resolution file to be named output-high-resolution.ts, the command array to enter would be:
["/bin/sh","main.sh","s3://vsr-input-106171535299-us-east-1-f37dd060","input-low-resolution.ts","s3://vsr-output-106171535299-us-east-1-f37dd060","output-high-resolution.ts"]
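For other file names, the same array can be generated with a small shell snippet; the bucket names below are the ones from the example above:

```shell
# Build the Batch command array for an arbitrary input/output file pair.
INPUT_BUCKET="s3://vsr-input-106171535299-us-east-1-f37dd060"
OUTPUT_BUCKET="s3://vsr-output-106171535299-us-east-1-f37dd060"
INPUT_FILE="input-low-resolution.ts"
OUTPUT_FILE="output-high-resolution.ts"

# Interpolate the four values into the JSON array Batch expects.
BATCH_CMD=$(printf '["/bin/sh","main.sh","%s","%s","%s","%s"]' \
  "$INPUT_BUCKET" "$INPUT_FILE" "$OUTPUT_BUCKET" "$OUTPUT_FILE")
echo "$BATCH_CMD"
```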
Review and submit the job. Wait until the status transitions from Submitted through Runnable and Running to Succeeded. The AWS console also displays additional details, such as the number of job attempts.
After the job’s execution, check the output Amazon S3 bucket to verify the super-resolution file has been created and uploaded to the vsr-output bucket automatically.
Comparing Visual Quality
To evaluate subjective quality, the open-source tool compare-video can be employed to assess the differences between the original and super-resolution videos. Additionally, an objective evaluation can be conducted using VMAF. For this objective assessment, VMAF utilizes traditional upscaling methods like Lanczos or Bicubic to align both resolutions before executing a frame-by-frame comparison.
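Assuming the VSR output is 1080p, a VMAF measurement along those lines can be run with an FFmpeg build that includes libvmaf. The resolution and file names below are placeholders:

```shell
# Upscale the SD original with Lanczos to match the VSR output's resolution,
# then compute VMAF frame by frame (requires FFmpeg built with libvmaf).
ffmpeg -i output-high-resolution.ts -i input-low-resolution.ts \
  -filter_complex "[1:v]scale=1920:1080:flags=lanczos[ref];[0:v][ref]libvmaf" \
  -f null -
```

The first input to the libvmaf filter is the video under evaluation and the second is the reference, so the upscaled SD original is used as the reference here.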
Cleanup
To remove the resources created for this solution, navigate to the CloudFormation console and delete the stack. Because CloudFormation cannot delete non-empty S3 buckets, empty the vsr-input and vsr-output buckets first.