Accelerating Video Analytics with the NVIDIA DeepStream SDK on Amazon EC2 G4 Instances

Contributed by: Alex Johnson, Business Development Manager, Accelerated Computing, AWS; and Sarah White, Solution Architect, NVIDIA Corporation

Amazon consistently enhances its GPU offerings, aiming to demonstrate how the latest technological advancements from its partners elevate platform performance. A notable outcome of Amazon’s partnership with NVIDIA is the introduction of the G4 instance type, a significant upgrade from the G2 and G3 series. The G4 features a Turing T4 GPU equipped with 16GB of GPU memory, operating under the Nitro hypervisor with configurations ranging from one GPU to four GPUs per node. A bare metal option is expected to be available soon. Additionally, it provides up to 1.8 TB of local non-volatile memory express (NVMe) storage and offers network bandwidth of up to 100 Gbps.

NVIDIA’s Turing T4 represents the latest in GPU technology, accelerating machine learning (ML) training and inferencing, video transcoding, and other demanding compute workloads. With Tensor Cores for ML and dedicated NVENC/NVDEC engines for video, users can now execute a wide range of accelerated compute tasks on a single instance family. NVIDIA has also established a robust software layer through SDKs and container solutions available via the NVIDIA GPU Cloud (NGC) container registry. The combination of these accelerated components with the scalability of AWS creates a formidable framework for high-performance pipelines.

NVIDIA DeepStream SDK

This discussion centers around one specific NVIDIA SDK: DeepStream. The DeepStream SDK is designed to provide a comprehensive video processing and ML inferencing analytics system, utilizing the Video Codec API and TensorRT as core elements.

DeepStream also supports an edge-cloud strategy, enabling the streaming of perception data and other sensor metadata into AWS for additional processing. For instance, it allows the wide-area collection of multiple camera streams and metadata through the Amazon Kinesis platform. Another key application of DeepStream involves compiling model artifacts generated from distributed training in AWS with Amazon SageMaker Neo, which can then be utilized on the edge or within an Amazon S3 video data lake. If you’re eager to learn more about these solutions, consider reaching out to your AWS account team.
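
As a concrete sketch of that Kinesis path, the command below creates a Kinesis Video stream that a DeepStream pipeline could publish into through the kvssink GStreamer element from the Amazon Kinesis Video Streams Producer SDK. The stream name deepstream-demo and the 24-hour retention period are hypothetical placeholders, not values from the SDK:

aws kinesisvideo create-stream --region us-east-1 --stream-name deepstream-demo --data-retention-in-hours 24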

Deployment

To deploy, set up programmatic access to AWS so that you can launch a g4dn.2xlarge instance running Ubuntu 18.04 in a subnet that allows SSH access. The following components are necessary to configure the instance for executing DeepStream SDK workflows (a quick sanity check for each is sketched after this list):

  • An Ubuntu 18.04 Instance with:
    • NVIDIA Turing T4 Driver (418.67 or later)
    • CUDA 10.1
    • nvidia-docker2
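
Once the instance is up, you can verify these components along the following lines (this assumes the CUDA toolkit's bin directory is on your PATH; adjust if not):

nvidia-smi            # driver version and T4 visibility
nvcc --version        # CUDA toolkit version, expect 10.1
docker info | grep -i runtimes   # the nvidia runtime should be listed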

Alternatively, you can opt for the NVIDIA Deep Learning AMI available in the AWS Marketplace, which comes pre-installed with the latest drivers and SDKs.
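
If you would rather locate that AMI programmatically, something like the following should surface current listings (the name filter is an assumption; adjust it to match the Marketplace listing you want):

aws ec2 describe-images --region us-east-1 --owners aws-marketplace --filters 'Name=name,Values=*NVIDIA Deep Learning AMI*' --query 'Images[].[ImageId,Name]' --output table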

aws ec2 run-instances --region us-east-1 --image-id ami-026c8acd92718196b --instance-type g4dn.2xlarge --key-name <key-name> --subnet-id <subnet-id> --security-group-ids <security-group-ids> --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=75}'
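
run-instances prints the new instance's ID; if it's convenient, you can block until the instance is running and then pull its public IP for SSH (the <instance-id> placeholder follows the same convention as above):

aws ec2 wait instance-running --region us-east-1 --instance-ids <instance-id>
aws ec2 describe-instances --region us-east-1 --instance-ids <instance-id> --query 'Reservations[0].Instances[0].PublicIpAddress' --output text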

Once the instance is operational, SSH into it and fetch the latest DeepStream SDK Docker image from the NGC container registry:

docker pull nvcr.io/nvidia/deepstream:4.0-19.07
nvidia-docker run -it --rm -v /usr/lib/x86_64-linux-gnu/libnvidia-encode.so:/usr/lib/x86_64-linux-gnu/libnvidia-encode.so -v /tmp/.X11-unix:/tmp/.X11-unix -p 8554:8554 -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:4.0-19.07
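
Once inside the container, it's worth re-running nvidia-smi; the NVIDIA runtime injects it, so the T4 should be listed before you launch any samples:

nvidia-smi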

If your instance is operating within a full X environment, you can transfer the authentication and display settings to the container for real-time results. However, for this example, simply execute the workload via the shell. Navigate to the directory /root/deepstream_sdk_v4.0_x86_64/samples/configs/deepstream-app/.

The configuration package includes a variety of files, such as:

  • source30_1080p_dec_infer-resnet_tiled_display_int8.txt: Demonstrates 30 stream decodes with primary inferencing.
  • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt: Shows four stream decodes with primary inferencing, object tracking, and three different secondary classifiers.
  • source1_usb_dec_infer_resnet_int8.txt: Illustrates one USB camera as input.

You can modify the configuration file source30_1080p_dec_infer-resnet_tiled_display_int8.txt to disable [sink0] and enable [sink1] for file output, then save the changes:

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

Then run:

deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt

As it runs, the application prints performance data for the inferencing pipeline, including the frames per second achieved for each stream.
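
Because [sink1] writes out.mp4 inside the container, copy it back to the host to review the annotated video. A minimal sketch, run from a second shell on the host and assuming deepstream-app was launched from the config directory noted earlier:

docker ps   # note the ID of the running DeepStream container
docker cp <container-id>:/root/deepstream_sdk_v4.0_x86_64/samples/configs/deepstream-app/out.mp4 .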

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source. To return to the tiled display, right-click anywhere on the window.
