As our planet grapples with a severe extinction crisis, a recent UN report reveals that over one million species are at risk of disappearing. The primary drivers of this crisis include habitat loss, poaching, and the presence of invasive species. Numerous wildlife conservation organizations, researchers, volunteers, and anti-poaching teams are diligently working to mitigate this pressing issue. Access to precise and timely information about endangered species in their natural habitats significantly enhances the efforts of wildlife conservationists.
Wildlife scientists and field personnel utilize infrared-triggered cameras, known as camera traps, strategically placed in forests to capture images of wildlife. However, the process of manually reviewing these images is labor-intensive and time-consuming.
In this article, we present a solution that employs Amazon Rekognition Custom Labels and motion sensor camera traps to streamline the identification of endangered species and facilitate their study. Amazon Rekognition Custom Labels is a fully managed computer vision service that allows developers to create custom models tailored to their specific needs for classifying and identifying objects in images. We will detail how to identify endangered animal species from images gathered by camera traps, draw insights about their population counts, and monitor human activity in their vicinity. This information will empower conservationists to make informed decisions aimed at protecting these species.
Overview of the Solution
The architecture of the solution is depicted in the diagram below. This approach leverages various AI services, serverless technologies, and managed services to create a scalable and cost-effective framework:
- Amazon Athena: A serverless interactive query service that simplifies data analysis in Amazon S3 using standard SQL.
- Amazon CloudWatch: A monitoring service that aggregates operational data, including logs, metrics, and events.
- Amazon DynamoDB: A key-value and document database that provides single-digit millisecond performance at any scale.
- AWS Lambda: A serverless compute service that executes code in response to events such as data changes, system state alterations, or user actions.
- Amazon QuickSight: A serverless, machine learning-powered business intelligence service that delivers insights, interactive dashboards, and robust analytics.
- Amazon Rekognition: Utilizes machine learning to recognize objects, people, text, scenes, and activities in images and videos, while also identifying inappropriate content.
- Amazon Rekognition Custom Labels: Employs AutoML to assist in training custom models for identifying specific objects and scenes in images.
- Amazon Simple Queue Service (SQS): A fully managed message queue service that enables decoupling and scaling of microservices, distributed systems, and serverless applications.
- Amazon Simple Storage Service (S3): An object storage service that holds the camera trap images, allowing central management with precise access controls.
The high-level steps in this solution are as follows:
- Train and develop a custom model using Rekognition Custom Labels to identify endangered species in the area, specifically rhinoceroses in this case.
- Images captured by the motion sensor camera traps are uploaded to an S3 bucket, triggering an event for each uploaded image.
- A Lambda function is activated for every event published, retrieving the image from the S3 bucket and sending it to the custom model to detect the endangered animal.
- The Lambda function calls the Amazon Rekognition API to identify animals in the image.
- If a rhinoceros is detected, the function updates the DynamoDB table with the animal count, the date of capture, and other useful metadata extracted from the image’s EXIF header.
- QuickSight visualizes the animal count and location data collected in the DynamoDB table, allowing conservation organizations to track population changes over time. By reviewing the dashboards regularly, conservationists can identify patterns and potential causes, such as disease, climate change, or poaching, that may be affecting these populations, and take proactive measures to address them.
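The DynamoDB update described above can be sketched as a small helper that builds the item from the model’s detection results. The table name, attribute names, and the `Rhino` label are illustrative assumptions, not part of the original solution:

```javascript
// Illustrative sketch: builds the DynamoDB item recorded for each detection.
// Table name, attribute names, and the 'Rhino' label are assumptions.
function buildSightingItem(photoKey, labels, captureDate, minConfidence) {
  // Count rhino detections at or above the confidence threshold.
  const rhinoCount = labels.filter(
    (l) => l.Name === 'Rhino' && l.Confidence >= minConfidence
  ).length;
  return {
    TableName: 'rhino-sightings',           // hypothetical table
    Item: {
      id: { S: photoKey },                  // image key as the item key
      animalCount: { N: String(rhinoCount) },
      captureDate: { S: captureDate }       // from the EXIF header
    }
  };
}

const params = buildSightingItem(
  'camera-07/IMG_0042.JPG',
  [{ Name: 'Rhino', Confidence: 98.2 }, { Name: 'Rhino', Confidence: 41.0 }],
  '2023-04-01T10:30:00Z',
  90
);
// Only the 98.2% detection clears the 90% threshold, so animalCount is '1'.
```

The item would then be written with a standard `putItem` call from the Lambda function.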
Prerequisites
A robust training set is essential for building an effective model with Rekognition Custom Labels. In this case, we utilized images from the AWS Marketplace (Animals & Wildlife Data Set from Shutterstock) and Kaggle to develop the model.
Implementing the Solution
Our workflow encompasses the following steps:
- Train a custom model to classify endangered species (rhinoceros in our example) using the AutoML capability of Rekognition Custom Labels.
- Upload images captured by the camera traps to a designated S3 bucket.
- Configure event notifications on the S3 bucket so that a message is sent to a designated SQS queue whenever new objects are added.
- Grant the S3 bucket permission to send messages to the queue in the SQS queue’s access policy.
- Configure a Lambda trigger for the SQS queue so that the Lambda function is invoked for each new message.
- Grant the Lambda function’s execution role the permissions it needs to read from the SQS queue.
- Set up environment variables for use in the code.
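The queue access policy granted in the steps above typically follows the standard pattern for allowing S3 to publish event notifications to SQS; the account ID, queue name, and bucket name below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:camera-trap-queue",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::camera-trap-bucket" }
      }
    }
  ]
}
```

The `aws:SourceArn` condition restricts the queue to notifications originating from the intended bucket.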
Lambda Function Code
Upon receiving a notification from the SQS queue, the Lambda function performs the following tasks:
```javascript
// Assumes the AWS SDK for JavaScript v2 and environment variables
// REGION, REK_CUSTOMMODEL, and MIN_CONFIDENCE configured on the function.
const AWS = require('aws-sdk');

const REGION = process.env.REGION;
const REK_CUSTOMMODEL = process.env.REK_CUSTOMMODEL;
const MIN_CONFIDENCE = parseFloat(process.env.MIN_CONFIDENCE);

exports.handler = async (event) => {
  const id = AWS.util.uuid.v4(); // unique ID for the DynamoDB item

  // The Lambda function is triggered by SQS, so the SQS message body
  // carries the original S3 event notification as a JSON string.
  const s3Event = JSON.parse(event.Records[0].body);
  const bucket = s3Event.Records[0].s3.bucket.name;
  // Object keys arrive URL-encoded, with spaces encoded as '+'.
  const photo = decodeURIComponent(
    s3Event.Records[0].s3.object.key.replace(/\+/g, ' ')
  );

  const client = new AWS.Rekognition({ region: REGION });
  const paramsCustomLabel = {
    Image: {
      S3Object: {
        Bucket: bucket,
        Name: photo
      }
    },
    ProjectVersionArn: REK_CUSTOMMODEL,
    MinConfidence: MIN_CONFIDENCE
  };

  const response = await client.detectCustomLabels(paramsCustomLabel).promise();
  console.log('Rekognition customLabels response = ', response);
  return response;
};
```
- Fetch the EXIF tags from the image to obtain the date of capture and other relevant metadata, using dependencies such as exif-reader and sharp.
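The EXIF step can be sketched as follows. The sharp and exif-reader calls are shown as they are commonly used (the exact field name and type of the timestamp vary by exif-reader version), and the date-conversion helper is a hypothetical addition, since EXIF stores timestamps as `YYYY:MM:DD HH:MM:SS`:

```javascript
// Hypothetical helper: convert an EXIF timestamp ("YYYY:MM:DD HH:MM:SS")
// into an ISO-8601 string suitable for storing in DynamoDB.
function exifDateToIso(exifDate) {
  const [date, time] = exifDate.split(' ');
  return date.replace(/:/g, '-') + 'T' + time;
}

// Sketch of extracting the capture date with sharp and exif-reader,
// assuming both packages are bundled with the Lambda function:
//
//   const sharp = require('sharp');
//   const exifReader = require('exif-reader');
//   const { exif } = await sharp(imageBuffer).metadata(); // raw EXIF buffer
//   const tags = exifReader(exif);
//   const captureDate = exifDateToIso(tags.exif.DateTimeOriginal);

const iso = exifDateToIso('2023:04:01 10:30:00');
// iso === '2023-04-01T10:30:00'
```

The resulting ISO timestamp is what the function records as the date of capture in DynamoDB.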