The rapid advancement of generative AI presents both exciting opportunities and significant challenges. While longstanding issues such as bias and explainability remain, new concerns specific to foundation models (FMs) have emerged, including hallucination and toxicity. At AWS, we are dedicated to fostering generative AI responsibly, emphasizing a people-centric approach that prioritizes education, science, and customer collaboration to integrate responsible AI throughout the entire lifecycle.
In the past year, we’ve rolled out important features in our generative AI applications and models, such as integrated security scanning in Amazon CodeWhisperer, training mechanisms in Amazon Titan to detect and block harmful content, and enhanced data privacy measures in Amazon Bedrock. Our commitment to safe and transparent generative AI includes partnerships with the global community and policymakers, as evidenced by our support of the White House Voluntary AI commitments and the AI Safety Summit in the UK. We continue to work closely with customers, leveraging tools like Amazon SageMaker Clarify and ML Governance with Amazon SageMaker to operationalize responsible AI effectively.
Announcing New Responsible AI Innovations
As generative AI expands into diverse industries and applications, it is vital to invest continuously in the responsible development of foundation models. Customers want their FMs to prioritize safety, fairness, and security to deploy AI responsibly. At this year’s AWS re:Invent, we are thrilled to unveil new capabilities that promote responsible generative AI innovation. These include built-in tools, customer protection measures, resources for transparency, and mechanisms to combat disinformation. Our goal is to equip customers with the knowledge to assess FMs against essential responsible AI criteria, such as toxicity and robustness, while introducing guardrails tailored to their specific use cases and policies.
Implementing Safeguards: Amazon Bedrock Guardrails
Safety is paramount when scaling generative AI. Organizations aim to ensure safe interactions between customers and generative AI applications, avoiding harmful language and aligning with company policies. The simplest way to achieve this is by establishing consistent safeguards across the organization. Yesterday, we announced the preview of Amazon Bedrock Guardrails, a new feature that facilitates the implementation of application-specific safeguards based on customer needs and responsible AI frameworks.
Guardrails provide a consistent way for FMs on Amazon Bedrock to handle undesirable content. Customers can apply them to large language models on Amazon Bedrock, as well as to fine-tuned models and Amazon Bedrock Agents. Customers specify a set of topics to avoid, and the service automatically detects and blocks queries and responses that fall into those restricted categories. Customers can also set content filter thresholds for categories such as hate speech, insults, sexualized language, and violence, tailoring the filtering to their requirements. For instance, an online banking application could be configured to decline investment advice while also limiting inappropriate content. Soon, customers will also be able to redact personally identifiable information (PII) from user inputs and FM responses, set profanity filters, and create custom word lists to enhance compliance and user protection. With Guardrails, innovation with generative AI can proceed swiftly while adhering to company policies.
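As a rough sketch of what the banking example above might look like in practice, the snippet below builds a guardrail configuration with one denied topic and per-category content filter strengths. The guardrail name, topic definition, and blocked-response messages are illustrative placeholders; the parameter shapes follow the Amazon Bedrock `CreateGuardrail` API as exposed through boto3, and the actual call (which requires AWS credentials and preview access) is left as a small helper.

```python
# Illustrative Amazon Bedrock Guardrails configuration for an online banking
# app: deny investment-advice topics and filter harmful content categories.
# Names, messages, and examples below are hypothetical placeholders.
guardrail_config = {
    "name": "banking-app-guardrail",  # hypothetical guardrail name
    "description": "Blocks investment advice and filters harmful content.",
    # Denied topics: queries and responses matching these are blocked.
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",
                "definition": (
                    "Recommendations about specific stocks, funds, "
                    "or other investment products."
                ),
                "examples": ["Which stocks should I buy this year?"],
                "type": "DENY",
            }
        ]
    },
    # Content filters with configurable strength per category.
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}


def create_guardrail(bedrock_client):
    """Create the guardrail using a boto3 'bedrock' client.

    Requires AWS credentials and access to the Guardrails preview.
    """
    return bedrock_client.create_guardrail(**guardrail_config)
```

Once created, the guardrail can be referenced by its identifier and version when invoking models or agents, so the same safeguards apply consistently across applications.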
Identifying the Right FM for Your Use Case: Model Evaluation in Amazon Bedrock
Today, organizations have various FM options available to power their generative AI applications. Striking the right balance between accuracy and performance is crucial, necessitating efficient comparisons of models based on key responsible AI metrics. Traditionally, organizations have spent days identifying benchmarks and setting up evaluation tools, requiring extensive expertise in data science. Furthermore, many subjective criteria—such as brand voice and relevance—necessitate human judgment, leading to time-consuming review processes.
Now available in preview, Model Evaluation on Amazon Bedrock assists customers in evaluating, comparing, and selecting the most suitable FMs for their specific use case, leveraging both automatic and human evaluations. In the Amazon Bedrock console, customers can choose which FMs to compare for tasks like question answering or content summarization. For automatic evaluations, they select predefined criteria and upload testing datasets. For subjective assessments, human-based evaluation workflows can be established with just a few clicks, utilizing either in-house teams or AWS-managed workforces. After setup, Amazon Bedrock automates evaluations and generates reports, offering insights into model performance across key safety and accuracy criteria.
This model evaluation feature extends beyond Amazon Bedrock; customers can also utilize it in Amazon SageMaker Clarify to assess FMs based on vital quality and responsibility metrics, such as accuracy and toxicity.
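To make the automatic-evaluation workflow concrete, here is a hedged sketch of the request an evaluation job might take through the boto3 `bedrock` client's `create_evaluation_job` operation. The job name, IAM role ARN, S3 URIs, dataset name, and model identifier are all placeholders, and the exact request shape may differ in the preview; the built-in metric names shown (accuracy, toxicity, robustness) reflect the criteria discussed above.

```python
# Hypothetical sketch of an automatic model-evaluation job for a
# summarization task. All ARNs, S3 URIs, and names are placeholders.
evaluation_job = {
    "jobName": "summarization-eval",  # hypothetical job name
    "roleArn": "arn:aws:iam::111122223333:role/BedrockEvalRole",  # placeholder
    "evaluationConfig": {
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "Summarization",
                    "dataset": {
                        "name": "CustomerDocs",  # placeholder dataset name
                        "datasetLocation": {
                            "s3Uri": "s3://my-bucket/eval/dataset.jsonl"
                        },
                    },
                    # Built-in responsible-AI and quality metrics to report.
                    "metricNames": [
                        "Builtin.Accuracy",
                        "Builtin.Toxicity",
                        "Builtin.Robustness",
                    ],
                }
            ]
        }
    },
    # The FM under evaluation; compare models by running one job per model.
    "inferenceConfig": {
        "models": [
            {"bedrockModel": {"modelIdentifier": "amazon.titan-text-express-v1"}}
        ]
    },
    "outputDataConfig": {"s3Uri": "s3://my-bucket/eval/results/"},
}


def start_evaluation(bedrock_client):
    """Submit the job with a boto3 'bedrock' client (requires AWS credentials).

    Amazon Bedrock then runs the evaluation and writes a report to the
    output S3 location.
    """
    return bedrock_client.create_evaluation_job(**evaluation_job)
```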
Combating Disinformation: Watermarking in Amazon Titan
We are excited to announce the preview of the Amazon Titan Image Generator, enabling customers to create high-quality images at scale. Throughout the model development process, we prioritized responsible AI, from selecting training data to implementing filtering capabilities for inappropriate inputs and outputs, and enhancing the diversity of model outputs. All images generated by Amazon Titan are embedded with an invisible watermark by default, designed to mitigate the spread of disinformation by providing a discreet way to identify AI-generated content. AWS stands among the first model providers to integrate such watermarks into image outputs, ensuring they are resilient to modifications.
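For readers who want a sense of how the model is called, the following is a minimal sketch of a text-to-image request to the Amazon Titan Image Generator through the boto3 `bedrock-runtime` client's `invoke_model` operation. The prompt and image dimensions are illustrative; note that the invisible watermark is applied to every generated image by default, so no extra parameter is needed to enable it.

```python
import base64
import json

# Illustrative request body for the Amazon Titan Image Generator.
# The prompt and dimensions are examples only; generated images carry
# an invisible watermark by default.
request_body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "a watercolor painting of a lighthouse at dawn"
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,
    },
}


def generate_images(bedrock_runtime):
    """Invoke the model with a boto3 'bedrock-runtime' client.

    Requires AWS credentials and model access; returns decoded image bytes.
    """
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-image-generator-v1",
        body=json.dumps(request_body),
        contentType="application/json",
        accept="application/json",
    )
    payload = json.loads(response["body"].read())
    return [base64.b64decode(img) for img in payload["images"]]
```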