How VideoAmp Leverages Amazon Bedrock for Enhanced Media Analytics

This post was co-written with Ryan Mitchell and Laura Green from VideoAmp. In it, we discuss how VideoAmp, a leader in media measurement, partnered with the AWS Generative AI Innovation Center (GenAIIC) to prototype the VideoAmp Natural Language (NL) Analytics Chatbot, a tool designed to extract insights from extensive media analytics data using Amazon Bedrock.

The AI-driven analytics solution consists of:

  • A natural language to SQL conversion pipeline featuring a conversational interface that processes complex queries related to VideoAmp’s media analytics data.
  • An automated tool for testing and evaluating the pipeline’s effectiveness.

Background on VideoAmp

VideoAmp stands at the forefront of measurement technology, enabling media agencies, brands, and publishers to accurately assess and optimize TV, streaming, and digital media. Its suite of measurement, planning, and optimization tools gives clients clear, actionable insights into audience behavior and performance attribution across platforms. Since its inception, VideoAmp has achieved 880% year-over-year growth, 98% coverage of the TV publisher landscape, and partnerships with more than 1,000 advertisers. Headquartered in Los Angeles and New York, VideoAmp operates several offices throughout the United States. To learn more, visit VideoAmp.

VideoAmp’s AI Transformation

VideoAmp has adopted AI to enhance its measurement and optimization functionalities. By integrating machine learning (ML) algorithms into its infrastructure, the company analyzes vast quantities of viewership data across traditional TV, streaming, and digital channels. This approach allows VideoAmp to deliver more precise audience insights, improve cross-platform measurement, and optimize advertising campaigns in real-time. By leveraging AI, VideoAmp provides advertisers with more effective targeting, superior attribution models, and a greater return on their advertising investments. This commitment to innovation positions VideoAmp as a frontrunner in the increasingly data-driven advertising sector.

To push their advancements further, VideoAmp is developing a groundbreaking analytics solution powered by generative AI that aims to deliver accessible business insights to their customers. Their goal for the beta product is to create a conversational AI assistant that utilizes large language models (LLMs), enabling data analysts and non-technical users—including content researchers and publishers—to conduct data analysis through natural language queries.

Use Case Overview

VideoAmp is undergoing a significant transformation by incorporating generative AI into its analytics processes. The company strives to change how customers—including publishers, media agencies, and brands—interact with and extract insights from VideoAmp’s extensive data repository through a conversational AI assistant interface.

Currently, data analysis is conducted manually by data scientists and analysts, requiring technical SQL skills and often leading to time-consuming processes for complex datasets. Recognizing the need for a more streamlined and user-friendly approach, VideoAmp collaborated with GenAIIC to develop an AI assistant that comprehends natural language queries, generates and executes SQL queries within VideoAmp’s data warehouse, and provides natural language summaries of the retrieved data. This assistant empowers non-technical users to easily access data-driven insights, significantly reducing the time needed for research and analysis for all users.

The project’s key success criteria include:

  • The ability to translate natural language inquiries into SQL statements, connect to VideoAmp’s database, execute these statements on performance metrics data, and generate natural language summaries.
  • A user interface (UI) that allows users to pose natural language questions and view outputs from the assistant, including generated SQL queries, reasoning for the SQL statements, retrieved data, and natural language summaries.
  • Support for conversational interactions to help users refine and filter their questions iteratively.
  • Low latency and cost-effectiveness.
  • An automated evaluation pipeline to assess the assistant’s output quality and accuracy.

Throughout the development process, the team faced several challenges, including:

  • Adapting LLMs to understand the domain-specific aspects of VideoAmp’s dataset, which featured intricate fields and metrics requiring complex queries for effective analysis.
  • Building an automated evaluation pipeline capable of identifying whether generated outputs matched ground truth data, despite variations in column aliasing, ordering, and metric calculations.
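The second challenge can be illustrated with a small result-set comparison that treats two query outputs as equivalent regardless of column aliases, column order, or row order. This is a minimal sketch of the idea, not VideoAmp's actual evaluation pipeline (which also has to account for variations in metric calculations); the function names are illustrative.

```python
def normalize_result(rows):
    """Normalize a query result (a list of row tuples) so that column
    aliasing and row/column ordering do not affect comparison.
    Columns are matched by their value multisets rather than by name."""
    if not rows:
        return ()
    # Transpose rows into columns, sort each column's values as strings,
    # then sort the columns themselves for a canonical form.
    columns = zip(*rows)
    return tuple(sorted(tuple(sorted(map(str, col))) for col in columns))

def results_match(generated, ground_truth):
    """Return True if two result sets contain the same data, ignoring
    column names, column order, and row order."""
    return normalize_result(generated) == normalize_result(ground_truth)
```

A real evaluation pipeline would layer additional checks on top of this, such as tolerance for floating-point rounding in computed metrics.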

Solution Overview

The GenAIIC team collaborated with VideoAmp to develop an AI assistant leveraging Anthropic’s Claude 3 LLMs via Amazon Bedrock. Amazon Bedrock was selected for its access to high-quality foundation models (FMs), including the Claude 3 series, through a unified API. This streamlined integration of suitable models for various components of the solution, such as SQL generation and data summarization.

Amazon Bedrock’s additional features—such as Prompt Management, native support for Retrieval Augmented Generation (RAG), structured data retrieval via Knowledge Bases, Guardrails, and fine-tuning—enable VideoAmp to rapidly enhance and deploy their analytics solution. Furthermore, Amazon Bedrock ensures robust security and compliance, allowing VideoAmp to confidently expand its AI analytics capabilities while maintaining data privacy and adhering to industry standards.

The solution operates through a data warehouse, supporting a variety of database connections, including Snowflake, SingleStore, PostgreSQL, Excel, and CSV files. The following diagram outlines the high-level workflow of the solution:

  1. The user visits the frontend application and poses a question in natural language.
  2. A Question Rewriter LLM component utilizes previous conversational context to enhance the question with additional details when necessary. This supports follow-up inquiries and refinements.
  3. A Text-to-SQL LLM component crafts a SQL query that aligns with the user’s question.
  4. The SQL query is executed within the data warehouse.
  5. A Data-to-Text LLM component summarizes the retrieved data for the user.

At each step, the rewritten question, generated SQL, rationale, and retrieved data are returned to the user.
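The five steps above can be sketched as a single pipeline function. This is an illustrative outline only: the prompts, model ID, and function names are assumptions, not VideoAmp's implementation. The `client` argument would typically be `boto3.client("bedrock-runtime")`, whose `converse` API is shown; `run_sql` stands in for the caller's data warehouse connection.

```python
import json

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # hypothetical model choice

def call_llm(client, prompt):
    """Invoke a Claude model through the Amazon Bedrock converse API."""
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

def answer_question(question, history, client, run_sql):
    """Run the rewrite -> text-to-SQL -> execute -> summarize workflow.

    `run_sql` is a caller-supplied function that executes a SQL string
    against the data warehouse and returns rows.
    """
    # Step 2: rewrite the question using prior conversational context
    rewritten = call_llm(
        client,
        f"Previous questions: {history}\n"
        f"Rewrite this question to be self-contained: {question}",
    )
    # Step 3: generate a SQL query for the rewritten question
    sql = call_llm(client, f"Write a SQL query answering: {rewritten}")
    # Step 4: execute the query in the data warehouse
    rows = run_sql(sql)
    # Step 5: summarize the retrieved data in natural language
    summary = call_llm(client, f"Summarize this data for the user: {json.dumps(rows)}")
    # Return all intermediate outputs so the UI can surface them
    return {"rewritten": rewritten, "sql": sql, "rows": rows, "summary": summary}
```

Returning every intermediate artifact, rather than only the final summary, is what lets the UI show users the generated SQL and its rationale alongside the answer.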

AI Assistant Workflow Details

In this section, we will detail the components of the AI assistant workflow further.

Rewriter

When the user submits a question, the current question and any previous questions from the session are sent to the Question Rewriter component, which uses Anthropic's Claude 3 Sonnet model. If needed, the LLM draws on context from those earlier questions to enrich the current question with missing details, keeping a similar tone and overall length.
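A rewriter prompt might be assembled along the following lines. The exact wording is an assumption for illustration; VideoAmp's actual prompt is not shown in this post.

```python
def build_rewriter_prompt(history, question):
    """Assemble a prompt asking the model to enrich a follow-up question
    with context from earlier turns while preserving tone and length.
    The instruction wording here is a hypothetical example."""
    turns = "\n".join(f"User: {q}" for q in history)
    return (
        "Previous questions in this session:\n"
        f"{turns}\n\n"
        "Rewrite the new question so it is fully self-contained, filling in "
        "any details implied by the earlier questions. Keep a similar tone "
        "and overall length.\n"
        f"New question: {question}"
    )
```

For example, given the history `["Show impressions by network for Q1"]` and the follow-up "Now just streaming", the rewritten question would carry the metric and time range forward so the Text-to-SQL component sees a complete query.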
