How OCX Cognition Streamlined ML Model Development Time Using AWS Step Functions and Amazon SageMaker

This article is co-authored by Chanci Turner (Head of Products at OCX Cognition) and Alex Morgan (Data Science Lead at Tech Innovations).

OCX Cognition, a startup based in the San Francisco Bay Area, has developed a SaaS product named Spectrum AI, which provides predictive CX analytics for enterprises. OCX collaborates closely with Tech Innovations, an AWS Advanced Tier Partner whose team is dedicated to human-centered software engineering, microservices, automation, IoT, and artificial intelligence.

The Spectrum AI platform integrates customer sentiment with operational data, leveraging machine learning to provide continuous insights into customer experience (CX). Built on AWS, the platform utilizes a comprehensive suite of tools and scalable computing resources to adapt to changing requirements.

In this article, we explore how OCX Cognition, with support from Tech Innovations and their AWS account team, enhanced customer experience and shortened time to value by automating and orchestrating machine learning functions that support Spectrum AI’s CX analytics. By utilizing AWS Step Functions, the AWS Step Functions Data Science SDK for Python, and Amazon SageMaker Experiments, OCX Cognition was able to reduce ML model development time from 6 weeks to just 2 weeks, and model update time from 4 days to near real time.

Background

The Spectrum AI platform must generate models tailored to hundreds of unique CX scores for each customer, processing data for tens of thousands of active accounts. As new experiences are gathered, the platform must continuously update these scores based on fresh data inputs. After generating new scores, OCX and Tech Innovations assess the impact of various operational metrics on predictions. Amazon SageMaker provides a fully managed service for building, training, and deploying ML models, and its web-based IDE, Amazon SageMaker Studio, enabled the OCX-Tech Innovations team to develop their solution with shared code libraries across multiple Jupyter notebooks.

The Challenge: Scaling the Solution for Multiple Customers

Although initial R&D efforts were successful, scaling the solution became challenging. The ML development process involved several steps, including feature engineering, model training, predictions, and analytics generation. The code was fragmented across multiple notebooks, requiring manual execution without any orchestration tools. Consequently, the OCX-Tech Innovations team needed 6 weeks for model development per customer since libraries were not reusable. An automated and scalable solution was essential, allowing for unique configurations tailored to each customer.

Solution Overview

To streamline the ML process, the OCX-Tech Innovations team collaborated with the AWS account team to create a custom declarative ML framework that eliminated repetitive coding tasks. This approach enabled the reuse of new libraries across multiple customers by configuring data for each client through YAML files.
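For illustration, a per-customer YAML file in such a declarative framework might look like the following. The keys and values here are hypothetical, not OCX's actual schema:

```yaml
# Hypothetical per-customer configuration (illustrative schema)
customer: acme-corp
data:
  s3_input: s3://spectrum-ai-data/acme-corp/raw/
  s3_output: s3://spectrum-ai-data/acme-corp/processed/
features:
  - account_tenure
  - ticket_volume
  - nps_response_rate
models:
  cx_scores:
    - name: renewal-likelihood
      algorithm: xgboost
      hyperparameters:
        max_depth: 6
        eta: 0.2
    - name: support-satisfaction
      algorithm: xgboost
training:
  instance_type: ml.m5.xlarge
  parallel_jobs: 10
```

Because the shared libraries read everything customer-specific from a file like this, onboarding a new customer becomes a configuration task rather than a coding task.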

Initially, high-level code is developed in Studio using Jupyter notebooks and then converted into Python (.py files). As a preparatory step, the team builds a Docker image following the SageMaker bring-your-own (BYO) container convention and stores it in Amazon Elastic Container Registry (Amazon ECR). Step Functions then orchestrates the execution of this code.
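As a sketch, a minimal BYO container for SageMaker jobs could be defined like this; the base image, package list, and entry-point path are assumptions for illustration, not OCX's actual build:

```dockerfile
# Hypothetical BYO container for SageMaker processing/training steps
FROM python:3.9-slim

RUN pip install --no-cache-dir pandas scikit-learn pyyaml

# SageMaker mounts job input/output under /opt/ml/ at runtime
COPY src/ /opt/program/
ENV PYTHONUNBUFFERED=TRUE

ENTRYPOINT ["python3", "/opt/program/run_step.py"]
```

Once built, the image is tagged and pushed to an Amazon ECR repository so that Step Functions can reference it when launching each SageMaker job.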

The AWS account team introduced the Step Functions Data Science SDK and SageMaker Experiments to automate feature engineering, model training, and deployment. The SDK was used to programmatically generate Step Functions state machines. The OCX-Tech Innovations team leveraged the Parallel and Map states within Step Functions to run numerous training and processing jobs concurrently, thereby reducing runtime. This was complemented by Experiments, which served as an analytics tool for tracking multiple ML candidates and hyperparameter tuning variations, enabling the OCX-Tech Innovations team to identify the best-performing models in real time.
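Workflows defined with the Data Science SDK are ultimately rendered as Amazon States Language (ASL). As a minimal, stdlib-only sketch of the fan-out pattern described above (the state names, job-name prefix, and input shape are hypothetical), a Map state that launches one SageMaker training job per CX score could be generated like this:

```python
import json

def build_map_state(job_name_prefix, max_concurrency=10):
    """Build an ASL Map state that runs one training job per CX score.

    ItemsPath expects the state input to contain a "cx_scores" list;
    each item carries a "score_name" used to name its training job.
    """
    return {
        "Type": "Map",
        "ItemsPath": "$.cx_scores",
        "MaxConcurrency": max_concurrency,  # cap on concurrent training jobs
        "Iterator": {
            "StartAt": "TrainModel",
            "States": {
                "TrainModel": {
                    "Type": "Task",
                    # .sync makes Step Functions wait for the job to finish
                    "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
                    "Parameters": {
                        "TrainingJobName.$": (
                            f"States.Format('{job_name_prefix}-{{}}', $.score_name)"
                        ),
                    },
                    "End": True,
                }
            },
        },
        "End": True,
    }

definition = {
    "StartAt": "TrainAllScores",
    "States": {"TrainAllScores": build_map_state("spectrum-train")},
}
print(json.dumps(definition, indent=2))
```

In practice the Data Science SDK builds an equivalent definition from Python step objects, so the team never hand-writes this JSON; the sketch just shows what the Map fan-out compiles down to.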

The following architecture diagram illustrates the MLOps pipeline established for the model creation lifecycle.
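SageMaker Experiments itself requires an AWS session, but the model-selection pattern it enables can be sketched in plain Python: log each candidate's hyperparameters and validation metric as a trial, then query for the best performer. This stand-in is illustrative only; the trial names, metric key, and values below are hypothetical:

```python
class ExperimentLog:
    """Minimal stand-in for the trial-tracking pattern used with
    SageMaker Experiments: log each candidate, then query the best."""

    def __init__(self, objective="validation:rmse", minimize=True):
        self.objective = objective
        self.minimize = minimize
        self.trials = []

    def log_trial(self, name, hyperparameters, metrics):
        self.trials.append(
            {"name": name, "hyperparameters": hyperparameters, "metrics": metrics}
        )

    def best_trial(self):
        # Lower is better for error metrics such as RMSE.
        sign = 1 if self.minimize else -1
        return min(self.trials, key=lambda t: sign * t["metrics"][self.objective])

log = ExperimentLog()
log.log_trial("xgb-depth4", {"max_depth": 4}, {"validation:rmse": 0.31})
log.log_trial("xgb-depth6", {"max_depth": 6}, {"validation:rmse": 0.27})
log.log_trial("xgb-depth8", {"max_depth": 8}, {"validation:rmse": 0.29})
print(log.best_trial()["name"])  # -> xgb-depth6
```

With the real service, the same comparison runs across every training job launched by the Map state, which is what lets the team spot the best-performing model per CX score as jobs complete.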

Results

Through this approach, OCX Cognition has significantly automated and accelerated its ML processing. By replacing labor-intensive manual tasks and repetitive development work, the cost per customer has decreased by over 60%. This automation enables OCX to scale its software business, tripling overall capacity and doubling the number of customers it can onboard simultaneously. The new solution also improves ML performance by 8% and shortens time to value by 63%. Customer onboarding and initial model generation timelines have been reduced from 6 weeks to 2 weeks, and once a customer is established, OCX continuously regenerates its CX scores as new data arrives, cutting model update time from 4 days to near real time.
