Amazon VGT2 Las Vegas: Enhancing Your SAP Business Processes with AWS


In today’s business landscape, organizations are increasingly seeking ways to unify their SAP and non-SAP data within a single data lake and analytical framework. This integration helps to eliminate data silos and unlock deeper business insights. Last year, we introduced the Amazon AppFlow SAP OData Connector, simplifying the process of extracting value from SAP data through AWS services. We previously discussed how to get started with AppFlow and SAP, as well as its numerous benefits.

Since that launch, we’ve received feedback from clients indicating a desire to leverage the AWS data platform for enriching their SAP business processes. They aim to enhance their data using advanced services like Artificial Intelligence (AI) and Machine Learning (ML), subsequently feeding this enriched data back into their SAP applications. In January, we addressed this need by enabling bi-directional data flows between SAP applications and AWS data lakes, analytics services, and AI/ML tools—all achievable in just a few clicks.

Today, I’ll guide you through the process of establishing a bi-directional data flow in mere minutes.

The Write-Back Feature

The write-back feature supports Amazon S3 as a source, allowing data to be written directly to an SAP system at the OData layer. Users can also create deep entities using the SAP OData deep insert feature. Additionally, clients have the option to enhance data security during transit between AWS and SAP systems with AWS PrivateLink.
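As a sketch of what a deep insert can carry, the input file on Amazon S3 might hold a header entity with nested child entities in a single JSON record, so that parent and children are created in one OData call. The entity and field names below are hypothetical, not taken from a real SAP service:

```python
import json

# Hypothetical deep-insert payload: a sales order header with nested items,
# stored on S3 as one JSON record for an AppFlow SAP OData write-back flow.
deep_entity = {
    "SalesOrder": "0000004711",
    "SalesOrderType": "OR",
    "SoldToParty": "0010100001",
    "to_Item": [  # nested child entities created in the same OData call
        {"SalesOrderItem": "10", "Material": "MAT-001", "RequestedQuantity": "5"},
        {"SalesOrderItem": "20", "Material": "MAT-002", "RequestedQuantity": "2"},
    ],
}

payload = json.dumps(deep_entity)
```

Because the record is hierarchical, the passthrough mapping option described later is the natural fit for flows carrying payloads like this.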

This new capability opens up a variety of use cases: writing back data from Amazon Redshift or AWS Lambda, or business data enriched by Amazon AI/ML services such as Amazon SageMaker, Amazon Rekognition, Amazon Textract, or Amazon Lookout for Vision.

In the next section, we will demonstrate how to quickly set up an example showcasing what can be achieved using Amazon AppFlow, SAP, and AWS native services.

The Amazon AppFlow SAP OData Connector

Before diving into the detailed setup for the write-back feature and a complete use case example, let’s recap the basics of the SAP OData Connector. Amazon AppFlow presents a significant cost-saving advantage when compared to in-house connector development or enterprise integration platforms. There are no upfront fees associated with AppFlow; clients are billed only based on the number of flows executed and the volume of data processed. The SAP OData Connector facilitates direct integration of SAP with AWS services without requiring any additional adapters or licenses—all configured through a straightforward AppFlow interface.

Moreover, the Amazon AppFlow SAP OData Connector is compatible with AWS PrivateLink, adding an extra layer of security. When data flows from the SAP application to your Amazon S3 bucket via AWS PrivateLink, the traffic remains within the AWS network, avoiding the public internet (for more info, check out the details on Private Amazon AppFlow flows).

Organizations utilizing on-premises SAP systems can also leverage the Amazon AppFlow SAP OData Connector by combining AWS PrivateLink with an AWS VPN or AWS Direct Connect, eliminating the need for public IP addresses for SAP OData endpoints.

Configuring Your Flow

In previous discussions, we outlined how to establish a connection and configure an extract flow. The setup steps for configuring a write-back connection are identical, and you can also consult our Amazon AppFlow SAP OData Connector documentation for further guidance.

Once your connection is established, follow these steps to create the update flow in SAP through Amazon AppFlow SAP OData:

  1. Configure Flow: In this configuration screen, select the source Amazon S3 bucket, target SAP connection with Service Entity Sets, and file formats for reading data from Amazon S3 (JSON or CSV). You may also choose a destination for response handling to write response data into a designated S3 bucket. In the error handling section, determine how the flow should react if it fails to write a record to the destination—either stop the current flow run or ignore and continue.
  2. Data Mapping: During the mapping step, select your method for mapping source to destination fields—manually, via a CSV file, or passthrough without modification (recommended for hierarchical input data). In the destination record preference section, choose to either insert new data records or update existing ones. Note that the Upsert operation is not supported for the SAP OData connector.
  3. Create Flow: Confirm the flow parameters and create the flow.
  4. Run the Flow: Trigger the flow execution.
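The same steps can also be driven through the AWS SDK. The sketch below builds a CreateFlow request for an on-demand S3 → SAP OData write-back flow; the request shape follows the boto3 Amazon AppFlow API, but the bucket names, connection profile name, OData object path, and the passthrough task properties are placeholders you would replace with your own values:

```python
# Sketch of the S3 -> SAP OData write-back flow as a boto3 CreateFlow request.
# All names marked "placeholder" are assumptions, not values from the article.
flow_request = {
    "flowName": "s3-to-sap-writeback",
    "triggerConfig": {"triggerType": "OnDemand"},
    "sourceFlowConfig": {
        "connectorType": "S3",
        "sourceConnectorProperties": {
            "S3": {
                "bucketName": "my-source-bucket",          # placeholder
                "bucketPrefix": "sap-updates",
                "s3InputFormatConfig": {"s3InputFileType": "JSON"},
            }
        },
    },
    "destinationFlowConfigList": [
        {
            "connectorType": "SAPOData",
            "connectorProfileName": "my-sap-connection",   # placeholder
            "destinationConnectorProperties": {
                "SAPOData": {
                    # placeholder OData service entity path
                    "objectPath": "/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder",
                    "writeOperationType": "INSERT",  # or UPDATE; UPSERT is not supported
                    "errorHandlingConfig": {"failOnFirstDestinationError": True},
                }
            },
        }
    ],
    # Passthrough mapping, recommended for hierarchical (deep-insert) input.
    "tasks": [{"sourceFields": [], "taskType": "Map_all", "taskProperties": {}}],
}

def create_writeback_flow():
    """Create and start the flow; requires AWS credentials at call time."""
    import boto3  # imported lazily so the request above is inspectable offline
    appflow = boto3.client("appflow", region_name="us-east-1")
    appflow.create_flow(**flow_request)
    return appflow.start_flow(flowName=flow_request["flowName"])
```

Calling `create_writeback_flow()` would then execute steps 3 and 4 programmatically.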

Putting It All Together

With the ability to set up flows for both extracting from and writing back to SAP and S3 using the Amazon AppFlow SAP OData Connector, we can extend SAP business processes with AWS native services. This example architecture integrates AI/ML services and SAP to automate invoice processing:

  1. Scanned invoices from vendors are stored in Amazon S3.
  2. Sales Order data is extracted from SAP S/4HANA using the Amazon AppFlow SAP OData Connector.
  3. Amazon Textract processes incoming scanned invoices, extracting text using machine learning.
  4. AWS Step Functions orchestrates the workflow.
  5. The extracted invoice data is processed and stored in S3.
  6. The AWS Step Functions workflow compares the invoice data from SAP with the scanned invoice data.
  7. If a match is identified, an Amazon DynamoDB table is updated with the Sales Order and Invoice details.
  8. If no match occurs, an exception is raised, and an email alert is sent to the operations team via Amazon SNS for investigation.
  9. The workflow concludes with the generation of update files in JSON format based on matched records in DynamoDB.
  10. Finally, Amazon AppFlow writes the updates back to the SAP S/4HANA system using the SAP OData connector.
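The matching decision in steps 6–8 boils down to a small piece of logic. The sketch below stands in for the Step Functions choice state and its Lambda task; the field names and tolerance are hypothetical, and the DynamoDB update and SNS publish are indicated in comments rather than performed:

```python
def match_invoice(sap_order: dict, scanned_invoice: dict) -> dict:
    """Compare SAP sales order data with data extracted from a scanned
    invoice and decide the next workflow action. Field names are hypothetical."""
    matched = (
        sap_order["purchase_order"] == scanned_invoice["purchase_order"]
        and abs(sap_order["net_amount"] - scanned_invoice["total"]) < 0.01
    )
    if matched:
        # Real workflow: update the DynamoDB table, then generate the JSON
        # update file that AppFlow writes back to SAP S/4HANA (steps 7, 9, 10).
        return {"action": "update_dynamodb", "sales_order": sap_order["sales_order"]}
    # No match: exception path -> SNS email to the operations team (step 8).
    return {"action": "notify_operations", "invoice": scanned_invoice["invoice_id"]}

result = match_invoice(
    {"sales_order": "4711", "purchase_order": "PO-100", "net_amount": 250.00},
    {"invoice_id": "INV-9", "purchase_order": "PO-100", "total": 250.00},
)
```

Keeping the comparison in a pure function like this makes the branch easy to unit test independently of the surrounding state machine.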

All this is achieved using native AWS services, without the need for server provisioning or costly enterprise licenses. The Amazon AppFlow SAP OData flows are configured effortlessly through a few clicks in the AWS console, ensuring full security through transport-level encryption and PrivateLink connectivity.

This is just one simple example of utilizing Amazon Textract; it can be applied to various other use cases such as document processing with SAP, automating workflows for sensitive tax documents, or indexing historical records using Textract.
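For invoice-style documents, Textract's AnalyzeExpense API returns labeled summary fields that can be flattened for the comparison step. The helper below follows the shape of an AnalyzeExpense response; the sample response is hand-built for illustration, and a real call (shown in the comment) would require AWS credentials:

```python
def summary_fields(analyze_expense_response: dict) -> dict:
    """Flatten the summary fields of an Amazon Textract AnalyzeExpense
    response into a {field_type: value} dict."""
    fields = {}
    for doc in analyze_expense_response.get("ExpenseDocuments", []):
        for field in doc.get("SummaryFields", []):
            fields[field["Type"]["Text"]] = field["ValueDetection"]["Text"]
    return fields

# Hand-built sample in the shape Textract returns; a real call would be:
#   boto3.client("textract").analyze_expense(
#       Document={"S3Object": {"Bucket": "my-bucket", "Name": "invoice.pdf"}})
sample = {
    "ExpenseDocuments": [
        {
            "SummaryFields": [
                {"Type": {"Text": "INVOICE_RECEIPT_ID"},
                 "ValueDetection": {"Text": "INV-9"}},
                {"Type": {"Text": "TOTAL"},
                 "ValueDetection": {"Text": "250.00"}},
            ]
        }
    ]
}

parsed = summary_fields(sample)
```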

The architectural pattern and approach of leveraging AWS native services to enhance your processes aligns with the strategies of both SAP and AWS, maintaining the integrity of the core SAP system. Clients adopting this methodology will benefit from reduced customization and its associated overheads. They will also gain from the rapid innovation available within AWS services, enabling faster time to market.

The potential for our clients is truly exciting, with numerous AI/ML use cases available to enrich SAP business data using similar patterns. For example, you can enrich SAP time series data with Amazon Forecast for predictive forecasting in manufacturing, workforce, or finance processes, or integrate SAP QM with Amazon Lookout for Vision to perform anomaly detection in manufacturing processes.


