This article features contributions from Dr. Rachel Patterson, Chief Technical Innovator, and Liam Johnson, Chief Development Architect, at GameTech. GameTech is a prominent provider of interactive gaming solutions and casino products. Through the Gaming Platform as a Service (GPaaS) initiative, GameTech equips developers with essential tools like the Chanci Turner Game Designer.
Introduction
GameTech’s mathematicians and game designers rely on precise and comprehensive simulation results to enhance player experiences. While software developers have enjoyed the benefits of agile code iteration for years, mathematicians have often been stuck waiting for lengthy CPU-bound Monte Carlo simulations to yield results. This delay, reminiscent of the slow build-and-test cycles software engineers once endured, can extend for hours or even overnight. These statistical outputs are crucial in demonstrating game fairness within the tightly regulated online gaming industry.
To address this, GameTech has implemented an AWS Lambda serverless solution, significantly boosting compute capabilities and allowing game simulations to be completed in mere minutes rather than hours.
The Challenge
Monte Carlo game simulations are inherently parallelizable. Typically, a simulation involves executing a billion independent game plays and aggregating the results into a cohesive statistical output. GameTech has two primary stakeholders for the simulation function: game designers who seek to refine their designs based on gameplay statistics, and game engineers who perform software updates and certification builds, ensuring gameplay accuracy and generating necessary statistics for documentation and regulatory compliance.
The traditional approach to running a Monte Carlo simulation is to use a single server (either on-premises or an Amazon EC2 instance) running a heavily multi-threaded simulation. A typical simulation might employ twelve threads for eight hours, roughly 350K thread-seconds of execution (12 threads × 8 hours × 3,600 seconds ≈ 346,000 thread-seconds). This lengthy process carries significant drawbacks for both game designers and engineers:
- Game designers struggle to iterate efficiently on their designs.
- Game engineers face delays in building games and generating timely sign-off statistics.
- The situation worsens when multiple designers or engineers require simultaneous simulations.
Additionally, it is crucial that the game logic code being simulated matches precisely with the JVM code deployed in production. This consistency is vital due to the stringent regulations governing online gaming, which necessitate that simulation results align exactly with real player experiences.
Furthermore, each simulation must yield deterministic results. Being able to replicate simulations ensures that maintenance releases can be tracked accurately. Lastly, the iterative nature of game engine development demands a continuous delivery framework that swiftly incorporates developers’ changes, enabling simulations to be executed in just minutes.
GameTech supports numerous development teams working on multiple games concurrently, resulting in over 100 code updates daily, which frequently affect the core game engines (and thus the Lambda functions). From a quality assurance and regulatory standpoint, it’s essential to know exactly which version of every component was utilized in each simulation run.
The Solution
Monte Carlo simulations comprise three key components:
- Simulation orchestration managing multi-threaded operations.
- Individual simulation execution and result generation.
- Results reduction.
This process resembles a map-reduce framework, where the map corresponds to executing small batches of simulations, and the reduce stage aggregates the statistical outcomes.
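To make the analogy concrete, here is a minimal Python sketch of the two stages, assuming the per-round statistics can be reduced to simple counters. The `simulate_batch` and `reduce_results` names, the toy payout table, and the Counter-based result format are illustrative placeholders, not GameTech’s actual (JVM-based) game engine or output format.

```python
import random
from collections import Counter

def simulate_batch(num_rounds: int, seed: int) -> Counter:
    """Map step: play a batch of independent game rounds and summarise the results.

    The payout logic below is a toy placeholder for the real game engine."""
    rng = random.Random(seed)                        # deterministic per-batch RNG
    stats = Counter()
    for _ in range(num_rounds):
        payout = rng.choice([0, 0, 0, 1, 2, 10])     # toy payout table
        stats["rounds"] += 1
        stats["total_payout"] += payout
        stats[f"payout_{payout}"] += 1
    return stats

def reduce_results(partials):
    """Reduce step: merge per-batch statistics into a single aggregate."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

# A billion-round simulation is just many independent batches merged together.
batches = [simulate_batch(10_000, seed=s) for s in range(10)]
overall = reduce_results(batches)
print(overall["total_payout"] / overall["rounds"])   # average return per round
```

In the real system the map step runs inside the Lambda execution functions and the reduce step consolidates results stored in S3, as described below.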
The solution employs AWS Lambda for all three components, with Amazon S3 for storing intermediate and final outputs, Amazon API Gateway for managing function versioning, and Amazon CloudWatch for logging and cost monitoring. AWS Lambda enables rapid scaling to thousands of executing threads within seconds, transforming simulations that would typically take 50 hours on a single thread or several hours on a multi-core machine into mere minutes.
However, it is not feasible to simply launch thousands of execution functions simultaneously, all directing their results to a single location. Thus, each simulation run is executed in four phases:
- Validation: This phase ensures that the simulation is likely to complete in a reasonable time frame, since slow code can lead to exorbitant costs (see the validation sketch after this list).
- Fan Out: An initial Lambda function triggers multiple fan-out functions, which, in turn, invoke additional functions leading to the simulation execution functions.
- Simulation Execution: These Lambda functions carry out small batches of game simulations (typically 10,000).
- Results Reduction: The final phase aggregates the simulation output statistics through a multi-stage reduction process.
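As a rough illustration of the validation phase, the sketch below times a 1,000-round sample and extrapolates to the full simulation size. The thread-second budget, the module name in the import, and the decision to abort with an exception are assumptions made for illustration, not GameTech’s actual validation rules.

```python
import time

from simulation import simulate_batch    # the map step from the earlier sketch (hypothetical module)

SAMPLE_ROUNDS = 1_000                     # small sample used to estimate timing
MAX_THREAD_SECONDS = 500_000              # illustrative compute budget, not a real GameTech limit

def validate(total_rounds: int, seed: int) -> bool:
    """Time a small sample batch and extrapolate to the full simulation size."""
    start = time.perf_counter()
    simulate_batch(SAMPLE_ROUNDS, seed)
    elapsed = time.perf_counter() - start
    projected = elapsed * (total_rounds / SAMPLE_ROUNDS)
    print(f"sample took {elapsed:.3f}s; projected full run: {projected:,.0f} thread-seconds")
    return projected <= MAX_THREAD_SECONDS

if not validate(total_rounds=1_000_000_000, seed=42):
    raise RuntimeError("Simulation rejected: projected run time exceeds the configured budget")
```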
The Architecture
Validation occurs once per simulation to confirm that it will execute within an acceptable timeframe. This includes standard input data validation and executing 1,000 simulations to affirm timeliness. Upon successful validation, the fan-out phase begins. A single fan-out function initiates a cascade, where each function can call up to 100 others, eventually leading to 100 simulation execution functions. Determinism is maintained by initializing a seed in the start function, which is then passed down to each fan-out and execution function, ensuring consistent results with identical configurations.
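One level of that cascade might look roughly like the following Lambda handler. The function names, payload fields, and the parent-to-child seed derivation are assumptions for illustration; the essential idea is that every child invocation receives a seed computed deterministically from its parent’s, so an identical configuration always replays identically.

```python
import json
import boto3

lambda_client = boto3.client("lambda")
FAN_OUT_FACTOR = 100                      # each fan-out function invokes up to 100 children

def fan_out_handler(event, context):
    """Fan-out phase: split the requested rounds across child invocations.

    The payload fields (seed, depth, rounds, run_id) and the function names
    below are assumed for illustration; the real payload is GameTech-specific.
    """
    seed = event["seed"]
    depth = event["depth"]
    rounds_per_child = event["rounds"] // FAN_OUT_FACTOR

    for i in range(FAN_OUT_FACTOR):
        # Deriving each child seed from the parent seed keeps the whole run
        # deterministic: the same top-level seed always yields the same batches.
        child_payload = {
            "seed": seed * FAN_OUT_FACTOR + i,
            "depth": depth - 1,
            "rounds": rounds_per_child,
            "run_id": event["run_id"],
        }
        target = "fan-out-function" if depth > 1 else "execution-function"  # hypothetical names
        lambda_client.invoke(
            FunctionName=target,
            InvocationType="Event",       # asynchronous, so the cascade spreads quickly
            Payload=json.dumps(child_payload),
        )
```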
Each execution function processes a batch of game rounds (typically 10,000 to 100,000) and records the statistical results in Amazon S3. The following stage reduces the execution function results into a single outcome. For a simulation involving one billion game rounds with 10,000 rounds per execution invocation, this produces 100,000 result objects, so the reduction phase must be synchronized with the execution phase. Various mechanisms for this synchronization were explored, and an external polling mechanism proved the most effective, iteratively gathering and consolidating S3 results until a single final output is achieved.
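A simplified sketch of that polling loop is shown below. The bucket name, key layout, and expected-output count are hypothetical, and the multi-stage reduction described above is collapsed into a single merge for brevity.

```python
import json
import time
import boto3
from collections import Counter

s3 = boto3.client("s3")
BUCKET = "simulation-results"             # hypothetical bucket and key layout
POLL_INTERVAL_SECONDS = 15

def poll_and_reduce(run_id: str, expected_outputs: int) -> Counter:
    """Poll S3 until every execution function has written its batch statistics,
    then merge them into a single aggregate result."""
    merged = Counter()
    seen = set()
    while len(seen) < expected_outputs:
        pages = s3.get_paginator("list_objects_v2").paginate(
            Bucket=BUCKET, Prefix=f"runs/{run_id}/partial/"
        )
        for page in pages:
            for obj in page.get("Contents", []):
                key = obj["Key"]
                if key in seen:
                    continue
                body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
                merged.update(Counter(json.loads(body)))   # one batch's statistics per object
                seen.add(key)
        if len(seen) < expected_outputs:
            time.sleep(POLL_INTERVAL_SECONDS)              # wait for stragglers, then poll again

    s3.put_object(
        Bucket=BUCKET,
        Key=f"runs/{run_id}/final.json",
        Body=json.dumps(dict(merged)),
    )
    return merged
```

Because each partial result is a small, mergeable statistical summary rather than raw game-round data, the consolidation step stays cheap even for billion-round simulations.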