Embedded software developers in the automotive sector have followed a conventional development approach that has remained largely unchanged for decades. The targeted embedded systems for which software is developed usually possess limited resources, including constrained memory, processing power, and input/output capabilities. Furthermore, development systems for embedded applications are often costly and in limited supply, especially for teams working on new electronic control units (ECUs) and system-on-chip (SoC) architectures. This challenge has been exacerbated by recent global chip shortages affecting the automotive industry.
As a result, software developers typically avoid working directly on the embedded platforms themselves, opting instead to develop software on more powerful host machines. Executables built with the host's compilers and toolchains cannot run directly on the embedded targets, and often the embedded operating system (OS) cannot run on the host machine either. Developers therefore resort to cumbersome OS emulation tools and cross-compilation techniques, which require special compilers to generate executable code for the target systems. After the code is transferred to the development system, final integration and validation testing can occur, but scaling is constrained by the number of available hardware platforms. Present-day workflows for developing, integrating, and validating embedded systems typically follow the process illustrated below:
This blog aims to introduce a foundational aspect of a new automotive software development paradigm that helps developers create, test, and debug natively compiled software in the cloud, thereby streamlining the workflow. This concept is known as environmental parity, which Kevin Hoffman sums up as follows:
“Environmental parity is designed to instill confidence in your team and organization that the application will function seamlessly across various platforms.” —Kevin Hoffman
Achieving environmental parity is essential for realizing cloud-native, software-defined vehicles.
Arm, AWS, and SOAFEE
Arm technology is shaping the future of computing. Its energy-efficient processor designs and software platforms have enabled advanced computing in more than 225 billion chips, securely powering products from sensors to smartphones and supercomputers. Collaborating with more than 1,000 technology partners, Arm is enabling artificial intelligence everywhere and establishing a foundation for cybersecurity trust—from chip to cloud. Arm architectures are the most prevalent in modern vehicles, powering everything from infotainment systems to body control, including complex functional safety-related computing workloads.
In 2021, Arm, AWS, and other founding members launched the Scalable Open Architecture for Embedded Edge (SOAFEE) Special Interest Group, which unites automakers, semiconductor firms, and cloud technology leaders to establish a new open-standards-based architecture for implementing the foundational layers of a software-defined vehicle stack. This initiative provides a reference implementation enabling cloud-native technologies—like microservices, containers, and orchestration systems—to be integrated with automotive functional safety for the first time, ensuring environmental parity.
ISA Parity from Cloud to Embedded Edge with SOAFEE
Arm and AWS share a longstanding history, including the use of 64-bit Arm cores in AWS's custom-built silicon that powers AWS Graviton processors, which offer optimal price-performance for cloud workloads. With the increasing availability of these processors across various instance types, automotive developers can create and deploy cloud-native applications and toolchains using the same Arm intellectual property (IP) and tools that are utilized in embedded automotive platforms like the AVA Developer Platform from ADLINK, an AWS Partner.
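Because AWS Graviton instances and Arm-based embedded platforms share the AArch64 instruction set, a quick architecture check returns the same result in both environments. Here is a minimal sketch (Python is used purely for illustration):

```python
import platform

# On an AWS Graviton instance and on an Arm-based embedded board such as the
# AVA Developer Platform, this reports the same 64-bit Arm architecture.
print(platform.machine())   # expected output on both: 'aarch64'
```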
Leveraging this IP both in the cloud and at the edge enables developers to achieve initial levels of environmental parity at the instruction set architecture (ISA) level. However, to reach higher levels of parity, automotive-grade OS, abstraction, and orchestration layers are essential, which SOAFEE strives to support. The diagram below illustrates different parity levels and associated developer personas.
Achieving such levels of parity will unlock entirely new workflows for embedded software development and validation, as explained in the following sections.
Facilitating OS-Level Parity Using ISA Parity
To demonstrate the concepts outlined in this blog, we will create a custom Linux distribution utilizing the Yocto Project, an open-source initiative that aids developers in crafting custom Linux-based systems independent of the hardware. It is widely used in embedded projects, including the Automotive Grade Linux initiative under the Linux Foundation and the SOAFEE reference implementation.
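As a rough sketch of what such a build typically looks like, the snippet below wraps the standard Poky workflow in Python to keep all examples in one language; the branch, machine, and image names (kirkstone, qemuarm64, core-image-minimal) are common defaults that you would replace with your project's own values:

```python
import subprocess

# Fetch the Yocto Project reference distribution (Poky). 'kirkstone' is an
# example release branch; substitute the branch your project tracks.
subprocess.run(
    ["git", "clone", "-b", "kirkstone", "git://git.yoctoproject.org/poky"],
    check=True,
)

# Initialize the build environment and build an image. 'qemuarm64' and
# 'core-image-minimal' are stock examples; a real project would set its own
# MACHINE and image recipe (for example, the EWAOL distribution images).
subprocess.run(
    "source poky/oe-init-build-env build && "
    "MACHINE=qemuarm64 bitbake core-image-minimal",
    shell=True,
    check=True,
    executable="/bin/bash",
)
```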
For OS-level parity, Arm and AWS have partnered to develop an Amazon Machine Image (AMI) for Graviton instances based on the Yocto Project. An AMI serves as a template containing a software configuration, including the OS and application software, that can launch an Amazon EC2 instance—essentially a virtual server in the cloud. The AMI we developed also incorporates Arm’s Edge Workload Abstraction and Orchestration Layer (EWAOL), an open-source reference implementation of the SOAFEE architecture, which promotes further application, container, and OS environmental parity.
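To make the AMI concept concrete, the sketch below launches a Graviton-based EC2 instance from such an image using boto3; the AMI ID, key pair name, and region are placeholders, and any Graviton instance family (C6g, M6g, C7g, and so on) would work:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

# Launch a Graviton (arm64) instance from the custom Yocto/EWAOL AMI.
# The AMI ID and key pair name below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c6g.xlarge",        # a Graviton2-based instance type
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```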
EWAOL offers a standards-based framework utilizing containers for deploying and orchestrating embedded applications. This strategy enables developers to commence coding and testing in the cloud, thus significantly enhancing and scaling their workflow. Specifically, SOAFEE facilitates:
- The cloud implementation of the entire embedded software stack, from the embedded OS upward, rather than just the unit of software under development; and
- Seamless portability between cloud and embedded edge—eliminating cross-compilation or emulation challenges (along with associated issues such as compilation errors or performance degradation), as illustrated in the container sketch below.
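As an illustration of this portability, the sketch below uses the Docker SDK for Python to run an arm64 container image; because the image targets AArch64, the identical call behaves the same on a Graviton instance running the EWAOL-based AMI and on an Arm embedded board (the image name and command are illustrative only):

```python
import docker

client = docker.from_env()

# Run a container built for 64-bit Arm and print the machine architecture it
# sees. The same image and the same call work unchanged on a Graviton cloud
# instance and on an Arm-based edge device.
output = client.containers.run(
    "arm64v8/alpine:3.19",   # an arm64 base image; illustrative only
    ["uname", "-m"],
    remove=True,
)
print(output.decode().strip())  # expected on both: aarch64
```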
For further details on EWAOL, visit the Arm GitLab.
Now that we have discussed the concepts of OS-level and ISA-level parity, we can modify the embedded developers’ workflow to eliminate many unnecessary steps, as shown in the diagram below.
The remainder of this blog will provide a tutorial on creating a custom Linux-based AMI for AWS Graviton instances using the Yocto Project. This tutorial is not solely focused on deploying an existing AMI on AWS; it offers developers a methodology for transitioning their own Linux images utilized in embedded systems to the cloud.
How to Create a Custom Linux AMI for AWS Graviton Processors Using the Yocto Project
This guide will walk you through the process of creating a custom Linux AMI for launching an Amazon EC2 instance. It is assumed that you already have access to an AWS account with the necessary permissions to create the required resources. If not, please create or activate an account.
Before you get started, please consult the detailed instructions available on GitLab. The repository contains the most up-to-date version of the steps described in this tutorial.
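The repository holds the authoritative commands. Purely as an illustration of the final step, registering a built Yocto disk image as a 64-bit Arm AMI with boto3 might look like the sketch below; the S3 bucket, object key, image name, and device names are placeholders, and the snapshot import assumes the vmimport service role for VM Import/Export has already been configured:

```python
import time

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

# 1. Import the raw Yocto disk image, previously uploaded to S3, as an EBS
#    snapshot. Bucket and key are placeholders.
task = ec2.import_snapshot(
    DiskContainer={
        "Format": "RAW",
        "UserBucket": {
            "S3Bucket": "my-yocto-images",
            "S3Key": "ewaol-image.rootfs.img",
        },
    }
)

# 2. Poll until the snapshot import completes.
snapshot_id = None
while snapshot_id is None:
    tasks = ec2.describe_import_snapshot_tasks(ImportTaskIds=[task["ImportTaskId"]])
    detail = tasks["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
    if detail.get("Status") == "completed":
        snapshot_id = detail["SnapshotId"]
    else:
        time.sleep(30)

# 3. Register the snapshot as a 64-bit Arm AMI suitable for Graviton instances.
ami = ec2.register_image(
    Name="yocto-ewaol-graviton",
    Architecture="arm64",
    VirtualizationType="hvm",
    EnaSupport=True,                  # enhanced networking is required on Graviton
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": snapshot_id}},
    ],
)
print(ami["ImageId"])
```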