RISC-V Enables Performant and Flexible AI and ML Compute

RISC-V: introducing new paradigms to the world of hardware design with software-focused hardware.


03 Sep, 2024. 6 min read

The emergence of Artificial Intelligence (AI) and Machine Learning (ML) is one of the most significant computing trends in recent history. According to research, by 2027, spending on AI software alone will grow to nearly $300B, with a CAGR of 19.1% [1]. And, as the software half of the AI/ML world is growing at meteoric rates, the hardware side is also teeming with innovation. Similar research suggests that the AI accelerator market will increase from $21B in 2024 to $33B by 2028 [2].

AI/ML is a software-driven pursuit and can have very different requirements depending on the market and application (e.g., automotive, IoT, cloud, training vs. inference). Supporting this fast-moving and diverse software market requires a collaborative, industry-wide effort to develop flexible, customized, domain-specific hardware solutions optimized for this range of AI workloads. Sustaining that innovation means the market should not be limited to a few key players. Rather, the process needs to be democratized so that everyone, from enterprises to startups, can contribute their ideas to the growing AI/ML computing landscape.

RISC-V has become a preferred standard for designing the industry’s most performant and efficient AI/ML computing resources. Let’s take a look at what’s happening in the world of RISC-V for AI/ML and discover why RISC-V is a key technology for supporting the industry’s future. 

Why RISC-V for AI/ML?

RISC-V is a compelling architecture for the development of AI/ML systems. The extensible, industry-standard RISC-V ISA enables a software-focused approach to AI hardware, freeing developers from the restrictions of proprietary compute. As an industry-standard ISA, RISC-V provides a common language for AI development and a cohesive ecosystem for AI/ML work. This robust and highly capable ecosystem of member companies, organizations, technologists, enthusiasts and academics brings the expertise and technologies that can deliver future generations of innovative AI systems.

Software-Focused Hardware

RISC-V introduces new paradigms to the world of hardware design by enabling the concept of software-focused hardware. AI/ML is software defined, built on frameworks such as TensorFlow, ONNX and oneAPI, and on rapidly evolving algorithms. Transformers, the basis of the recent boom in AI, were only introduced in 2017. New algorithms will emerge and will require new hardware to run on; while it is not possible to predict these algorithms, RISC-V lets the industry prepare to support them with optimized hardware delivered quickly.

Historically, software designers have been constrained by the restrictions of proprietary hardware platforms. The development process involved designing applications around the constraints of predefined hardware, ultimately limiting performance and efficiency. Not every AI/ML workload belongs on a GPU. Instead of developing software to fit the constraints of predefined hardware, RISC-V's modular architecture enables industry-leading differentiation through custom processors and accelerators designed precisely for the requirements of specific AI workloads, from energy-efficient IoT devices to powerful HPC applications.

The configurability and openness of RISC-V enable a "software-focused" design paradigm, wherein hardware customization for specific workloads becomes feasible without the constraints imposed by proprietary instruction sets. This is achieved through RISC-V's modular architecture, which allows developers to define custom extensions and instructions tailored to their unique workloads. For example, in AI/ML applications, developers can implement specialized instructions for matrix multiplications and vector processing, key operations in neural network computations.
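As a concrete illustration of how a custom instruction can be surfaced to software, the sketch below uses C with GCC/Clang inline assembly and the assembler's .insn directive to emit an instruction in the custom-0 opcode space that RISC-V reserves for vendor extensions. The instruction shown here, a packed 8-bit dot-product operation, is purely hypothetical; its encoding fields and semantics are assumptions for illustration, not part of any ratified extension.

```c
#include <stdint.h>

/* Hypothetical packed 8-bit dot-product instruction in the custom-0 opcode
 * space (major opcode 0x0B, funct3 = 0, funct7 = 0). The encoding and the
 * semantics are illustrative assumptions, not a ratified RISC-V extension. */
static inline uint32_t dot4_i8(uint32_t packed_a, uint32_t packed_b)
{
    uint32_t acc;
    __asm__ volatile(
        ".insn r 0x0B, 0x0, 0x0, %0, %1, %2"   /* rd = custom op(rs1, rs2) */
        : "=r"(acc)
        : "r"(packed_a), "r"(packed_b));
    return acc;
}
```

On a core that implements the instruction, this wrapper compiles down to a single opcode; on a stock core it would trap, so wrappers like this are normally guarded by build-time configuration.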

Using data-parallel operations such as Single Instruction, Multiple Data (SIMD) and vector instructions, RISC-V processors can process many data elements with a single instruction, significantly boosting performance in the parallel processing tasks common in AI/ML workloads. Additionally, hardware accelerators tailored to AI tasks can be integrated seamlessly. These accelerators, including tensor processing units (TPUs) or specialized neural processing units (NPUs), are designed to handle the intensive mathematical operations required by deep learning models, such as convolutions and the matrix operations used in backpropagation.
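For the standard path, the RISC-V Vector extension (RVV) already provides this data-parallel capability. The sketch below is a minimal multiply-accumulate kernel written with the RVV C intrinsics; the __riscv_-prefixed names follow the v1.0 intrinsics API shipped with recent GCC and LLVM toolchains (older compilers use slightly different names), and it assumes compilation for a vector-capable target such as -march=rv64gcv.

```c
#include <stddef.h>
#include <riscv_vector.h>

/* y[i] += a * x[i]: the fused multiply-accumulate at the heart of many
 * neural-network layers, processed in hardware-sized chunks. vsetvl asks the
 * core how many 32-bit elements it can handle per iteration, so the same
 * binary runs on both narrow and wide vector implementations. */
void saxpy_rvv(size_t n, float a, const float *x, float *y)
{
    size_t i = 0;
    while (i < n) {
        size_t vl = __riscv_vsetvl_e32m8(n - i);             /* elements this pass */
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(&x[i], vl);  /* load x chunk */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(&y[i], vl);  /* load y chunk */
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);         /* vy += a * vx */
        __riscv_vse32_v_f32m8(&y[i], vy, vl);                /* store result */
        i += vl;
    }
}
```

Because the vector length is queried at run time rather than baked into the code, the same kernel scales from small embedded vector units to wide datacenter implementations.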

This high degree of customization leads to superior computational efficiency, as the processor is optimized for specific tasks rather than relying on a generic, one-size-fits-all solution. Moreover, RISC-V’s ability to support fine-grained power management techniques reduces power consumption. For instance, designers can implement dynamic voltage and frequency scaling (DVFS) tailored to the specific needs of AI/ML tasks, reducing power usage without compromising performance.
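As a sketch of what workload-aware power management can look like from software, the snippet below caps a core's frequency through the Linux cpufreq sysfs interface before a latency-tolerant inference job runs. It assumes a Linux target that exposes the standard cpufreq files and a process with permission to write them; the path, frequency value, and policy are illustrative only.

```c
#include <stdio.h>

/* Write a single value to a sysfs attribute; returns 0 on success. */
static int write_sysfs(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    int ok = (fprintf(f, "%s\n", value) > 0);
    fclose(f);
    return ok ? 0 : -1;
}

int main(void)
{
    /* Cap CPU0 at 600 MHz (cpufreq expects kHz) while a background
     * inference job runs; a real DVFS policy would also restore it. */
    return write_sysfs(
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq", "600000");
}
```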

Furthermore, the RISC-V standard promotes rapid innovation and cost-effective development. Developers can contribute to and benefit from a global ecosystem of shared knowledge and resources, facilitating rapid innovation and ultimately a faster time-to-market. This collective effort ensures that the latest advancements in AI/ML can be quickly integrated into hardware designs, keeping pace with the fast-evolving demands of the field.

Creating a Common Language

The RISC-V instruction set architecture (ISA) provides a uniform language for hardware and software developers, creating a cohesive ecosystem for AI/ML development. It allows developers to create tailored, domain-specific solutions with greater flexibility. Importantly, the controlled way in which this is enabled ensures that foundational software elements, such as those running on Linux, remain consistent and interoperable across different RISC-V implementations.

Ultimately, RISC-V's unique proposition lies in its ability to foster AI/ML innovation by letting the industry build customized, differentiated solutions from standard and non-standard extensions. This enables domain-specific acceleration while maintaining the productivity benefits of a single, coherent programming model. RISC-V defines a small base feature set for implementers to use, along with ratified extensions which they can adopt as required. Inside a single RISC-V based AI/ML SoC, different RISC-V cores can be deeply customized for different jobs, such as SoC control or matrix multiplication, with all of these cores based on the same ISA, making them easier to program, as the sketch below illustrates.
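Because the compiler advertises which standard extensions are enabled (for example, it defines __riscv_vector when the V extension is on), one C source file can serve both a vector-capable compute core built with something like -march=rv64gcv and a small control core built with -march=rv64imac. The function below is a minimal sketch under those assumptions.

```c
#include <stddef.h>
#if defined(__riscv_vector)
#include <riscv_vector.h>
#endif

/* Scale a buffer in place. The vector path is compiled only when the V
 * extension is enabled for this translation unit; a control core without
 * vectors gets the plain scalar loop, yet both share the same base ISA. */
void scale_buffer(float *buf, float factor, size_t n)
{
#if defined(__riscv_vector)
    size_t i = 0;
    while (i < n) {
        size_t vl = __riscv_vsetvl_e32m4(n - i);
        vfloat32m4_t v = __riscv_vle32_v_f32m4(&buf[i], vl);
        v = __riscv_vfmul_vf_f32m4(v, factor, vl);  /* v *= factor */
        __riscv_vse32_v_f32m4(&buf[i], v, vl);
        i += vl;
    }
#else
    for (size_t i = 0; i < n; i++)                  /* scalar fallback */
        buf[i] *= factor;
#endif
}
```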

To add to this flexibility, RISC-V permits the addition of vendor-specific extensions so that products can be differentiated with application-specific functionality. RISC-V also provides guidance and branding to implementers on interoperability requirements in particular market segments through the definition of "RISC-V Profiles".

Offering a standardized yet flexible platform allows designers to rapidly integrate cutting-edge research into hardware without being hampered by proprietary constraints. Developers can leverage community contributions, such as optimized libraries and toolchains, to expedite the development process and enhance the performance of their AI/ML solutions.

Beyond this, the RISC-V ISA presents a significant advantage in providing standardized, modular building blocks for various computing applications. This standardization enables hardware designers to develop and integrate components more seamlessly, reducing the complexity and cost typically associated with custom designs.

By making custom designs more accessible, RISC-V strengthens the business case for creating platforms optimized for specific use cases, rather than compromising with more generic hardware and software built for other high-volume markets.

A Highly Capable Ecosystem Delivering AI/ML Solutions on RISC-V

The capabilities of the RISC-V member ecosystem enable AI developers to deliver innovative, software-focused AI systems. From processor IP and design services to simulation and software components, the collective knowledge of these companies and organizations covers everything required to create custom, software-focused AI/ML processing, enabling developers to concentrate on their innovation. The range of choice offered by this healthy market of ecosystem members prevents vendor lock-in and mitigates some of the supply chain challenges faced by other architectures. Alongside the capability to create custom processing, the RISC-V ecosystem also offers a range of pre-designed AI silicon targeting different workloads.

There's no doubt that the RISC-V ecosystem has taken a foothold in the industry at large. In fact, we estimate that more than 10 billion RISC-V processor cores have shipped to date [3]. Now we're seeing similar adoption in AI/ML hardware, where reports indicate that unit shipments of RISC-V-based AI SoCs will grow at a 73.6% CAGR through 2027, with AI SoC unit shipments projected to reach 25 billion by then [4]. RISC-V is quickly becoming the de facto standard for building AI accelerators.

Here are some of the highlights of RISC-V’s use in AI/ML.

  • Meta’s MTIA v1 AI Inference Accelerator [5]

  • Esperanto ET-SoC-1 [6]

  • Google’s TPU [7]

  • SiFive Intelligence X390 [8]

  • Tenstorrent Wormhole [9]

  • StreamComputing STPC920 [10]

  • Ventana Veyron V2 [11]

  • Andes Technology QiLai SoC [12]

  • Semidynamics ‘All-In-One AI’ IP [13]

  • Axelera AI Metis AI Platform [14]

  • Project Open Se Cura [15]

  • Codeplay oneAPI Construction Kit [16]

Conclusion

As the AI/ML market continues to grow exponentially, RISC-V is poised to be a formidable driver of innovation and change. By offering a modular, open, and customizable architecture, RISC-V gives designers the creativity, power, and speed of innovation needed to address the diverse and evolving needs of AI/ML applications. Developers are empowered with the flexibility to tailor solutions to specific requirements while working in a unified and interoperable ecosystem. Finally, the community-driven innovation model of RISC-V accelerates the development of cutting-edge technologies and democratizes access to advanced hardware design for all.

For more information on how RISC-V is driving the future of AI, visit www.riscv.org/ai.

References

[1] https://www.gartner.com/en/documents/4925331

[2] https://www.gartner.com/en/newsroom/press-releases/2024-05-29-gartner-forecasts-worldwide-artificial-intelligence-chips-revenue-to-grow-33-percent-in-2024

[3] https://riscv.org/news/2022/07/europe-steps-up-as-risc-v-ships-10bn-cores-nick-flaherty-ee-news-europe/

[4] https://semico.com/content/analyzing-risc-v-cpu-market-sip-socs-ai-and-design-starts

[5] https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/

[6] https://www.esperanto.ai/products/

[7] https://riscv.org/wp-content/uploads/2018/12/Antmicro-RISC-V-Summit-Keynote.pdf

[8] https://www.sifive.com/cores/intelligence-x390

[9] https://tenstorrent.com/hardware/wormhole 

[10] https://www.streamcomputing.com/en/index.php?s=product&c=category&id=2

[11] https://www.ventanamicro.com/ventana-introduces-veyron-v2/

[12] https://www.andestech.com/en/2024/05/30/andes-technology-announced-the-qilai-soc-and/

[13] https://semidynamics.com/en/resources/press-releases

[14] https://www.axelera.ai/blog/the-metis-ai-platform-in-detail

[15] https://opensource.googleblog.com/2023/11/project-open-se-cura-open-source-announcement.html

[16] https://developer.codeplay.com/products/oneapi/construction-kit/home/