Signal · Jon Peddie Research

RISC-V Is Quietly Becoming the Connective Tissue of AI Hardware

Jon Peddie Research maps how RISC-V is scaling from always-on keyword detection up through chiplet-based inference SoCs, with three distinct NPU integration architectures now in commercial production.

#risc-v #ai-hardware #embedded

Jon Peddie Research published a detailed survey of RISC-V's role across the AI hardware stack, from always-on fixed-function accelerators handling keyword detection up through multi-chiplet SoCs running transformer-class models at the edge.

The framing that matters: RISC-V is no longer just a research or hobbyist ISA. It is the connective tissue threading NPUs, vector extensions, and general-purpose cores into unified AI-native silicon. Three NPU integration architectures are now in commercial production: a discrete NPU alongside a RISC-V CPU (the inherited model, which carries bus-latency baggage); Semidynamics' unified CPU-vector-tensor compute element, targeting 8-64 TOPS for LLM inference; and a tighter academic approach that dynamically shares MAC units between the CPU and NPU, reporting a 1.87x speedup at 93.5% efficiency while cutting power 70% at low clock.
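The trade-off between the three integration styles is essentially about data movement: the discrete-NPU model pays an interconnect toll on every CPU-to-accelerator handoff, while the unified and MAC-sharing designs keep operands in one compute domain. A minimal back-of-envelope sketch, using purely illustrative numbers (the compute time, hop counts, and per-hop costs below are assumptions, not vendor figures; only the 1.87x speedup comes from the article):

```python
# Toy latency model for the three NPU integration architectures.
# All parameter values are illustrative assumptions, not measured data.

def inference_latency_us(compute_us: float, bus_hops: int, hop_us: float) -> float:
    """Per-inference latency: raw compute time plus CPU<->accelerator
    data movement across the interconnect."""
    return compute_us + bus_hops * hop_us

BASE_COMPUTE_US = 100.0  # assumed compute time for one inference
HOP_US = 5.0             # assumed cost of one bus transfer

# (a) Discrete NPU beside the CPU: each layer handoff crosses the bus.
discrete = inference_latency_us(BASE_COMPUTE_US, bus_hops=8, hop_us=HOP_US)

# (b) Unified CPU-vector-tensor element: operands stay in one
#     register/SRAM domain, so cross-domain hops largely vanish.
unified = inference_latency_us(BASE_COMPUTE_US, bus_hops=0, hop_us=HOP_US)

# (c) Dynamic MAC sharing: the article's 1.87x speedup on compute,
#     with a small assumed coordination overhead.
shared = inference_latency_us(BASE_COMPUTE_US / 1.87, bus_hops=2, hop_us=HOP_US)

print(f"discrete: {discrete:.1f} us, unified: {unified:.1f} us, shared: {shared:.1f} us")
```

Under these assumptions the discrete design loses on latency even when all three have identical compute hardware, which is the intuition behind tightly coupled designs like the S8200 discussed below.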

The commercially significant data point is the MIPS S8200 (MIPS is now owned by GlobalFoundries), which tightly couples RISC-V application cores with AI engines for low-latency CPU-to-inference data exchange. GlobalFoundries' ownership puts mature RISC-V AI silicon into a foundry context that can actually deliver volume.

The open-ISA argument for AI hardware is not new, but the toolchain maturity question remains real. Arm's proprietary ecosystem still wins on compiler support and depth of silicon-proven IP, per the CEPA survey that crossed the wire today. RISC-V is closing that gap faster at the embedded tier than at the hyperscaler tier. Watch the compiler and runtime story more than the ISA itself.