IP Subsystems and Chiplets for Edge and AI Accelerators

From a business viewpoint we often read in the technical press about the virtues of applying AI. In the early days most AI model building was done in the cloud because of the high computation requirements, yet there's a developing trend now to run AI accelerators at the Edge. The other mega-trend of the past decade is that the RISC-V ISA has been applied to more and more tasks, and the momentum is only growing. Ketan Mehta from OpenFive presented at IP-SoC Silicon Valley 2022 in April, so I attended to see what's happening with RISC-V, chiplets, the Edge and AI accelerators.

OpenFive was founded in 2003 under the name Open-Silicon, is venture-funded, and has grown swiftly to over 600 people today, providing custom silicon design services that have resulted in over 350 tape-outs. They have engineering expertise in RISC-V, memory IP, and connectivity IP like chip-to-chip, die-to-die, and even chiplets.

The data center is morphing as the demands of HPC (High Performance Computing) continue to build: processors now connect to accelerator NICs using the CXL standard, and accelerators with HBM (High Bandwidth Memory) connect to processors through CXL as well. Cache memory IP is often used inside these accelerators. OpenFive uses its experience designing scalable chiplets to address some of these technology challenges by meeting system design requirements like:

  • Low latency – sub-10ns
  • Low footprint – edge devices, PCIe server cards
  • Low power – from 0.5W to 10W
  • High throughput – Tbps/mm
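As a minimal sketch of how a designer might sanity-check a candidate design against a budget like the one above, here is a small Python helper. The `DesignPoint` type, the example numbers, and the 1 Tbps/mm throughput floor are all illustrative assumptions, not figures from the presentation.

```python
from dataclasses import dataclass

@dataclass
class DesignPoint:
    latency_ns: float           # die-to-die link latency
    power_w: float              # total power envelope
    throughput_tbps_mm: float   # bandwidth per mm of die edge

def meets_requirements(d: DesignPoint) -> bool:
    """Check a design point against the budgets listed above."""
    return (d.latency_ns < 10.0              # low latency: sub-10 ns
            and 0.5 <= d.power_w <= 10.0     # power range: 0.5 W to 10 W
            and d.throughput_tbps_mm >= 1.0) # assumed 1 Tbps/mm floor

# A hypothetical edge-device design point:
edge_device = DesignPoint(latency_ns=6.0, power_w=3.5, throughput_tbps_mm=1.2)
print(meets_requirements(edge_device))  # True
```

A real sign-off flow would of course involve far more than three scalar checks, but the point is that these budgets are concrete, testable numbers rather than marketing adjectives.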

For connecting HBM, LPDDR and D2D (Die-to-Die) interfaces, OpenFive has designed three IP products.

A scalable chiplet platform from OpenFive can combine compute, memory and connectivity IP, with both subsystems (green) and custom IP (blue).

Ketan presented an example of an Edge AI system using four RISC-V cores, hardware accelerators, memory controllers, and IO connectivity, all at a power target under 5W.

Engineers at OpenFive have already delivered several scalable chiplet platforms to customers in process nodes ranging from 5nm up to 16nm, using a variety of RISC-V cores, memory IP and interconnect IP combinations.

Chiplets are a way to combine multiple die in a single package to achieve higher yields at lower cost than a single large SoC, while still meeting power and throughput budgets. Two chiplet examples were provided: a CPU chiplet and an IO chiplet.
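The yield argument for chiplets can be made concrete with a bit of arithmetic. The sketch below uses the standard Poisson die-yield model; the die areas and defect density are illustrative assumptions, not figures from OpenFive's presentation. The key point is that with known-good-die testing, a defect scraps only one small chiplet instead of the whole monolithic die.

```python
import math

# Poisson die-yield model: Y = exp(-A * D0), with A in cm^2 and
# D0 the defect density in defects/cm^2.

def die_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of good dies for a given area and defect density."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.2  # assumed defect density, defects/cm^2

# One monolithic 400 mm^2 SoC vs. four 100 mm^2 chiplets (assumed sizes).
soc_y = die_yield(400, D0)       # ~44.9%
chiplet_y = die_yield(100, D0)   # ~81.9%

# Silicon spent per good unit.  With known-good-die testing a bad
# chiplet is discarded alone, so a retry costs only 100 mm^2;
# a defect in the monolithic SoC scraps all 400 mm^2.
soc_cost = 400 / soc_y                 # ~890 mm^2 per good SoC
chiplet_cost = 4 * (100 / chiplet_y)   # ~489 mm^2 per good 4-chiplet set

print(f"SoC yield:      {soc_y:.1%}")
print(f"Chiplet yield:  {chiplet_y:.1%}")
print(f"Silicon per good SoC:         {soc_cost:.0f} mm^2")
print(f"Silicon per good chiplet set: {chiplet_cost:.0f} mm^2")
```

Under these assumed numbers the disaggregated design needs roughly half the silicon per good system, which is the cost lever the paragraph above describes; packaging and D2D interface overheads eat into that margin in practice.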


Ketan’s presentation showed me how OpenFive has been able to design and then deliver silicon-proven subsystems across multiple applications like Edge, AI and HPC. Chiplet usage is now ramping up, as more system companies are able to optimize their ideas using disaggregated silicon die that are tuned for the workloads of their applications. Using a vendor with a large array of IP subsystems is a competitive advantage, as IP reuse provides time-to-market benefits.