The UCIe Chiplet Interconnect Standard
The Universal Chiplet Interconnect Express (UCIe) is a standard developed to enable seamless interconnection between chiplets – small, modular SoC (system on chip) building blocks – even when developed by different vendors, implemented across different process nodes and manufactured in different foundries.
The semiconductor industry’s migration from monolithic SoCs to chiplet-based architectures for advanced ICs is enabling higher yields as well as more cost-effective development, with multiple process nodes combined in a single package. The chiplet model also allows functional blocks from multiple vendors to be used, reusing existing semiconductor IP to reduce SoC development time significantly.
Communication between these blocks is managed through a die-to-die interconnect protocol. The open UCIe standard, championed by a broad cross-section of the industry, has emerged as a leading choice.
The UCIe Consortium launched in March 2022, and the standard was co-developed by AMD, Arm, ASE Group, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung, and TSMC.
The specification covers the electrical, physical and protocol layers in order to facilitate high-speed, low-latency communication across the chiplets on the SoC. Since the launch of the first specification, the UCIe Consortium has announced two revisions of the standard: version 1.1 in August 2023 and version 2.0 in August 2024.
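To make that layering concrete: the consortium’s public overview materials describe a Protocol layer (carrying PCIe, CXL or raw streaming traffic), a Die-to-Die Adapter (link management and CRC/retry), and a Physical layer (the electrical signaling itself). The sketch below is purely illustrative scaffolding for that stack, not an API defined by the specification; the field names are invented here, and the lane count reflects the standard-package module width.

```python
from dataclasses import dataclass
from enum import Enum

class ProtocolMapping(Enum):
    """Traffic types the UCIe Protocol layer can carry."""
    PCIE = "PCIe"
    CXL = "CXL"
    STREAMING = "streaming"  # raw / user-defined protocols

@dataclass
class UcieStack:
    """Toy three-layer UCIe stack; field names are illustrative, not spec-defined."""
    protocol: ProtocolMapping      # Protocol layer: maps traffic onto the link
    crc_retry: bool = True         # Die-to-Die Adapter: link management, CRC/retry
    mainband_lanes: int = 16       # Physical layer: 16 lanes per standard-package module

link = UcieStack(protocol=ProtocolMapping.CXL)
print(link)
```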
The Shift to a Chiplet Ecosystem
The migration from monolithic SoCs to chiplet-based architectures is driven by the increasing complexity and scaling limitations of traditional semiconductor manufacturing. As chip designs become more advanced, fabricating large, monolithic SoCs becomes costly, challenging, and less efficient.
Data from Meta suggests that connectivity is among the most significant factors limiting AI acceleration: 38% of the time data spends in a data center is wasted sitting in networks. Increased interconnectivity is therefore vital, but monolithic dies have already hit the reticle limit (858 mm²) and cannot scale larger, which removes the ability to add further I/O along the die’s shoreline. By partitioning the SoC into smaller, modular chiplets, designers can mix and match specialized components, improve yield and reduce cost for large SoCs, and increase breakout routing, leaving more edge space for connectivity.
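The shoreline benefit can be quantified with back-of-the-envelope arithmetic. Here is a minimal sketch, assuming square dies and a fixed total silicon area; the 800 mm² figure and the four-way split are illustrative, and the model ignores edge length consumed by the die-to-die links themselves.

```python
import math

def shoreline_mm(total_area_mm2: float, num_dies: int = 1) -> float:
    """Total edge length of `num_dies` equal square dies covering `total_area_mm2`.

    A square die of area A has perimeter 4*sqrt(A), so splitting a fixed
    area into N dies grows the total shoreline by a factor of sqrt(N).
    """
    die_area = total_area_mm2 / num_dies
    return num_dies * 4 * math.sqrt(die_area)

# Illustrative numbers: one 800 mm² die vs. the same silicon as 4 chiplets.
mono = shoreline_mm(800)        # ~113 mm of edge available for I/O
quad = shoreline_mm(800, 4)     # ~226 mm: sqrt(4) = 2x the shoreline
print(f"monolithic: {mono:.0f} mm, 4 chiplets: {quad:.0f} mm")
```

Splitting a fixed area into N square dies multiplies the total shoreline by √N, which is where the extra room for breakout routing comes from.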
Chiplets offer a flexible, scalable approach in which different components, such as CPU cores, GPU accelerators, AI engines, memory controllers, and I/O interfaces, are fabricated separately and interconnected via high-speed die-to-die links like UCIe. This architecture is especially prevalent in high-performance computing (HPC), AI accelerators, and data center processors. For example, AMD’s EPYC processors have embraced chiplet designs, with die sizes ranging from roughly 50 mm² to 100 mm² or more depending on function, while each CPU tile in Intel’s Sapphire Rapids data center processors is approximately 400 mm².
Once combined, these chiplets can extend well beyond 1000 mm² while preserving the monolithic qualities of the design. For example, Intel combines four CPU tiles in its Sapphire Rapids processors to create a total area of roughly 1600 mm².
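The yield side of the argument can be made just as concrete. Below is a minimal sketch under a simple Poisson yield model, Y = e^(-D·A) with D the defect density, together with the assumption of known-good-die testing; the defect density and die areas are illustrative numbers, not vendor data.

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Probability that a die of the given area has zero fatal defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

D0 = 0.001       # illustrative defect density: 0.1 defects per cm²
TOTAL = 800.0    # mm² of logic, built monolithically vs. as 4 chiplets

mono_yield = poisson_yield(TOTAL, D0)         # ~0.45: a defect scraps the whole die
chiplet_yield = poisson_yield(TOTAL / 4, D0)  # ~0.82 per 200 mm² chiplet

# With known-good-die testing, a defect scraps only the one bad chiplet,
# so the expected fraction of usable silicon equals the per-die yield:
print(f"good-silicon fraction, monolithic: {mono_yield:.2f}")
print(f"good-silicon fraction, chiplets:   {chiplet_yield:.2f}")
```

Note that under a pure Poisson model the chance that all four chiplets are good equals the monolithic yield (0.82⁴ ≈ 0.45); the economic win comes from scrapping one 200 mm² chiplet per defect instead of the whole 800 mm² die, and from being able to assemble total areas beyond the reticle limit at the package level.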
Chiplet-based architectures allow manufacturers to optimize performance while using different process nodes for specific tasks, enhancing flexibility and enabling more powerful systems without the drawbacks of scaling large monolithic dies. However, the success of these architectures relies heavily on robust die-to-die interconnect standards, which ensure seamless communication between the chiplets and maintain overall system performance.