PAM4 and Coherent-lite Interconnect for Hyperscale Campuses and AI Data Centers
The explosion of data processing demand, driven by the ever-increasing size and complexity of AI models, is creating significant challenges in how data is moved between processing units (e.g., GPUs and AI accelerators), between processing units and memory, and to the external world.
Additionally, the partitioning of AI models into smaller, more targeted models further increases the need for high-speed, low-latency interconnects, both within a single AI node and between nodes across data center networks.
This trend is pushing data center architectures to scale at an unprecedented pace. Interconnects have become a critical enabler of this growth, ensuring low-latency, high-bandwidth communication between systems in which tens, hundreds, or even thousands of GPUs must operate cohesively within a single node. For example, data presented by Meta at OCP shows that over the past 25 years, processing capability (measured in FLOPS) has scaled at more than double the rate of interconnect bandwidth: 3.1X every two years versus 1.4X over the same period (Figure 1), underscoring the critical need to improve interconnect bandwidth.

Figure 1: Meta’s data highlights that peak hardware FLOPS have scaled at more than twice the rate of interconnect bandwidth – presented at OCP 2022.
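To see how quickly these rates diverge, the short Python sketch below compounds the two growth factors quoted above over the 25-year window. The 3.1X and 1.4X per-two-year figures come from the text; everything else is straightforward arithmetic.

```python
# Back-of-the-envelope check of the Figure 1 gap, using only the growth
# rates quoted above: FLOPS ~3.1x and interconnect bandwidth ~1.4x per
# two-year period, compounded over 25 years.

YEARS = 25
PERIOD = 2  # growth factors are quoted per two-year period

flops_growth = 3.1 ** (YEARS / PERIOD)
bw_growth = 1.4 ** (YEARS / PERIOD)

print(f"FLOPS growth over {YEARS} years:     {flops_growth:,.0f}x")   # ~1.4 million x
print(f"bandwidth growth over {YEARS} years: {bw_growth:,.0f}x")      # ~67x
print(f"compute-to-bandwidth gap:            {flops_growth / bw_growth:,.0f}x")
```

Under these rates, compute grows roughly a million-fold while interconnect bandwidth grows less than a hundred-fold, a gap of about four orders of magnitude.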
This trend has led to the definition of new standards and the creation of innovative approaches to improve the efficiency of back-end processing interconnects, specifically those connecting matrices of accelerators and processing units. This is essential to keep pace with the demands of rapidly evolving AI models.
Scale-up focuses on interconnecting GPUs and CPUs within a single AI computing node, optimizing for low latency, high power efficiency, and dense connectivity. In contrast, scale-out addresses how to move processed data efficiently from the node to external systems.
These networks build tightly integrated matrices of GPUs and CPUs and enable high-bandwidth, low-latency communication across numerous processing units. While NVIDIA’s NVLink has long served this purpose, new standards such as UALink and Scale-Up Ethernet are emerging to define next-generation requirements for intra-node and node-to-node interconnects.
On the scale-out side, traditional Ethernet (especially in its current RDMA implementations) is limited, particularly by tail latency, which makes it less effective for the performance needs of AI workloads. To address this, the Ultra Ethernet Consortium is leading efforts to evolve Ethernet to better support AI-driven requirements across Network Interface Cards (NICs) and scale-out switching architectures.
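A minimal sketch (illustrative only, not a network model) helps show why tail latency, rather than average latency, gates AI workloads: a synchronized collective finishes only when the slowest of its N parallel flows finishes. The lognormal latency parameters and flow count below are assumptions chosen for illustration.

```python
# Why tail latency dominates AI collectives: step time tracks the slowest
# of N parallel flows, i.e. the per-flow tail, not the per-flow average.
import random

random.seed(0)
N_FLOWS = 1024    # flows per collective step (assumed)
N_STEPS = 2000    # simulated steps

per_flow = []
per_step = []
for _ in range(N_STEPS):
    # Heavy-tailed per-flow completion time in microseconds (assumed lognormal).
    flows = [random.lognormvariate(2.0, 0.5) for _ in range(N_FLOWS)]
    per_flow.extend(flows[:4])       # subsample for per-flow statistics
    per_step.append(max(flows))      # the step is gated by the slowest flow

per_flow.sort()
per_step.sort()
median = lambda xs: xs[len(xs) // 2]
print(f"median per-flow latency: {median(per_flow):5.1f} us")
print(f"median step latency:     {median(per_step):5.1f} us")  # several times higher
```

Even with a modest per-flow tail, the median step latency lands several times above the median flow latency, which is why taming tail latency is central to the Ultra Ethernet effort.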
Figure 2 illustrates the scale-up and scale-out architecture models optimized for AI-centric data centers.

Figure 2: Scale-up and scale-out architectures
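As a rough way to read Figure 2, the sketch below models the two tiers with assumed sizes and bandwidths. The 8-GPU node, 128-node fabric, and per-GPU bandwidth figures are illustrative values, not taken from any specific product.

```python
# Minimal model of the two-tier topology in Figure 2.
# All sizes and bandwidth figures are assumptions chosen for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    gpus: int               # GPUs in the scale-up domain
    scale_up_gbps: float    # per-GPU bandwidth inside the node
    scale_out_gbps: float   # per-GPU bandwidth leaving the node (via NICs)

node = Node(gpus=8, scale_up_gbps=7200, scale_out_gbps=400)
fabric_nodes = 128          # nodes connected by the scale-out fabric (assumed)

print(f"total GPUs: {fabric_nodes * node.gpus}")
print(f"scale-up vs. scale-out bandwidth per GPU: "
      f"{node.scale_up_gbps / node.scale_out_gbps:.0f}:1")
```

The order-of-magnitude bandwidth asymmetry between the two tiers is what makes scale-up and scale-out distinct design problems, each with its own interconnect requirements.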
Traditionally, copper has been the dominant medium for interconnects inside data center racks due to its cost-effectiveness, flexibility, and reliability. However, as bandwidth and speed requirements continue to rise, the physical limitations of copper (loss and signal integrity challenges), especially for rack-to-rack and even intra-rack connections, are becoming more apparent. This has led to increased adoption of active components such as Active Electrical Cables (AECs) and Active Optical Cables (AOCs), which integrate retimers to extend reach and improve signal integrity.
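A simple loss model makes the copper-reach problem concrete. In the sketch below, the loss budget and cable coefficient are assumed values, and the sqrt-frequency skin-effect model is a deliberate simplification (it ignores dielectric, connector, and host losses); the point is only the trend as lane rates rise and how a mid-span retimer, as in an AEC, restores reach.

```python
# Loss-limited reach for passive twinax vs. an AEC with one retimer.
# All numbers are illustrative assumptions.
import math

LOSS_BUDGET_DB = 28.0   # assumed end-to-end electrical loss budget
CABLE_DB_PER_M = 1.4    # assumed twinax loss at 1 GHz, scaling with sqrt(f)

def passive_reach_m(lane_gbps: float) -> float:
    """Reach at which cable loss exhausts the budget (PAM4 lane)."""
    nyquist_ghz = lane_gbps / 4.0   # PAM4: baud = bitrate/2, Nyquist = baud/2
    loss_per_m = CABLE_DB_PER_M * math.sqrt(nyquist_ghz)
    return LOSS_BUDGET_DB / loss_per_m

for rate in (50, 100, 200):
    reach = passive_reach_m(rate)
    # A mid-span retimer resets the budget, roughly doubling usable reach.
    print(f"{rate:3d}G/lane: ~{reach:3.1f} m passive, ~{2 * reach:4.1f} m with one retimer")
```

Each doubling of the lane rate shrinks passive reach by roughly 1/sqrt(2) under this model, which is why retimed cables become attractive even inside the rack.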
Furthermore, as data rates increase, the reach of traditional Intensity Modulation Direct Detection (IMDD) optical modules is becoming insufficient for long-reach (LR) data center campus applications. Coherent-lite modulation schemes are emerging as a compelling alternative to PAM4, offering longer reach and higher optical link budgets.
Coherent-lite modules are designed to consume significantly less power than full Coherent solutions while remaining cost-competitive with IMDD, making them ideal for campus applications. While current 800G Coherent-lite standards target reaches of 2 to 10 km, the reach limitations of IMDD at next-generation data rates mean that Coherent-lite solutions will, for the first time, also be used for links inside the data center.
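The reach argument reduces to a first-order link budget. In the sketch below, every figure (fiber attenuation, connector loss, penalties, and the two module budgets) is an assumed, illustrative value rather than a spec number; it shows only that a few extra dB of budget, which coherent detection can provide, translates directly into kilometers of reach.

```python
# First-order loss-limited reach from an optical link budget.
# All dB figures are illustrative assumptions, not standard values.
FIBER_DB_PER_KM = 0.35      # assumed O-band single-mode fiber attenuation
CONNECTOR_DB = 1.0          # assumed total connector/splice loss
PENALTIES_DB = 1.0          # assumed dispersion and implementation penalties

def reach_km(link_budget_db: float) -> float:
    """Distance at which fiber loss consumes the remaining budget."""
    usable_db = link_budget_db - CONNECTOR_DB - PENALTIES_DB
    return max(usable_db, 0.0) / FIBER_DB_PER_KM

print(f"IMDD-class budget (assumed 4 dB):          ~{reach_km(4.0):4.1f} km")
print(f"Coherent-lite-class budget (assumed 6 dB): ~{reach_km(6.0):4.1f} km")
```

Under these assumed numbers, a 2 dB budget improvement roughly doubles the loss-limited reach, consistent with the 2 to 10 km span that Coherent-lite targets.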
In March 2025, Alphawave Semi announced a portfolio of PAM4 and Coherent-lite DSP-based products, purpose-built to address the needs of the hyperscale data center and AI interconnect markets. Alphawave Semi is among a select group of companies capable of offering both PAM4 and Coherent-lite DSPs, designed in leading-edge process nodes and backed by a strong legacy in high-speed SerDes.
The company has introduced three product families:
- Cu-Wave™ PAM4 DSPs for Active Electrical Cables (AECs)
- O-Wave™ PAM4 DSPs for optical retimers and gearbox transceivers
- Co-Wave™ Coherent-lite DSPs for optical transceivers
These product lines are designed to support the scaling requirements of next-generation data centers, enabling the high-throughput, low-power interconnects that are essential for AI and hyperscale workloads in both scale-up and scale-out architectures.

Figure 3: Alphawave Semi owns all critical connectivity assets behind the Cu-Wave, O-Wave and Co-Wave portfolio of 800G/1.6T PAM4 and Coherent-lite DSPs
To find out more about Alphawave Semi’s portfolio of 800G / 1.6T PAM4 and Coherent-lite DSPs for AI data center and hyperscale campus applications, please visit the DSP page.
Product briefs are also available here:
- Cu-Wave brief: https://awavesemi.com/cu-wave-product-brief/
- O-Wave brief: https://awavesemi.com/o-wave-product-brief/
- Co-Wave brief: https://awavesemi.com/wp-content/uploads/2025/03/aw400-o_product_brief_1_0_5.pdf