© COPYRIGHT 2024 ALPHAWAVE SEMI
As we stand on the brink of a new era in artificial intelligence (AI), we’re confronted with the ‘Memory Wall’—a critical bottleneck limiting the performance of AI systems. This webinar examines why High Bandwidth Memory (HBM) has become essential in AI and high-performance computing (HPC) systems, where the key performance indicator is memory bandwidth per watt. You will understand why HBM has emerged as a top choice, offering the highest bandwidth, optimal area footprint, and superior power efficiency.
We begin by exploring the burgeoning AI landscape, where data throughput and speed are paramount. We’ll examine how diverse AI workloads tax memory systems in unique ways, then look at how HBM is currently used in industry-leading processors and GPU-powered systems. Next, you’ll learn how a new generation of high-performance, HBM-enabled custom SoCs, together with connectivity chiplets, provides hyperscalers with an unprecedented level of flexibility and scalability for their AI-enabled systems.
We will take you through the critical components and challenges of enabling a 9G+ HBM system, including the physical layer (PHY), controller, interposer, and packaging techniques, which together form the bedrock of this advanced memory system. We’ll also share our experience deploying complete HBM subsystem solutions from Alphawave Semi, which integrate the HBM PHY with a versatile, JEDEC-compliant, highly configurable HBM controller that can be fine-tuned to maximize efficiency for application-specific AI and high-performance computing workloads.
Speaker:
Archana Cheruliyil
Sr. Staff Engineer, IP Solutions & Marketing
Alphawave Semi