As we stand on the brink of a new era in artificial intelligence (AI), we’re confronted with the ‘Memory Wall’—a critical bottleneck limiting the performance of AI systems. This webinar examines why High Bandwidth Memory (HBM) has become essential to AI and high-performance computing (HPC) systems, where the key performance indicator is Memory Bandwidth per Watt. You will learn why HBM has emerged as a top choice, offering the highest bandwidth, an optimal area footprint, and superior power efficiency.
We begin by exploring the burgeoning AI landscape, where data throughput and speed are paramount, and examine how diverse AI workloads tax memory systems in unique ways. We’ll then look at how HBM is currently used in industry-leading processors and GPU-powered systems. Next, you’ll learn how a new generation of high-performance HBM-enabled custom SoCs, together with connectivity chiplets, provides hyperscalers with an unprecedented level of flexibility and scalability for their AI-enabled systems.
We will take you through the critical components and challenges of enabling a 9G+ HBM system—including the physical layer (PHY), controller, interposer, and packaging techniques—which together form the bedrock of this advanced memory system. We’ll also share our experience deploying complete HBM subsystem solutions from Alphawave Semi, which integrate the HBM PHY with a versatile, JEDEC-compliant, highly configurable HBM controller that can be fine-tuned to maximize efficiency for customers’ application-specific AI and HPC workloads.
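To put the “9G+” data rate and the Memory Bandwidth per Watt KPI in concrete terms, here is a back-of-the-envelope sketch. The 1024-bit per-stack interface width is the JEDEC HBM interface organization; the 9.6 Gb/s per-pin rate and the 10 W stack power budget are illustrative assumptions, not vendor figures.

```python
# Back-of-the-envelope peak bandwidth for a single HBM stack.
PINS = 1024           # data bits per stack (JEDEC HBM interface width)
DATA_RATE_GBPS = 9.6  # per-pin data rate in Gb/s (assumed "9G+" speed bin)

peak_gb_per_s = PINS * DATA_RATE_GBPS / 8  # convert bits/s to bytes/s
print(f"Peak per-stack bandwidth: {peak_gb_per_s:.1f} GB/s")

# Memory Bandwidth per Watt, the KPI highlighted above, for a
# hypothetical per-stack power budget (illustrative only):
stack_power_w = 10.0
print(f"Bandwidth per Watt: {peak_gb_per_s / stack_power_w:.1f} GB/s per W")
```

At these assumed figures, a single stack delivers roughly 1.2 TB/s of peak bandwidth, which is why systems combine several stacks on an interposer to reach multi-TB/s aggregate bandwidth.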
Date: Thursday, February 15th
Time: 8 AM – 9 AM PST | 11 AM – 12 PM EST
Can’t Make It to the Live Session?
If you are unable to attend the live session, don’t worry! You can still register to receive a copy of the recording after the session. We’ll make sure you have access to all the valuable insights and information presented during the webinar.
For any questions, please email email@example.com