PODCASTS

AI Connectivity & Chiplet Innovation at Alphawave Semi Unveiled

Letizia Giuliano of Alphawave Semi discusses advancements in AI connectivity, chiplet designs, and the path toward open standards at the AI Hardware Summit with host Allyson Klein, Founder and Principal, TechArena.


Allyson Klein: Welcome to the arena. My name is Allyson Klein, and I'm coming to you from the AI Hardware Summit in the Bay Area. I am so delighted to be joined once again by Letizia Giuliano from Alphawave Semi. Welcome to the program, Letizia. How are you doing?

Letizia Giuliano: I’m good. Hi, Allyson. Thanks for inviting me again.

Allyson Klein: It’s always a delight to have you on the show, and I’m so glad that we caught up at AI Hardware. Why don’t we just start, for those who haven’t heard the previous episodes that you’ve been on, with just a brief introduction about Alphawave Semi and your role at the company.

Letizia Giuliano: Yes. At Alphawave, we deliver solutions for powering high-performance connectivity and compute. We do that starting from leading-edge connectivity silicon IP, so we are a leader in high-speed SerDes, including 100 gig and 200 gig, as well as PCIe Gen 7 and below.

Letizia Giuliano: We also deliver a custom silicon business that is powered by our winning IP portfolio, our partnership with Arm, and our foundry ecosystem, like TSMC for 2.5D and 3D packaging. So, all the ingredients needed to build these big AI chips and systems. At Alphawave I am responsible for product marketing and management, so I see these products really coming to life and powering our customers' systems, and I'm really excited.

Allyson Klein: Letizia, you’ve been on the show so many times before, and we’ve always talked about the innovation in chips, and you’ve got such great purview, being involved in so many industry standards.

Allyson Klein: We've talked about chiplets before, and I was thinking about chiplets a lot when I was at AI Hardware. Tell me about where we are with chiplets, with so many different silicon suppliers out there. How do you see the industry shaping up in terms of that open chiplet ecosystem that we've talked about?

Letizia Giuliano: I think 18 months ago, we were talking about when chiplets were going to come in and how we were going to do that. Now there is no more talking about it: we are designing, we are executing, and we are powering the current and next generation of products with chiplets. The thing we're still working on, and still have a roadmap for, is the open part.

Letizia Giuliano: So chiplets are being designed today, but mainly for closed systems. That is mainly due to the fast time to market that is required: we use the infrastructure of the standards, but we use it in a closed system. Now we're talking about how we're going to make that an open ecosystem in the coming years. And from our view, and our customers' and partners' point of view, it's still going to take a few years before we can talk about open chiplets.

Letizia Giuliano: But at Alphawave we are committed to accelerating that. One important thing we did this year was to execute and deliver to our customers a first multi-protocol I/O connectivity chiplet that is powered by the UCIe standard, which we talked about in the last episode. That is the Universal Chiplet Interconnect Express standard for die-to-die interconnect, and it allows our customers to have an I/O chiplet with all the goodness of the silicon-proven IP portfolio that we have for Ethernet, PCI Express, and CXL, and connect that to an NPU or another main SoC that they use today.

Letizia Giuliano: So that is really exciting for us. We have a lot of experience going into these chiplets, and they can really speed up our customers' time to market. It's been really exciting to see that come to life.
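
To make that partitioning concrete, here is a minimal sketch of an I/O chiplet attached to a main SoC over a UCIe die-to-die link; the class names, protocol list, and link parameters are illustrative assumptions rather than Alphawave product definitions.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class names, protocol list, and link parameters
# are assumptions for explanation, not Alphawave product definitions.

@dataclass
class UcieLink:
    """A die-to-die link using UCIe as the physical/adapter layer."""
    module_width: int = 64      # lanes per UCIe module (assumed)
    rate_gtps: float = 32.0     # per-lane rate in GT/s (assumed)

    def raw_bandwidth_gbps(self) -> float:
        return self.module_width * self.rate_gtps

@dataclass
class IoChiplet:
    """A multi-protocol I/O chiplet exposing standard interfaces at the package edge."""
    protocols: tuple = ("Ethernet", "PCIe Gen6", "CXL 3.0")
    d2d: UcieLink = field(default_factory=UcieLink)

@dataclass
class MainSoc:
    """The compute die (e.g., an NPU or GPU) that consumes the I/O chiplet."""
    name: str = "npu0"
    attached_io: list = field(default_factory=list)

soc = MainSoc()
soc.attached_io.append(IoChiplet())
io = soc.attached_io[0]
print(f"{soc.name} <-UCIe-> I/O chiplet: {io.protocols}, "
      f"~{io.d2d.raw_bandwidth_gbps():.0f} Gb/s raw per module")
```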

Allyson Klein: When you're working with customers on chiplet designs, I know that there's a lot of specificity in how they want you to land that chiplet, and you want to deliver repeatable designs that can be applied to multiple configurations. How do you build in the flexibility for form factors and other considerations to meet each customer's demands?

Letizia Giuliano: Yeah. That is a really good point and a good problem to solve. There is so much customization happening in the world of AI, right? We're learning that here at the hardware summit: there is no one solution, no one piece of hardware that fits all. So all our customers, and the systems we are building today, need to be tailored to the particular workload and the particular place in the data center where they sit, right?

Letizia Giuliano: That translates to the hardware we are designing: it can be a very complex system-in-package with advanced packaging, for example, or it can be tailored down for a lower-scale application where we can stay at a lower cost point with a standard package.

Letizia Giuliano: And all of this creates different form factors and different price points that we want to hit for a particular system. At Alphawave, what we have done is create a reference platform for chiplets. You start from the physical layer, which provides a lot of standardization across form factors, but it also goes further into the chiplet design and the data path that we have, from Ethernet to PCI Express to UCIe, which can be scaled up or down to satisfy different form factors, bandwidth requirements, or package types.

Letizia Giuliano: So it's like building blocks for designing your own chiplet, but it comes along with a full suite of subsystem verification, package design, and guidance on how to build a complete system.

Letizia Giuliano: It also comes along with many years of experience in what customers really need for internal connectivity in AI systems. And now we have made that tailorable for multiple types of application.

Letizia Giuliano: For example, our SerDes and our connectivity solutions have always been multi-protocol. With the same connectivity, we can satisfy Ethernet or PCI Express, or any custom protocol that needs to connect to the front-end network. So it's a good fit specifically in this world where we try to accelerate things: we want to reuse a standard, but you also want the option to have a custom protocol in it.
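
As a rough illustration of the "building blocks" idea, the sketch below scales a hypothetical chiplet configuration up or down by lane count and package type; the names, per-lane rate, and selection rule are assumptions for explanation, not the actual reference platform.

```python
from dataclasses import dataclass

# Illustrative "building blocks" sketch: package names, lane counts, and the
# selection rule are assumptions for explanation, not the actual platform.

@dataclass
class ChipletConfig:
    package: str        # e.g. "standard" or "2.5D advanced"
    serdes_lanes: int   # high-speed lanes on the beachfront
    protocols: tuple    # data paths reusing the same multi-protocol PHY

def scale_platform(target_bandwidth_gbps: float, low_cost: bool) -> ChipletConfig:
    """Pick a scaled-up or scaled-down variant of the same reference data path."""
    lanes = max(4, round(target_bandwidth_gbps / 112))   # assume 112G per lane
    package = "standard" if low_cost else "2.5D advanced"
    return ChipletConfig(package, lanes, ("Ethernet", "PCIe", "UCIe"))

print(scale_platform(800, low_cost=False))  # scale-up: more lanes, advanced package
print(scale_platform(200, low_cost=True))   # scale-down: fewer lanes, standard package
```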

Allyson Klein: When you think about where we are, and you said that an open-standard chiplet industry is still a ways off, what do you think it's going to take to get there, Letizia? Last time you talked about form factors. Are there other things that have come into play? And is it really market dynamics and folks not wanting an open industry that is gating the development of that vision on the hill, if you will?

Letizia Giuliano: Yeah, it's a very important question. At the hardware summit over these couple of days I've seen a lot of discussion, and I've talked myself with some of the key folks. There is definitely a push from some of the industry: we want to do everything fast, we're not going to wait for standards, right?

Letizia Giuliano: And then on the other side you have people who want to foster collaboration. For me, the important part is that we need to foster collaboration in a new way that can also be fast. So I see more people now coming together, for example in UCIe, to reuse the UCIe building blocks and bring other protocols on top of it.

Letizia Giuliano: So we have the Arm ecosystem today proposing CHI chip-to-chip, a die-to-die protocol standard that can reuse the UCIe die-to-die interface as its physical layer stack. That is a really good example of how we can collaborate and reuse what is already there.

Letizia Giuliano: We don't need to reinvent it; we can just attach to another ecosystem, like the Arm ecosystem. We need more examples like this, where we recognize the value of other people's work in the industry and reuse it quickly in another ecosystem. The same is happening, for example, with UALink, the Ultra Accelerator Link, and the Ultra Ethernet Consortium, where we're trying to reuse all the infrastructure we have already built for Ethernet or PCI Express and build on top of it a new protocol that can accelerate AI workloads specifically. So that is pretty exciting to see.

Allyson Klein: That's fantastic. I want to turn the table for a second to connectivity. There was a real broadening of conversation at AI Hardware around connectivity at every layer you want to think about, from cabling on a motherboard to the fabric technologies that connect an AI training cluster. Connectivity was a focus at AI Hardware, and you guys have a ton of IP in connectivity.

Allyson Klein: What are you seeing as the trend here? And can you break it out between what is needed for AI and then what is needed for general purpose compute and how do you see industry standards affecting this space?

Letizia Giuliano: Yeah. There is a suite of connectivity technologies enabling AI today, from compute to what we need for CPU-to-GPU and GPU-to-GPU connections. I think we can divide it up: if you look at the classic data center, we used to have Ethernet, right? So the front-end network still remains the purview of Ethernet, and we need to reuse all that experience and build on top of it.

Letizia Giuliano: But then, if you look at the way we need to scale AI compute inside a rack, we need to build something that can connect all the GPUs together. And, fortunately or unfortunately, PCI Express is not evolving as fast as we want. If compute is evolving 4x every year in terms of AI and machine learning, the PCI Express rates move one generation every 3-4 years, right? That paves the road either to build custom interfaces or to reuse some of the packaging and connection platforms we already know with new protocols: new protocols that are more tailored for AI, with more power efficiency and lower latency, and that really speed up the GPU-to-GPU connection.
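
A quick back-of-the-envelope on that gap, assuming 4x compute growth per year against an interconnect generation roughly every three years (illustrative numbers only):

```python
# Back-of-the-envelope only; the growth rates below are illustrative assumptions.
compute_growth_per_year = 4.0   # assumed AI compute demand growth (4x per year)
pcie_gain_per_gen = 2.0         # each PCIe generation roughly doubles per-lane rate
years_per_pcie_gen = 3.0        # rough PCIe generation cadence

years = 3.0
compute_gain = compute_growth_per_year ** years                        # 4^3 = 64x
interconnect_gain = pcie_gain_per_gen ** (years / years_per_pcie_gen)  # 2^1 = 2x
print(f"Over {years:.0f} years: compute ~{compute_gain:.0f}x, "
      f"interconnect ~{interconnect_gain:.0f}x, gap ~{compute_gain / interconnect_gain:.0f}x")
```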

Letizia Giuliano: So NVLink is a big example, right? There was a good discussion yesterday about whether NVLink is the de facto standard, and a de facto standard is now a new way to talk about standards. So I would like the industry to branch toward reusing what is available in terms of standards, and then building on top something custom for a custom workload.

Letizia Giuliano: So that is the way I think the trend is going right now. And that will help speed things up where we have this bottleneck in the network connectivity that AI workloads need.

Allyson Klein: When you think about the industry’s response to these standards, what do you think it’s going to take to coalesce around them? And is it a fait accompli or is there still work left to do?

Letizia Giuliano: There is still work to do. In general, we like to solve problems. I think having a better way to create platforms for standardization and interoperability, with these new consortia that are coming up, is really what we need right now.

Letizia Giuliano: So what the UEC is doing, and UALink is coming up as well, and all this emphasis on interoperability between the different solutions out there, is where we really have to put our focus. At Alphawave we're really promoting all of that. We are always there at the forefront of interoperability activity for any type of connectivity that we and our customers use, and we create platforms and test vehicles for doing that.

Letizia Giuliano: So all our products, including our chiplets and our proofs of concept, are being used today to create interoperability platforms on the Ethernet side, the PCI Express side, and UCIe. We're pretty excited about that. And this is what I see in all the new industry forums coming up: more and more emphasis on interoperability and ecosystem.

Allyson Klein: Now, you have released some products in this space in 2024. Can you share the new entrants?

Letizia Giuliano: Yes, this year, in June, we launched the multi-standard I/O chiplet I was talking about before, our AlphaChip 1600. That is a good example of our IP implemented in a chiplet specifically for connectivity, like PCI Express Gen 6, CXL 3.0, and multi-lane 100 gig Ethernet. Those are perfect ingredients for all those AI systems where you need to connect an NPU, or a GPU, to a CPU.

Letizia Giuliano: Another important ingredient is definitely HBM. We have launched our HBM3 running at 9.6 gig in partnership with all our memory vendor partners, and moving forward there is HBM4 coming up, which is pretty exciting.
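
For context on that 9.6 gig figure, here is a rough per-stack bandwidth estimate, assuming the standard 1024-bit HBM interface width (back-of-the-envelope, not a product spec):

```python
# Back-of-the-envelope HBM3-class bandwidth; the 1024-bit stack interface width
# is an assumption based on the standard HBM configuration, not a product spec.
pin_rate_gbps = 9.6     # per-pin data rate mentioned above (Gb/s)
interface_bits = 1024   # bits per stack interface (assumed)

bandwidth_gb_per_s = pin_rate_gbps * interface_bits / 8   # Gb/s -> GB/s
print(f"~{bandwidth_gb_per_s:.0f} GB/s per stack (~{bandwidth_gb_per_s / 1000:.1f} TB/s)")
# prints: ~1229 GB/s per stack (~1.2 TB/s)
```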

Allyson Klein: Letizia, it’s always lovely to talk to you. I only have one more question for you. Where can folks engage with you and find out more about the solutions we talked about today, as well as connect with your team to discuss potential implementations?

Letizia Giuliano: Definitely. Our website is full of resources, awavesemi.com, and you can find us on LinkedIn. Please follow us. We also have good informational webinars that are always useful, and feel free to ping us on LinkedIn.

Allyson Klein: Fantastic. Thanks so much for your time today. It’s always a pleasure.

Letizia Giuliano: Thank you. Thank you so much.