SemiWiki.com - Podcast EP276: How Alphawave Semi is Fueling the Next Generation of AI Systems with Letizia Giuliano
Episode Date: February 28, 2025
Dan is joined by Letizia Giuliano, Vice President of Product Marketing and Management at Alphawave Semi. She specializes in architecting cutting-edge solutions for high-speed connectivity and chiplet design architecture.
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals.
Welcome to the Semiconductor Insiders podcast series.
My guest today is Letizia Giuliano, Vice President of Product Marketing and Management at Alphawave Semi.
She specializes in architecting cutting edge solutions
for high speed connectivity and chiplet design architecture.
Prior to her role at Alphawave Semi,
Letizia held the position of Product Line Manager at Intel,
where she facilitated the integration of complex IP
for external customers, as well as within Intel's graphics
and CPU products.
With a background in electrical engineering,
Letizia has contributed significantly to her field
through technical papers, presentations at conferences
and her involvement in defining industry standards
like OpenHBI and UCIe.
Welcome to the podcast, Letizia.
Hello, thank you.
Thank you for inviting me.
First, can you tell us what brought you
to the semiconductor industry?
Do you have an interesting story you can share?
Oh, yes, I guess it is interesting.
Well, you'll tell me.
I guess I started when I was very little, in my house. My mom was a computer science teacher, so we always had computers, the newest and the greatest, lying around. So I started really early, playing around with motherboards and programming in C, C++, all that stuff. I've always been interested in understanding what was behind all of that. So I started my degree in electrical engineering, focused on semiconductors, back when I was in Milan, and that's where everything started. I was really interested in how those things work, how they get manufactured, how we make them better and smaller. So it was a passion from the beginning, and I've been excited to continue that passion ever since. And now it's more interesting than ever. I think these are very special, interesting times we're living in, and I'm very excited to be part of this right now.
That is a great story. Yeah, this is an exciting time. So what brought you to Alphawave?
I've been working with this executive team for a long time. I was at Intel before Alphawave, developing and architecting high-speed connectivity and other types of interfaces, both for Intel's own products and for other products. So I had worked with the executive team for a long time, and I was very happy and excited to join Alphawave to start scaling this architecture and these solutions for more complex systems. So I was very happy about that. And at Alphawave, we're growing really fast. Now we are a semiconductor company working on connectivity and compute solutions that are very important for the next generation of AI systems. So again, it's very exciting, and I'm very happy to be part of all this.
Yeah, we've been working with Alphawave for five years now. It's such a great company. You know, Alphawave is developing a complete platform of connectivity and compute for the next generation of AI systems. So what are the main challenges today in supporting these next-generation systems?
So things are moving very fast. Customers require solutions that can improve, as you know, time to market and also reduce the risk of bringing these products to market. So the main challenge today is really to satisfy this high demand for complex technology with a very low-risk solution. What we're doing today at Alphawave is definitely leveraging our core expertise, which lies in connectivity and developing high-speed interfaces, along with everything that goes around it in terms of protocols and platform-level solutions, and trying to make that available to customers in a chiplet platform or in a system form that they can integrate easily. But yes, it is definitely about how we help our customers bring this complex technology to market faster. I think the old way of simply integrating building blocks is not there anymore. You need to find inventive ways of composing these building blocks that help our customers focus on what they do best, and we provide our added value in the form of something that is already silicon proven.
You know, there's an ongoing battle on connectivity standards for scaling up and scaling out, driving custom solutions today, and new standards are trying to establish themselves as valuable solutions. Where do you see the next systems heading?
Yeah, so when we look inside our data centers today, if we look at the traditional data centers we were building, we see Ethernet as the main standard and PCI Express, of course, for other types of connectivity. But what is happening today is that, with AI systems, Ethernet and PCI Express are not enough. We have more requirements in terms of scaling up and scaling out that require innovative solutions, specifically tackling latency, power, and optimization of the workloads that need to run on these AI systems. So we see a lot of initiatives in the industry to bring other standards to the table that can address all these challenges. For example, UALink and UEC are two main ones, for scaling up and scaling out respectively. But still, they need to be developed really fast, because as I said before, time to market is key for these systems. We need to deploy these solutions in a different way than before. So we still see introductions and competing battles, right, with custom interfaces. So it's very important to help the industry and guide our partners and customers to make the right decisions. It's going to be very powerful for these systems to really select the right interface and the right overall solution for scaling up and scaling out.
And what are the main challenges
when integrating on-package die-to-die interfaces?
And how is Alphawave helping users or your customers
make the right decisions and selections?
Yeah, so die-to-die interfaces are a whole other level, right? We were talking about scaling up and scaling out, so how we connect things within the rack and outside the rack. Now, what is happening within the package is another realm of challenges. It's no surprise that we cannot fit one of these big accelerators on one reticle anymore. So we start breaking it down into building blocks, into chiplets, and they need to be integrated in a way that provides value to the total cost of the solution. So the die-to-die interface is the key ingredient; it needs to be optimal, very lightweight, and designed for that type of package. At Alphawave, we are not only providing the right building blocks, an interface that can scale with the type of package, standard or advanced, embracing UCIe as a platform and standard that gives you that flexibility with different form factors for standard and advanced packages; we also build an overall solution around that. That can be designing a package that is optimal for that interface, or providing a chiplet that is already UCIe-enabled, with a data interface that can provide your extension to IO interfaces like Ethernet, PCI Express, or UALink and UEC, like we mentioned before, or a compute companion chiplet. In that case, we are working closely with ARM to bring to market a die-to-die system that is compliant with an ARM platform, and also a compute chiplet that can be used as a companion chip to the AI accelerator. So it's not just about the building blocks, the single die-to-die interface; it's all about the overall solution that goes around it.
Yeah, and Alphawave announced the introduction of
an IO chiplet for scaling up and scaling out connectivity.
Can you tell us a little bit more about how customers are using it,
and how do you see this evolving?
Yes. As I said before, beyond just die-to-die and guiding the customer, it's very important to also bring what we do best in terms of connectivity in the form of a chiplet. So our customers can focus on their part, and they can use our new chiplet, already configured for scaling up, for example, or scaling out, to connect to their system. It's a whole new paradigm, right? Offering chiplets is a new business model, a new working model. We do have examples in the industry; if we think about it, HBM was one of the first chiplets out there. So now we are transferring that to IO connectivity, how we provide a known good die for an IO chiplet, and also to compute, and we can talk about that as well. Our solution can help accelerate our customers both by using our off-the-shelf IO chiplet or, because we know there is no one solution that fits all, by using a derivative of this IO chiplet for a custom connection. It can be a different type of form factor, for example a different type of package, or it can be a different bandwidth requirement for your scale-up and scale-out needs. So it's interesting to see how many ideas are coming out of this, right? Every day, in every discussion we have in partnership with our customers, there is a new way of connecting all these building blocks, and that makes everything so exciting. So it's very interesting to see how the market is reacting to this.
And what other types of interfaces are needed
in the AI space that will revolutionize the way we connect compute to IO or, more importantly, compute to memory?
Yeah, so if we focus within the package, as we discussed before, and we want to break our chiplet system down into a compute chiplet and an IO chiplet, as we talked about, UCIe is a good platform to connect a compute die to an IO chiplet and give you all the flexibility to address that in a standard package or an advanced package. Another innovative approach we are seeing really take off today, which gives the SoC architect and ourselves a good way to compose building blocks, is the compute-to-memory connection. We know that traditionally HBM is one of the memory interfaces and memory technologies that is very optimal for the AI space, and we're seeing a more hybrid way to use HBM coming up. For example, if you think about your compute die, you have UCIe die-to-die interfaces all over the shoreline. UCIe is used for compute-to-compute and for compute-to-IO. So we're now seeing a lot of traction to use that same die-to-die interface also to connect to a memory chiplet. Let's reuse that shoreline, that die-to-die, also to connect to maybe an HBM stack. That is a good alternative and a new, innovative way to connect to memory. And it's very exciting to see how all these tools, as I said before, die-to-die and packaging technology, give us and the system designers a new way to build new systems. So it's very interesting.
Right. You know, the chiplet ecosystem is still evolving and we know that the majority
of design starts are not using chiplets yet. So how do you see the industry and ecosystem
helping to foster and accelerate chiplet adoption?
Yeah, so definitely you go chiplet if you can afford it and if you have to, because chiplet systems are not easy to integrate. But what we are seeing, on the other side, is that these SoCs are now reaching the reticle size limit, so they have to move to chiplets. I think in the last couple of years we are seeing more and more initiatives from the industry and the wider ecosystem to help accelerate chiplet adoption, and I would like to mention a few of them. Definitely we have all our partners, like TSMC and other foundries, trying to enable more and more technology around the package that can accelerate the chiplet ecosystem, and that comes along with all the tooling you need for it. For example, 3Dblox and the 3DFabric Alliance from TSMC are good initiatives to bring everything together, from tools to IPs to value chain aggregator companies, to try to make all this possible. But we're also seeing other important initiatives, like the one from ARM with the Chiplet System Architecture, to bridge the gap on architecting chiplets together: how we connect those chiplets, what interfaces we need, and what the industry standards are for connecting all these chiplets. So there is more and more that we're seeing. It's evolving very fast, and people like us at Alphawave need to take these pioneering steps to embrace chiplet design and make sure we pave the road for whoever is coming next. So it's very important to have these first pioneers, as we said, chiplet pioneers, coming first, and then we're going to see more and more adoption from future generations of SoCs, maybe in different types of applications that are not just AI but can expand to different types of markets and applications.
Yeah, you know, as you said, it's an exciting time in the semiconductor industry, and chiplets are really part of that excitement.
Yes, it definitely gives us so many more options to break the barriers that we have in designing systems. And it's a very powerful tool now. So, yep.
Great.
Thank you for your time, Letizia.
And I look forward to seeing you again at the next conference.
I see you at conferences quite frequently.
Alphawave is very active in the ecosystem.
Yes. And thanks for inviting me today, Daniel. And yes, see you around.
That concludes our podcast. Thank you all for listening and have a great day.