SemiWiki.com - Podcast EP305: An Overview of imec's XTCO Program with Dr. Julien Ryckaert
Episode Date: August 29, 2025
Dan is joined by Dr. Julien Ryckaert, who joined imec as a mixed-signal designer in 2000, specializing in RF transceivers, ultra-low power circuit techniques, and analog-to-digital converters. In 2010, he joined imec's process technology division in charge of design enablement for 3DIC technology. Since 2013, he oversees…
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals.
Welcome to the Semiconductor Insiders podcast series.
My guest today is Dr. Julien Ryckaert. He joined imec as a mixed-signal designer in 2000,
specializing in RF transceivers, ultra-low power circuit techniques, and analog to digital converters.
In 2010, he joined the Process Technology Division in charge of design enablement for 3DIC technology.
Since 2013, he has overseen imec's design technology co-optimization platform for advanced CMOS technology nodes.
In 2018, he became program director focusing on scaling beyond the 3-nanometer node and the 3D scaling extensions of CMOS.
Today, he is Vice President of Logic in charge of compute scaling.
Welcome to the podcast, Julien.
Thank you, Daniel. Hello, everyone. Glad to be here.
First, I'd like to ask why you chose the semiconductor industry as a career path.
Yeah, so I guess that came through the course of my studies.
I was heading for engineering studies and then among all the courses that I had there,
you know, I think the one that fascinated me the most was computer architecture.
We were in the late 90s, and it was fascinating
to me, you know, how those machines actually work and the intelligence that people build
into it. And, one thing leading to the next, you know, I ended up pursuing these electrical
engineering studies. I guess that's where it comes from, as far as I can remember. Then after that,
I joined imec after my studies, almost immediately, no, immediately actually, and I pursued my PhD
over there, as you said in your introduction. And I started as a designer because that's what I was
trained to do: mixed-signal, low power, A-to-D converters, all kinds of mixed-signal circuits.
But as I progressed through this design research, you know, I got more and more interested in
what's behind these circuits and these transistors. And that led me to evolve, or change, my
career and move, more closely inside imec, to the process technology divisions, to understand
how CMOS and other technologies are being manufactured and built.
And I saw a whole new world of how actually circuit design and technology are highly entangled.
Of course, one drives the other; you build technology to build systems in the end, right?
So of course, they are highly entangled.
And that led to a very fascinating world, what we call today design technology co-optimization.
And basically that's where I have pursued, let's say, my last 10 years of research.
That's a great story.
So let's get into it.
What is XTCO and how does it differ from the traditional technology benchmarking methodologies
such as DTCO or STCO?
Yeah, it's a very good question.
So, by the way, XTCO for now is more of an imec-coined name as far as I can tell, although DTCO
and, more generally, STCO are now accepted terminologies, let's say, or concepts in the industry. And as I mentioned just before, it comes from the fact that, for a couple of decades now, at least more than a decade, it's very difficult to identify the technology scaling directions without having in mind the end product and the end purpose. At the end of the day, you're building standard cells, SRAM, circuits, complex processor architectures.
And you cannot disconnect those two worlds, even though they probably were disconnected for many decades, because process technology people were just miniaturizing transistors and designers were making use of that.
In design technology optimization, what you do is you take a more holistic approach and you try to understand the challenges in scaling a device or a component in the technology, but you try to identify at the circuit level what matters.
At the end of the day, what matters is to build a smaller NAND gate or NOR gate or flip-flop or SRAM.
So there are certain choices that you make in pushing technology in certain corners or certain innovations
that are basically driven by the type of circuits that you do.
And this era that we call design technology co-optimization kind of coincides with the FinFET era.
So FinFET was a long period, a long succession of technology generations,
where this approach of co-optimizing the technology for a design was basically the main mechanism
that was used. But then as people kept scaling, of course, the approach needed to become more
and more holistic in the sense that in order to continue scaling, it was harder and harder,
of course, to scale the transistor, scale the components. And so you needed to zoom out even further. And that
has led to what we call today system technology co-optimization, which is nothing else than DTCO,
but from, let's say, a higher abstraction level, where you no longer consider the standard
cells themselves, but maybe a compute block, a processor, or a piece of a processor. And you try to
identify technology solutions that can scale the processor as a whole. A very good example of
system technology co-optimization technologies, or boosters, as we call them in our language, is
3D partitioning: partitioning the memory from the processor and stacking them on top of each other is a system-level innovation that allows you to scale at the system level.
Backside power delivery, for example, is another good example, and there are many of them.
And, I mean, to a certain extent today, as we speak, with the rise of two-nanometer nanosheet technology, we can say that we are entering the system technology co-optimization era.
So now to XTCO: imec is of course at the forefront of pioneering, let's say, research.
We are trying to understand what are the next scaling paths and scaling methodologies.
And what we identified is that as you try to scale systems, you try to scale not just the CMOS silicon chip anymore; your system,
if we should call it that way, is now a much more heterogeneous assembly of multiple technologies.
You stack a memory on a logic, you assemble it on an interposer, and you connect the DRAM
very closely, and you have a lot of heterogeneous components that you try to combine together.
As they come together, of course, they start to interact with each other much more, and there
are many more innovations that you can build around the heterogeneity.
So it's once again a step up, let's say, in the abstraction,
in the methodology: you abstract now the whole system and not any individual die,
and you try to optimize all the technologies that you have at hand.
And there, we're talking about technologies beyond just CMOS: memory, new memory technologies,
and all kinds of advanced packaging technologies, 3D or 2.5D, interposers and RDL.
And you try to treat the problem as a whole.
And as you treat the problem as a whole, as an entire system, then all of a sudden, there are many types of physics that are at play.
It's not just compute performance that is at play anymore; you need to plug in, you know, thermal considerations, power delivery considerations, all kinds of memory subsystem optimizations.
And in that sense, you have to optimize the system across different technologies.
And this is how we perceive this new, or this evolution, let's say, in optimizing the technology: as a cross-technology co-optimization approach.
Does that make sense?
It does.
Great explanation.
So how does XTCO help tackle the growing complexity in AI or mobile and edge compute systems?
Yeah.
So I think the fact that XTCO emerged today is also not an accident, and there's of course a relationship with the applications that drive technology scaling. For a long time, technology scaling was mostly steered by mainframes in the beginning, and then smartphones later. So it was a certain class of product that was pushing the technology, and somehow, you know, the rest of the ecosystem of products would kind of just benefit from the scaling in various ways, more or less, let's say. What is happening today is that the applications that drive the technology, that push the technology to scale and the investments in new technology nodes, are much more complex and much more heterogeneous. AI, appearing in many different forms in edge and mobile systems as much as in HPC, drives technologies in many different corners. So the interesting observation
that we made is that pursuing hardcore CMOS scaling, as we have known it in the past, which
is still miniaturizing the components and going to smaller and smaller transistors, is not necessarily
the unique answer to scaling an AI system.
AI systems are very specific.
They have specific ways of computing and manipulating the data or transferring data between
the processor and the memory.
And somehow we need to come up with a more disruptive approach in answering the demands of AI, which are enormous. We've all seen these scaling curves that AI is now exhibiting, and if we want to pursue that evolution, we need to go to much more profound modifications of the system. And so CMOS technology scaling as such is not the unique answer to that. We need, once again, a much more holistic approach to the problem.
That is the reason why XTCO is becoming the right approach to tackle these challenges moving forward.
Right.
So can you share some general research directions of imec's XTCO program?
Yeah.
So XTCO tries to scale, or takes, the system, and I mean by system the entire package, including the memory,
including the power distribution or the power management, including all the thermal considerations,
and tries to tackle the problem as a whole, right? Now it becomes a system scaling challenge
to identify where certain types of technology can influence or improve
the overall performance of the system. Now, as you try to tackle those problems,
you have to tackle them, of course, as I said, in a holistic way, and tackle
them from the perspective of a system engineer, a person that designs, defines, and architects the system.
Now, a person that architects the system doesn't think in terms of elementary technology elements, right?
They treat the problem at the system level and take into account considerations like the thermal performance of the system.
How do I manage the thermals? Many systems today are bounded by their TDP, by their thermal performance. So if, by invoking certain types of technology, you can improve the heat flow or the thermal behavior of the system, you will improve the overall performance of the system. So they think in terms of thermal.
They also think in terms of power: how do I distribute power through my system? There are many different technologies that can be involved in the power management of the system: voltage conversion throughout the system, decoupling, different smart power management schemes, and these utilize different types of technologies. So if you want to tackle the power problem, you again need to take a holistic approach across multiple technologies and understand how they combine in the overall system. So power is another vector of XTCO.
A third one, what we are all after in the end in terms of improving system performance, is density. We need compute density, more operations per square millimeter, to have a cost-effective scaling of our technology, and so provide more compute performance in the system. So density scaling is another scaling vector that people consider at the system level.
And then there's the whole memorization capability of my system, and I, on purpose, don't say memory as such, because it's not just a capacity increase; the transfer of the data from the memory to the processor is also at stake.
So it's the memory subsystem, or the capacity of our system to memorize and store data and invoke it at different moments of the computing, as the system computes.
So the memorization capability is another vector.
So today we have identified these four, power, thermal, compute, and memory, as the four XTCO system scaling vectors that we need to track as we start exploring different technology solutions.
And what role does imec's ecosystem play in shaping and validating XTCO innovations?
I mean, ecosystem is everything, right?
Yeah, so indeed.
So as you try to tackle the problem holistically, right, you need to take into account a lot of
constraints at different levels of the ecosystem. And what I mean by that is, for example, if I take
thermal as an example, you will start exploring materials that allow you to dissipate
or, yeah, to spread the heat throughout the complex system, because you want to avoid
hotspots emerging inside your system. Now, this is a materials problem, so of course you need to deal
with that problem with suppliers and material vendors. Now, as you try to integrate those materials
in the technology stack, there come the IDMs and the foundries and the manufacturers
that need to put these materials in an integrated platform, which also comes with its own set of
challenges. Then once you have assessed a thermal material in a complex heterogeneous stack,
you want to understand, in the end, how the performance improves at the system level,
knowing that the system is not a homogeneous set of activity.
It has workloads that steer the system dynamically in different ways,
and the way the heat flows in the system fluctuates with the workload.
So to understand whether a thermal solution is actually solving any thermal constraint,
you need to analyze it in the context of a workload.
So you see that there are many, many players across the stack that need to be engaged in providing these kinds of solutions,
and they have to be verified across the stack.
So it's only in a place where the research is done holistically throughout the entire chain
that you can do this kind of meaningful research.
And this is why we think imec is a very good place for conducting this kind of research,
because we have all those players in our ecosystem, we interact with all of them,
so we can identify where the bottlenecks might be throughout the chain
when we invoke a new solution or a new technology solution.
And how is the XTCO program related to the technology research program of imec?
Yeah, so the way we position our XTCO program today is to create a bridge between companies that build systems,
the large hyperscalers, for example, or the large chip design companies, fabless companies,
and the people that build the technology, right?
And so for us, what is very important is to create a dialogue between those two entities.
And what we use the XTCO framework for is to engage in this dialogue with the system
companies.
They share with us the challenges that they face at the system level, the thermal challenge
or a power distribution challenge or density, compute density scaling, right?
And then we propagate, and we do a kind of translation of those bottlenecks: we try to identify
where those bottlenecks appear in the system first, and then translate that into specific
technology research, right? We identify that a thermal bottleneck actually appears at a certain
position, or say in a certain part of the system, and then we can identify which technologies
are at play in this particular part of the system, and we can explore how a specific
technology can maybe mitigate or solve that problem. Now, you've got to
take that problem holistically, because you might solve the problem locally, right, by invoking a
new device, a new component, or a new material, but then you might have shifted the problem elsewhere
in the system, or shifted the bottleneck to another part of the system. So somehow you need to
navigate back and forth between individual technology solutions, and these can be thermal
materials, as I mentioned, new devices like a new CMOS switch, for example, or a new memory technology, a new
interconnect technology. So you have to move back and forth between these technology
solutions and how they end up materializing at the system level. And creating this dialogue requires
you to understand each other, to create a language for that dialogue. And we found that this
XTCO approach, tackling the problem in terms of these system scaling vectors, power, thermal, density,
and memory, is a very, you know, attractive or interesting way to create this particular dialogue.
Got it. Final question, Julien. How does XTCO influence the design of future systems,
especially their architectures and packaging approaches? Yeah. So I mentioned a lot of examples
where you're trying to mitigate, you know, scaling challenges and it feels like, you know,
you find a solution and then you can scale a bit more, you know, scale the thermal behavior or the
power delivery. But the moment you start to approach a problem holistically, and that's not true
just in this context; I guess it could be true in many different domains, even in biology or others.
You know, the moment you start taking a higher, more abstracted posture, let's say, toward your problem,
then you start to see the problem also more holistically.
And then some innovation that can only emerge at the system level may start to appear.
So I know I'm a bit abstract and cryptic, but let me take an example, right?
The moment you start tackling the scaling problem by invoking 3D stacking technology, for example.
You stack a memory on a logic and you offer a higher bandwidth to your memory,
maybe some more embedded capacity of your memory, and that would be true for a more conventional architecture.
But then, once you have proposed this solution, you can also start asking yourself, hold on a minute.
If I can stack my memory close to my logic, you know, that close, maybe I can architect my processor differently
to make a better usage of that capability. And then you start rethinking the architecture of your CPU.
And this is one of many examples; it's something that is today pursued by
many design companies. But there are many of those ideas that emerge only from the fact that you're
tackling the problem holistically, right? And then you are more able to disrupt the way people
conventionally build systems, because a new technology has opened up a new way to organize or architect
the overall system. So there is definitely, and we hope and we see already in our research,
you know, a lot of emerging disruption that can appear as a result of this XTCO approach.
And one of them, and maybe that should be the topic for another podcast, but one of them is what we call CMOS 2.0.
So it's a new approach to CMOS scaling that we've already popularized in other, you know, fora, in other media, and you can look it up on the internet.
But CMOS 2.0 is actually built on this XTCO approach and proposes another approach to scaling by recognizing the heterogeneity of systems and taking a more holistic, more cross-technology co-optimization approach to the scaling of future systems.
That sounds great. CMOS 2.0. We'll definitely have you back for that. Thank you for your time, Julien. Excellent discussion. And we will talk to you again soon.
Thank you, Dan.
That concludes our podcast.
Thank you all for listening and have a great day.