SemiWiki.com - Podcast EP306: The Challenges of Advanced AI Data Center Design with Josue Navarro
Episode Date: September 5, 2025. Dan is joined by Josue Navarro, product marketing engineer for Microchip's dsPIC business unit. He began his career as a process engineer at Intel and has since transitioned into product marketing with Microchip Technology, where he supports customers developing system designs utilizing Microchip's digital signal controllers.
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals.
Welcome to the Semiconductor Insiders podcast series.
My guest today is Josue Navarro, product marketing engineer for Microchip's dsPIC business unit.
He began his career as a process engineer at Intel and has since transitioned into product
marketing with Microchip Technology, where he supports customers developing system designs utilizing Microchip's digital signal controllers. Welcome to the podcast, Josue.
Hey, Daniel. Thanks for having me today, and I really appreciate your time.
Yeah, so first I'd like to ask, what brought you to Microchip Technology?
Yeah, as you mentioned in the introduction, I started my
career early in process engineering at Intel. And I found, you know, with those first
jobs, that either you fall in love or it's something you just don't want to be doing all the
time. So I quickly segued after my brief time at Intel to more of a front-end, customer-facing experience.
And that's led me to Microchip, where I now get to deal with a very cutting-edge
portfolio of technology and devices that I get to play around with and get to talk to customers
about as well. Yeah, I agree. I really fell in love with customer-facing jobs when I started
out as well. So to start, can you give us an overview of how artificial intelligence is transforming
data centers and what new challenges this rapid growth is creating, particularly around energy
consumption? Yeah, sure. You know, AI is just the talk of the town these days, and it's
hard to escape from it, even in daily life. But AI is really
fundamentally changing the landscape of data centers by driving significant
growth in a lot of those computational workloads.
So every question you're sending off to ChatGPT, even just "what's the weather like today?",
is making a bigger impact than you think.
So this adoption of AI, especially with these large language models and deep learning, has led
to this surge in power consumption.
So AI workloads can actually double the energy used compared to traditional tasks,
and they now account for around 10 to 20 percent of total data center energy consumption
globally.
So this is a rapid growth that is actually forcing data centers to rethink how they deliver
power, manage costs, and also try their best to minimize environmental impact.
So as global data center energy demands rise by 10 to 50 percent annually, Microchip as well
is doing its best to be very aware of the impact we're leaving on the world.
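Those growth rates compound quickly. As a rough sketch of what 10 to 50 percent annual growth implies for doubling times (purely illustrative arithmetic, not a forecast):

```python
import math

def years_to_double(annual_growth: float) -> float:
    """Years for demand to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# At the low and high ends of the 10-50% range mentioned above:
print(f"{years_to_double(0.10):.1f} years at 10% growth")  # ~7.3 years
print(f"{years_to_double(0.50):.1f} years at 50% growth")  # ~1.7 years
```

In other words, at the high end of that range, data center energy demand would double roughly every two years.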
Yeah, you know, I agree completely.
You know, with AI workloads driving up power demands, what are some of the key differences
between traditional data center infrastructure and the next generation of AI-focused data centers
that we're going to be seeing?
So actually, next-generation AI data centers differ from traditional ones in several
key ways.
One being, while conventional centers were built around CPU-centric architectures and more
moderate power densities, around 5 to 15 kilowatts per rack,
AI-focused centers rely heavily on high-performance GPUs and specialized accelerators.
So these racks, you know, often exceed 50 kilowatts.
That's more than a threefold increase.
So this shift, of course, requires more advanced power distribution, high-efficiency
voltage regulation, and it introduces more innovative cooling solutions.
So over the past decade, these AI data centers have also undergone significant architectural
transformation as well, again moving from the traditional CPU-centric to GPU-centric,
and this shift has been driven by the unique computational demands of AI workloads.
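The rack-level arithmetic behind that shift can be sketched with a back-of-the-envelope comparison (the figures below are just the ranges from the conversation, and the 100-rack hall is a hypothetical):

```python
# Back-of-the-envelope rack power comparison, using the ranges
# mentioned above (illustrative only, not measured data).
TRADITIONAL_KW_PER_RACK = 15   # upper end of the 5-15 kW range
AI_KW_PER_RACK = 50            # typical floor cited for AI racks

racks = 100  # hypothetical data hall

traditional_load_kw = racks * TRADITIONAL_KW_PER_RACK
ai_load_kw = racks * AI_KW_PER_RACK

print(f"Traditional hall: {traditional_load_kw} kW")  # 1500 kW
print(f"AI hall:          {ai_load_kw} kW")           # 5000 kW
print(f"Increase:         {ai_load_kw / traditional_load_kw:.1f}x")  # 3.3x
```

That 3x-plus jump in power per rack is what drives the power distribution, voltage regulation, and cooling changes described here.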
Got it. Yeah, you know, as power densities increase, how are data centers addressing the dual
challenges of energy efficiency and thermal management to maintain reliability and sustainability?
So this is actually probably one of the biggest topics, and I really appreciate you bringing this up.
So what's interesting is that these changes aren't
just about technology. They're also about meeting new expectations for environmental
responsibility and reliability. So these data centers are under quite a lot of pressure to operate more
sustainably, you know, this meaning investing in things like advanced cooling systems and renewable
energy sources. So with these rising power densities, some of the more
traditional air cooling and power supply units are reaching their limits. So by adopting
advanced power management, liquid cooling techniques,
and also real-time thermal monitoring, operators can maintain that reliability and
sustainability within these facilities.
Got it.
You know, security is a major concern as AI servers process sensitive data.
So what advancements in hardware-based security and data integrity are being implemented to protect
these critical workloads?
Yeah, that is a very, very good question.
So actually, as AI servers process this sensitive data, robust
hardware-based security is essential. So these data centers are implementing
various security mechanisms, such as secure boot, hardware root of trust, and also
cryptographic accelerators, to ensure that only authenticated firmware and software can run. And actually,
there are a lot of compliance standards and bodies of experts put in place to
create these standards so systems can run and operate safely while keeping
your data safe. One example of this is actually OCP, the Open Compute Project. This is an
initiative to set open standards for designing data center hardware, and this includes servers,
storage, and also networking equipment. Another example is the SPDM set of protocols,
which stands for Security Protocol and Data Model. So this allows different hardware components
in a data center to communicate securely and verify each other's identities. So it's almost
like a digital handshake that ensures only trusted devices can talk to each other.
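The "digital handshake" idea can be illustrated with a toy challenge-response exchange. Note this is only a conceptual sketch: real SPDM uses certificate chains and signed measurements rather than a pre-shared secret, and the function names here are made up for illustration.

```python
import hashlib
import hmac
import os

def respond(device_secret: bytes, nonce: bytes) -> bytes:
    """Device proves identity by keying a MAC over the verifier's challenge."""
    return hmac.new(device_secret, nonce, hashlib.sha256).digest()

def verify(expected_secret: bytes, nonce: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    expected = hmac.new(expected_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = os.urandom(32)  # credential provisioned to a trusted device
nonce = os.urandom(16)   # fresh challenge issued per handshake

assert verify(secret, nonce, respond(secret, nonce))               # trusted device passes
assert not verify(secret, nonce, respond(os.urandom(32), nonce))   # impostor fails
```

The fresh nonce per handshake is what prevents a device from simply replaying an old response.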
Yeah, you know, security is a big topic now, certainly on SemiWiki and in the media.
But, you know, one of the other things is scalability and flexibility, you know, which is
essential for supporting evolving AI applications.
You know, we really don't know how big these things are going to grow, or maybe we do,
but they are going to grow, they're going to scale quite fast.
So how are modular and composable infrastructure solutions enabling data centers to adapt to
changing computational needs?
Yeah, you know, I'd have to agree with you there. We just can't see the
limit of where AI might take us, and in order to support this rapid growth, we must
adapt. So whatever is good today might not be so good tomorrow. These AI technologies are evolving
at a very rapid pace, so what meets performance and efficiency standards today may quickly become
outdated as new models and even algorithms emerge. So there needs to be constant innovation in
data centers, and we must be prepared to adapt quickly to changing
requirements and workloads. These workloads, again, are highly dynamic, and
they require data centers to be both scalable and flexible. So to
implement upgrades and mitigate any downtime while transitioning to these new
standards, modular server architectures and even composable infrastructure allow
these operators to upgrade and reconfigure hardware as needed. In turn, this
reduces costs and also enables rapid adaptation
to newer technologies and even changing workload requirements.
Right. That's great.
Hey, final question.
Looking ahead, what role do you see advanced power management technologies
and comprehensive development ecosystems playing
in shaping the future of AI-driven data centers?
Yeah, actually, I think advanced power management technologies
and like you mentioned, robust development ecosystems,
are critical to the future of AI-driven data centers.
High-efficiency power devices, digital controllers, and even integrated security features will help enable these operators to meet the demanding requirements of next-gen AI workloads.
Again, these development tools, reference designs, and technical support will help accelerate this innovation, help you stay ahead of the game, and also help reduce time to market.
And this is kind of where Microchip comes into play.
We help empower customers with a comprehensive development ecosystem, intuitive code configurators, simulation tools, and a library of proven reference designs.
So our ongoing innovation and support position us as a trusted partner for powering the AI revolution.
Great conversation. Thank you, Josue. You know, I'm a big fan of Microchip, and I hope to talk to you again for another update.
Thank you so much, Daniel. Yeah, it was great talking to you.
That concludes our podcast. Thank you all for listening and have a great day.
Thank you.