In The Arena by TechArena - From OCP 2025: Arm’s Chiplet Push and the AI Ecosystem Play
Episode Date: January 14, 2026
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm's Eddie Ramirez to unpack Arm Total Design's growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits... AI's surge.
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein.
Now, let's step into the arena.
Welcome to In the Arena. My name's Allyson Klein. We're coming to you from the OCP Summit in San Jose, and this is a Data Insights episode, which means I'm here with Jeniece Wnorowski.
Hey, Jeniece.
Hi, Allyson.
Day two of recording at OCP Summit, and wow, we've talked to some interesting people.
I might say that this is one of the highlights of today.
What are we going to be talking about?
Who did you bring with us?
Yeah.
So today we're going to be talking a lot about how to transform your infrastructure with Arm,
in particular around AI and what the heck do you do with LLMs, right?
How do you make the most of your infrastructure?
So today we have Eddie Ramirez.
And Eddie is the VP of Go-to-Market for Arm's infrastructure line of business.
So welcome, Eddie.
Glad to be here.
Great to be back here at OCP.
really excited with how big the show has grown,
and there's so much interest in AI.
And Eddie, this is our third year in a row with you on Tech Arena.
So it's becoming an OCP tradition.
Why don't you just start with reminding our audience,
who haven't listened to those previous episodes,
what it means to be the VP of Go-to-Market for the infrastructure line of business at Arm?
Yeah.
Essentially, for folks that aren't as familiar with Arm,
we play a key role in the AI infrastructure stack
because we're helping so many of our customers develop custom silicon solutions that go into AI
servers. And the AI servers then are used to run a lot of the LLM applications and generate all the
tokens that we see when we think about LLM and AI workflows. I help work with a lot of the partners
on trying to provide guidance as to how to build their custom solutions, how to integrate
Arm's IP, and also how to build a broader ecosystem that they can tap into.
Excellent. So AI is quickly transforming from a focus on training
LLMs, right, to massive adoption in the enterprise.
You know, from your perspective, where are you seeing customers navigate as they face this
transition?
Yeah, I think you bring up a good point.
There's been a lot of focus up to this point on how to scale these LLM models, how to make
them smarter, more intelligent, which means more tokens and more data to train on.
And now there's a lot more emphasis on how to leverage those models to actually do inference
workloads, right?
like how do you actually have software that incorporates an AI engine or an AI agent of some sort?
And so what a lot of folks are really looking at is I want to run AI workloads,
but I want to run them whether it's on a phone, whether it's in my IoT device,
or whether it's in my PC.
And so Arm is very uniquely placed in the ecosystem because we are the compute engine in a lot of these products.
And so we've been really hearing from our partners who want to run AI inference workloads
throughout the data center, the edge, and even in the home.
Now, when you talk to customers,
how do you see them navigating workload placement
across that vast continuum that you just discussed?
Yeah, that's super interesting
because it changes very dynamically,
meaning the frameworks that people are using
to write software are changing quite dynamically
and at the same time, so are the models, right?
So how do you deal with this kind of change in the ecosystem?
What we're trying to do is provide the right tools
to developers so that, regardless of the model that they use,
they can leverage all of Arm's key technologies.
So we've done initiatives like our Kleidi AI software libraries that we're providing
so that people can get the most out of their compute regardless of the differences in
software approach that they're taking.
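To make the "regardless of the software approach" point concrete, here is a minimal, hypothetical sketch: the application code below is plain PyTorch and does not change between architectures. Whether Arm-optimized kernels such as the Kleidi libraries get used underneath is a property of the installed framework build, which is an assumption about the environment rather than something spelled out in the conversation.

```python
# A minimal sketch, not Arm's official example: ordinary PyTorch inference
# code that runs unchanged on an x86 or an aarch64 (Neoverse) server.
# Whether Arm-optimized kernels are picked up underneath depends on how the
# installed framework build integrates them (an assumption about the
# environment, not something the application code controls).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).eval()

x = torch.randn(8, 4096)

with torch.inference_mode():
    y = model(x)

print(y.shape)  # torch.Size([8, 1024]) on any architecture
```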
Sure.
I mean, in the past, right, it was all about training and inference, and I think 2025 has
really been about inference, but now we're leaning into agentic.
So can you explain a little
bit more about how Arm is really tackling agentic workloads? Yeah, agentic AI is like super exciting for
folks who aren't familiar with it. Think about the idea of having like your own AI bot that is working
for you, right? And is interacting with other AI bots to get things done for you. So that becomes
super interesting because instead of you having to go to ChatGPT and ask a question, potentially you can
have an AI agent be able to do a lot more things for you. We're showing a demo here at OCP
that uses three different AI engines that are actually answering your emails for you,
scheduling appointments, and taking action for you. And we're showing how all of that's being done
on a Neoverse-based CPU working together with a GPU in an NVIDIA Grace Blackwell system.
So it's really showcasing that for us, we feel like as you get to this world where there are more
and more agents interacting, the processor actually plays a big role in that as well.
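The demo's own code isn't public in this conversation, but the control flow described, an agent that reads a request, decides on an action, and carries it out, can be sketched in a few lines. Everything below (the tool names, the keyword-based planner standing in for an LLM call) is hypothetical and only illustrates the pattern that ends up running on general-purpose CPU cores between model calls.

```python
# A toy, hypothetical sketch of the agent pattern described above.
from typing import Callable

def schedule_meeting(text: str) -> str:
    # Stand-in for a calendar action the agent takes on the user's behalf.
    return f"Scheduled: {text}"

def draft_reply(text: str) -> str:
    # Stand-in for answering an email for the user.
    return f"Draft reply: 'Thanks, will follow up on {text}'"

TOOLS: dict[str, Callable[[str], str]] = {
    "schedule": schedule_meeting,
    "reply": draft_reply,
}

def plan(request: str) -> str:
    # Stand-in for an LLM call that decides which tool to invoke.
    return "schedule" if "meeting" in request else "reply"

def agent(request: str) -> str:
    tool = TOOLS[plan(request)]
    return tool(request)

if __name__ == "__main__":
    for req in ["set up a meeting for Friday", "the Q3 report"]:
        print(agent(req))
```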
I love that demo. You know that I've written about that demo.
I do have a question for you when you look
at the underlying infrastructure in that space.
How do you deliver infrastructure
that keeps things like efficiency, scalability,
and reliability in mind?
Yeah, that's the biggest theme I think of the show this year,
is you're seeing so many companies that are showcasing AI racks
that are in some cases a hundred times more power hungry
than a rack was just five years ago.
And so they're basically fitting more and more hardware
into these racks, but it's causing the power and the heat
to go up significantly.
So power efficiency is now at the forefront
of what everybody talks about in these designs.
That's the DNA of Arm.
Yeah, sure.
It's been our DNA of how you make phones that run off batteries
be able to scale performance.
So we think we have a lot to offer in this space.
And that's why we're seeing a lot of the big cloud providers,
a lot of OEMs now wanting to integrate Arm
as part of their AI stack to really focus on efficiency.
Nice.
Yeah.
And is it just efficiency?
I mean, we're seeing such an uptick with Arm, right?
We're seeing more and more innovation from you guys and we're seeing more of a trend.
So can you tell us a little bit about what's driving that and what will continue to drive that?
Yeah, I think it's this idea that every piece of silicon going forward is going to end up having some sort of AI acceleration, right?
Or we're just going to have AI workloads be so prevalent that you're going to need to be able to have all kinds of silicon running them.
An example of that is what we're doing with Solidigm with their SSDs that are powering a lot of the data and the storage for these AI workloads.
How are we making these faster?
How are we able to have them do some computation as well?
Just an area where I think when we look at the AI trend,
it's no longer just an AI server doing all of this work.
So really excited by what we can do throughout an ecosystem,
from the hard drive to the server space to even the phone as well.
Yeah.
You know, it's funny that you brought up ecosystem.
Two years ago, we talked,
and you had just announced your partner ecosystem program for the data
center, and it really took on a different flavor for Arm in terms of leading with others to deliver
even those co-designed solutions, right? You made an incredible announcement this week that kind of
furthered this. And I want you to unpack that and tell me about your vision for what you're
delivering. Yeah, no, I mean, two years ago, I think we talked about the Arm Total Design program
at that point. It was our initial kickoff here at OCP two years ago. We've now increased the
program by three times, two years later, three times the participants.
And the vision is very simple.
Right now, to build an AI-based server with custom silicon could be quite expensive.
It could be almost up to a billion dollars depending on if you're doing 3D stacked packaging and so forth.
That puts that type of custom hardware development out of reach for most companies.
And so we've always thought that if we can bring the cost down,
we can have more people participate in that space.
And we're now starting to see that.
So Arm Total Design was really that ecosystem program where partners can work together.
If they want to do a chiplet-based design, they could leverage
chiplets from one partner or another so they don't have to do all of that spend on their own.
They can leverage solutions within that ecosystem.
Really excited with the 10 new partners that we announced.
Many of them are now actually showing real products on the OCP marketplace or in our booth as well.
You took that further though at OCP with a pretty major announcement around chiplets.
Tell us what you did with the foundation.
We did. So we announced two things at OCP.
One is OCP invited Arm to be on the board of directors.
So we're now playing a much bigger role.
And the second part is we did a contribution for our FCSA,
our Foundation Chiplet System Architecture spec.
And the problem it's trying to solve is when people want to use chiplets from other vendors,
they don't interact very well if they were not designed to interoperate.
And so how do we solve that?
And it's more than just like a physical UCIe-type connection.
You want to decide how do these things boot,
how do you secure these systems, and how is the information going back and forth between these systems managed?
So our intent is to be able to help the industry move further in that space.
And hopefully, we get closer to that chiplet marketplace vision.
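As an illustration only, not the FCSA spec itself, here is a small sketch of the kind of system-level agreement being described: two chiplets that share a physical link still have to line up on boot roles, security, and management before they interoperate. All field names and example values below are hypothetical.

```python
# A hypothetical sketch, not the FCSA spec: the system-level properties two
# chiplet vendors would need to agree on beyond the physical UCIe link.
from dataclasses import dataclass

@dataclass
class ChipletProfile:
    vendor: str
    link: str        # physical/link layer, e.g. "UCIe"
    boot_role: str   # who brings the system up: "primary" or "secondary"
    security: str    # attestation/measurement scheme both sides support
    management: str  # protocol carrying telemetry and control traffic

def compatible(a: ChipletProfile, b: ChipletProfile) -> bool:
    # Interoperability requires agreement on every layer, not just the wire.
    return (
        a.link == b.link
        and {a.boot_role, b.boot_role} == {"primary", "secondary"}
        and a.security == b.security
        and a.management == b.management
    )

cpu = ChipletProfile("VendorA", "UCIe", "primary", "measured-boot", "MCTP")
accel = ChipletProfile("VendorB", "UCIe", "secondary", "measured-boot", "MCTP")
print(compatible(cpu, accel))  # True only when all the layers line up
```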
Amazing.
I love it.
So taking all what you just said, but looking ahead, what are you most excited about?
And how are you going to take that and invigorate even more customers in the future?
Yeah, I think we're just excited now that I feel like we've almost, a bit,
democratized AI computing, right?
So that it's not just the biggest companies with the biggest bankrolls that are going to be able to have these high-powered AI processing systems.
In fact, we're showing that some of the startups and some of the Arm Total Design partners are smaller companies that each want to play a unique role in the ecosystem.
And several of the companies that we've announced in Arm Total Design are from places like Korea, Taiwan, and other countries that want to really start building their own semiconductor ecosystem
as well, and it's nice that Arm Total Design has helped them get to that point. Eddie, it's always a
pleasure. I know it's a really busy week for you and I'm so glad that you spent some time with
Tech Arena. One final question for you. For those who want to learn more about all of the stuff that
you talked about today, where do you send them for more information and to engage your team?
Yeah, easy: arm.com is a great resource. You can find information on Arm Total Design. You can see
the 10 new partners that we announced as well, and we have lots of sessions here
at OCP, so I hope people get a chance to catch some of the talks as well.
And Jeniece, that wraps another episode of Data Insights. Thanks so much for being here and for your
partnership. Thank you very much, Allyson. I appreciate it. And thank you, Eddie. Thank you.
Thanks for joining Tech Arena. Subscribe and engage at our website, techarena.ai. All content is
copyright by Tech Arena.
