No Priors: Artificial Intelligence | Technology | Startups - Competition makes for better chip design with AMD CTO Mark Papermaster
Episode Date: February 29, 2024. Compute is the fuel for the AI revolution, and customers want more chip vendors. AMD CTO Mark Papermaster joins Sarah and Elad on No Priors to discuss AMD’s strategy, their newest GPUs, where inference workloads will live, the chip software stack, how they are thinking about supply chain issues, and what we can expect from AMD in 2024. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Show Notes: (0:00) Introduction and Mark’s background (2:35) AMD background and current markets (4:40) AMD shifting to AI space (8:54) AI applications coming out of AMD (10:57) Software investment (15:15) The benefits of open-source stacks (16:58) Evolving GPU market (20:21) Constraints on GPU production (24:11) Innovations in chip technology (27:57) Chip supply chain (30:18) Future of innovative hardware products (35:42) What’s next for AMD
Transcript
Hi listeners. For potential AI founders, my early-stage AI fund, Conviction, is accepting applications for its Embed accelerator for two more days.
Embed offers $150,000 on an uncapped SAFE, more than half a million dollars of free compute and API credits, a hand-selected set of peers, and access to leading founder and research mentors.
Apply at Embed.conviction.com by March 1st.
Hi, listeners, and welcome to another episode of No Priors.
Today, we're excited to be talking to the CTO of AMD, Mark Papermaster.
Mark has had a storied career in chips and hardware with previous leadership positions at IBM, Apple, and Cisco.
We're excited to have Mark on to get into GPUs and the competition that's been driving this industry.
Welcome, Mark.
Thanks, Sarah. Glad to be here with you and Elad.
Can you start by telling us a bit about your background?
You've worked on all sorts of interesting things from the iPhone and the iPad to like the latest generation of AMD supercomputing chips.
Oh, sure. I've been around a while.
So what's really fun is my timing was pretty good getting into the industry.
I was an electrical and computer engineering grad, University of Texas, and got really interested in chip design.
And so it was back at a time when chip design was radically changing.
The kind of technology everyone uses today, CMOS, was just coming into, you know, production usage. And so I got on IBM's very first CMOS projects and created some of the first designs. So I got to get my hands dirty and do just about every facet of a chip design, and I had a number of years at IBM and took on different roles, took on driving the microprocessor development at IBM, first across their PowerPC line, and that, you know, meant working with Apple and Motorola, as well as the big iron, the big computing chips that we had in the mainframe and the big RISC servers. So I really got all facets of technology there, and that included working on
some of their server development, but then shifted over to Apple. Steve Jobs hired me to run
the iPhone and iPod, and so I was there for a couple of years. But it was a time of a great
transition in the industry. And for me, it was a great opportunity because I ended up in
2011, fall of 2011, taking the role here at AMD of being both CTO and really running the technology and engineering, right at a point where Moore's Law was starting to slow down.
And so, you know, tremendous innovation was needed. Yeah, I want to get into that and sort of what
we can expect in terms of computing innovation if we're not just jamming more transistors on chips
or we're unable to do that. Every one of our listeners, I think, has heard of AMD, but can you
give like a very brief overview of the major markets you serve there? Sure. So AMD is a storied company. It's been around well over 50 years. And it started out really being, you know, a second-source company, really bringing, you know, second sources on key components and x86 microprocessors. But you fast forward to where we are today, and it's a very, very broad portfolio. When Lisa Su, our CEO, and I were brought into the company just
over 10 years ago, it was with a mandate to get AMD back into very, very strong competitiveness.
And so we started with the CPU line, brought the CPU very, very competitive, and then really across the portfolio.
And just in February of 2022, we acquired Xilinx.
So that expanded the portfolio further.
So AMD creates the world's largest supercomputers.
It's got a massive installed base now in the clouds.
So many of the cloud operations that you're running are running on AMD EPYC x86 CPUs.
Gaming, we're huge. We're underneath all the Xboxes, all the PlayStations, as well as many gaming devices that you buy when you buy your add-in boards. And then across embedded devices with all of that rich Xilinx portfolio as well as embedded x86. And we acquired Pensando, so that extends the portfolio right into the networking interconnect that we need as we scale out these workloads.
So very, very broad portfolio.
Yeah, AMD's had a pretty amazing run over the last decade-plus since you joined.
One of the things that you folks have really emphasized over the last couple of years is AI.
And there's been a big shift, both in terms of the adoption of AI over the last decade or so,
in terms of the traditional CNN, RNN, and other types of neural network architectures,
but also in terms of this shift to transformers and diffusion models and everything else.
Can you tell us a little bit more about what initially caught your attention
in the AI landscape, and then how AMD started to focus more and more on that over time,
and what sort of solutions you've come up with?
You bet.
Well, we all know the AI journey, you know, has been going since really the race began
when the application space for AI opened up, and GPUs were obviously pivotal there.
When you look at the key work that, you know, Hinton had done in terms of showing how GPUs could drastically improve the accuracy of image recognition and natural language processing, that's been known for some time. And so what we did at AMD is we right away saw the opportunity. The question was plotting our course to be that strong player in AI. So it was a very
thoughtful and deliberate strategy because AMD, we had to turn around the company.
So if you look at where AMD was in 2012 through, you know, really 2017, largely all of the revenue was based on PCs and then gaming.
And so it was about making sure that the portfolio, the building blocks were competitive.
Those building blocks had to be leadership.
They had to attract people to get on that AMD platform for high performance applications.
And so first, we actually had to rebuild the CPU roadmap.
And that was the Zen microprocessors that we released in 2017, in both PCs with our Ryzen line, as well as EPYC, our x86 server line.
So that started the revenue ramp for the company and started extending our portfolio.
And so right about that time, in parallel, we saw where heterogeneous computing was going.
We had called the ball on heterogeneous computing.
Before myself, before Lisa ever joined the company, AMD had made a great acquisition of ATI that brought GPU into the portfolio.
It's one of the big reasons I was attracted to AMD in the role is that, wow, it was really the only company that had a very strong CPU portfolio and a very strong GPU portfolio.
And to me it was clear that the industry needed that powerful combination of the serial, scalar computing of these traditional CPU workloads and the massive parallelization
that you get from a GPU.
And so we started with that heterogeneous compute and created an architecture around that.
So we've been shipping CPUs and GPUs combined for PC applications longer than anyone. We started shipping those in 2011, with what we call APUs, accelerated processing units.
And then for big data applications, we started with HPC,
the kind of high-performance compute technology that's in national labs.
It's in oil exploration companies.
And so we focused first on big government bids that ended up leading to supercomputer wins.
We now have AMD CPUs and AMD GPUs underneath the world's largest supercomputers.
But that work started years ago, and it was equally a hardware and a software effort.
And so we've been building that hardware and software capability, and it really culminated on December 6th of last year, when we announced our flagship, the MI300, which is just a beast: it handles high-performance compute with one variant we have, and it takes high-performance AI, for both training and inference, head-on with a variant
which is optimized for those AI applications.
So it's been a long journey, and we're really pleased to be where we are,
where our sales are taking off.
That's fantastic.
I mean, I guess when you launched the MI300, you had public commitments from Meta and Microsoft,
for example, to purchase that.
And you just mentioned that there's a series of applications that you're pretty excited
about there.
Can you tell us more about which AI applications and workloads you're most excited about
or most bullish on today?
Sure.
So if you think about where the bulk of AI is today, you're still seeing just tremendous capital expenditures in building up the accuracy and capabilities of large language model training and inference.
So it is the likes of ChatGPT, of Bard, and, you know, the other LLMs where you can ask it anything, because it's trying to ingest the vast amount of data that is out there that it can be trained upon.
And it's really with an ultimate goal of artificial general intelligence, an AGI type of capability.
And so that is where we focused the MI300: to start with that halo product that could take on the industry leader.
And in fact, the MI300 has done that.
It's competitive on training, and it leads in inferencing.
It has over a 2X advantage if you look at, you know, FP16 LLMs, which is a metric that generally everyone can run.
It's got a tremendous performance advantage.
And we did that very purposely.
We created very efficient engines for the math processing that you need for that training or inference processing.
But we also brought the memory that you need to have more efficient computing.
So that's more computing at less power and less rack space than you need with the competition.
A big front of competition, as you just pointed out, is: there's performance, like overall performance, there's efficiency, and then there's the software platform, like CUDA, ROCm, et cetera.
How do you think about the investment in the optimized math libraries and like how you want
developers to understand your approach versus competitors?
Yeah, you're so right, Sarah.
It's multifaceted to be able to compete in this arena. You see many startups going after the space, but the fact is the bulk of inferencing done today is done on general-purpose CPUs, not the huge LLM inferencing, but, you know, just general inferencing for AI applications. And then for large language model applications, it's almost all on GPUs, because that is where the software and developer ecosystems are. And so we've been competitive on CPUs. We've been gaining share at a rapid clip, because we've got a very strong CPU generation after generation that we've been releasing on the schedules we've laid out for the industry. But for GPU, it did
take us until now to develop really world-class hardware and world-class software. And what we've
done is ensured that because it's a GPU, it should be easy to deploy. And so we're really making sure that we leverage the fact that we have all the GPU semantics. So if you're a coder, it's just easy to code if you're using the lower-level semantics. But also, we support all of the key software libraries that are out there. When you think about the kind of frameworks, whether it be PyTorch, where we're a founding member of the PyTorch Foundation, whether it be ONNX, whether it be TensorFlow, we are out there very closely working with developers.
And so now that we have, you know, a competitive and leadership offering, what you'll see is that deploying with AMD is very facile.
Let's say you're using Hugging Face, any of the, you know, thousands and thousands of open-source LLMs out there on Hugging Face. Well, we partnered with Clem and his team, and as they release any of those language models, they're testing on AMD with our Instinct GPUs, equally as they're testing on NVIDIA. We've really done the same thing as well with PyTorch, where we're one of two
qualified offerings on PyTorch. And so all of that testing has been done, you know, routinely
with the regression testing that's run literally every night on any software release.
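As a minimal sketch of what that ease of deployment can look like in practice, assuming a ROCm build of PyTorch and the Hugging Face transformers library are installed (the model name and prompt below are just placeholders), the same few lines of PyTorch run unchanged on an Instinct GPU, because ROCm exposes AMD GPUs through PyTorch's existing "cuda" device interface:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Under a ROCm build of PyTorch, AMD Instinct GPUs appear through the standard
# "cuda" device interface, so no AMD-specific code is needed here.
device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "gpt2"  # placeholder: any open causal LM from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,  # FP16 on the GPU
).to(device)

inputs = tokenizer("Competition makes for better chip design because", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```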
The other thing that's key is to learn from deployments. And so we've had early engagements like Lamini, who's running on AMD, and they've been offering, you know, services for getting on AMD and running your LLMs on their cloud, on the rack configurations they have. And so they've already been working with customers. And now, as you saw with the other people on stage with us at our December event, you can see that we're in there with a key hyperscaler. And we're also being sold through many OEMs, and we're directly working with customers. So there's nothing like that feedback from key customers that are running on your platform to speed us up, you know, ensuring that we can just be easily deployed and making sure that it's a seamless
process.
Yeah, yeah.
Lamini is a portfolio company for me, and Sharon and Greg are great.
I think it's an indication of you guys having a big ecosystem of software developers and machine learning people that want to see competition and more heterogeneous
compute out there for these AI applications.
So you cannot underestimate that.
It tells you that it was a very constrained environment.
There was a lack of competition.
It's bad for everybody, by the way, if there's not competition, because you really end up
with a stagnant industry.
You can look at the CPU industry before we brought competitive, leadership products.
It was really getting stagnant.
You're just getting incremental improvements.
And so the industry knows that, and we've had tremendous pull and partnership, and we're
very appreciative of that.
And in return, we're going to keep providing generation after generation of competitive products.
For such a huge, like, software stack like ROCm to be open source, talk about that philosophy.
Oh, it's a great question.
It's very near and dear to us because we are, as I mentioned, all about collaboration.
That's just such a strong part of our culture.
And what open source does is it opens up technology to the community.
And so if you look at the history of AMD, it's been very focused on open source.
Our compiler for our CPUs is LLVM.
It's open source.
LLVM is underneath our compilers on our GPU as well.
But more than just the compiler on the GPU, we've opened up the ROCm stack. It is our enabling stack. It was a huge piece in our winning supercomputing with such large installations as we have. Why is it our philosophy? And by the way,
Xilinx had exactly the same philosophy. And so bringing Xilinx and AMD together in 2022 did nothing but deepen that commitment to open source. But Sarah, the point is we're not about locking someone in with a proprietary walled-garden software stack.
What we want is we want to win with the best solution.
And we're committed to open source, and we're committed to giving our customers
choice.
We expect to win having the best solution, but we're not going to lock our customers in.
We're going to win on merit, generation in and generation out.
I guess one of the areas that I think is evolving very rapidly right now is sort of the
clouds for AI compute.
And so there's obviously the hyperscalers: Azure from Microsoft, AWS from Amazon, and GCP from Google.
But there's also other players that have been emerging, you know, Baseten, Together, Modal, Replicate, et cetera, et cetera.
And one could argue both that they are providing differentiated services in terms of different tooling, API endpoints, et cetera, that the hyperscalers don't currently have, and also that, in part, they have access to GPUs and there's a GPU shortage, and so that's also driving part of their utilization.
How do you think about that market as it evolves over the next three, four years,
and perhaps, you know, GPU becomes a bit more accessible
and maybe shortages or constraints fall away?
Well, that's definitely happening.
I mean, the supply constraint will go away.
We'll be a part of that.
We're ramping up and shipping as we speak on our Instinct line,
and it's going quite well.
It's going according to plan.
But moreover, to answer your question, I think the way to think about it is that it's just breathtaking how the market is expanding so rapidly.
I said earlier that most of the applications today started with, you know, generative AI with these LLMs, and that's been largely cloud-based, and not just cloud-based, but hyperscaler-based,
because it's such a massive cluster that's required, not just for the training,
but frankly, quite a bit of that type of generative AI LLM inferencing also runs on these massive clusters.
But what's happening now is we're getting application after application that is just taking off non-linearly.
And what we're seeing is a proliferation as people are understanding how they can tailor their models, how they can fine-tune them, how they can have smaller models that don't have to answer any question you have or support any application you need, but might be just for your business and your area of exploration.
And so that allows a tremendous variety of the size of compute and how you need to configure that
cluster. So it's a rapidly expanding market, with application-specific configurations you need for your compute cluster, and it's moving even further, not just from these massive hyperscalers to, you know, what I'll call kind of tier-two data centers; it just keeps on going, because when you think about applications which are really bespoke, they can be run on the edge, right on your factory floor where you need very low latency, putting the inferencing right at the source of data creation, right down to end-user devices. So we've added our AI
inference accelerators right on to our PCs. We have been shipping it throughout all of
2023 and actually at CES this year announced already our next generation of AI accelerated PCs.
And then, of course, with our Xilinx portfolio across embedded devices, we're getting a lot
of pull from industries that have bespoke inference applications across a plethora of embedded applications. So with that trend, we're going to see more of that, more tailored compute installations
with, you know, an attempt to service this ballooning demand. Yeah, that makes a lot of sense.
I mean, a lot, or at least a subset, of inference is going to push to the edge, and obviously
we'll have things on device, both on laptops as well as phones in terms of, you know, where certain
small models will be running. And then it seems like there may be some ongoing potential set of
constraints for larger models or larger data centers, at least in the short run.
What are the main drivers of the constraints on the GPU supply side?
I've heard things around packaging.
I've heard things around TSMC capacity.
I've heard sort of a mix of potential drivers of constraints.
Some people say the next constraint after that is, do you have enough power into data
centers to actually run these?
I just don't know what's real in terms of all this stuff.
And so I'm a little bit curious how to think about, you know, what the constraints are and when those supply and demand dynamics come a bit more into balance.
Yeah, supply and demand is frankly something that any chip manufacturer, you know,
has to manage. You have to secure your supply. If you look at the pandemic, we actually had a tremendous run on our devices that stretched our supply chain, because the demand for PCs went way up, people were working from home, and the demand for our x86 servers went way up.
And so we were in scramble mode during the pandemic, and we did very well.
We had shortages of substrates, and we secured more substrate manufacturing capability.
We worked closely with our primary wafer foundry supplier, TSMC.
We have such a deep partnership with them.
We've had it for decades, so if we get out ahead of it and we understand the signals, we are generally able to meet the supply, or if there's a shortage, it's generally well contained.
And so what's happening with AI is, yes, it is clear that we're seeing this, you know, this massive increase in the demand.
And the fabs are responding.
And you have to not think of it just as the wafer fab.
But you're absolutely right, it is the packaging.
Ourselves and our GPU competitor both use advanced packaging.
I mean, I'll show you, I don't know if it'll come across here.
But that is our MI300.
And what you see is a whole set of chiplets: smaller chips with either a GPU function, an I/O and memory controller, or, for the version we have that focuses on high-performance compute, the CPU.
We literally drop our CPU chiplets right into that same integration, with all the high-bandwidth memory that you have around it to be able to feed those engines.
And those are connected laterally, and on the MI300 we connect those devices vertically as well.
So it's a complex supply chain, but it's one that we are very, very good at.
We're a fabless company.
We've been fabless for, you know, coming on 18 years now.
And so we've got it down.
Hats off to the AMD supply chain team.
And I think overall, as an industry, you'll hear that generally we're going to move beyond those types of supply constraints.
Now, you mentioned power.
This is, I think, ultimately going to be certainly a key constraint, and you see all the major operators looking for sources of power.
And for us, as a developer of the engines which are consuming that power, it brings tremendous focus to energy efficiency that we can drive into each generation of our design.
And we are committed to that, certainly, as a very top priority.
One thing you said before, Mark, is that you were actually excited about innovating at the end of Moore's Law, and that being a reason that you actually wanted to go to AMD.
Like, what directions of innovation should we expect investment in?
I don't know if it's, like, too deep to ask you to give us a layman's understanding of, like, 3D stacking.
But I think it is really interesting to think about it at a time when it's not obvious where to go. Well, no, Sarah, it's a great question. And the reason that I was so attracted to
AMD is, one, it had a storied history of being a disruptor in the industry. And I certainly felt very strongly that AMD could disrupt with a very strong CPU and GPU, but more importantly, by putting the pieces together. The idea of chiplets was just coming together. There was early exploration of that around that time.
And the engineering team here at AMD, we were able to really get the team rallied
and the key leadership rallied around it and drove that innovation.
So the reason it's so important is that when Moore's Law slows down, the easy way to think about it is that it used to be that the chip technology itself, the foundry node, going from one generation to the next, did most of the heavy lifting. So you could just bank on that new semiconductor technology node shrinking your devices, giving you more performance at less power, and it'd be at the same cost. So that was what
Moore's Law was about. And with Moore's Law slowing, it means you still get those device improvements,
but it costs more. Your power's not coming down as much as it used to. And you are still
getting that integration. You're still certainly being able to pack more devices.
But it demands more innovation.
It demands what I call holistic design.
So you're going to rely on those new transistor devices, new foundry nodes, but also on how you use heterogeneous computing, meaning bringing the right compute engine to the right application: a CPU, a GPU, a dedicated engine.
Like the super-low-power AI acceleration that we have in our PC devices and our embedded devices.
So it's about getting, you know, tailored engines for the right application, leveraging chiplets, where you combine them, putting each of those chiplets, each of those functions, on whatever is the best technology node for it.
And then, frankly, holistic design means you've got to keep going right up through the
packaging, how you package it together, how you interconnect it, and how you think about
the software stack.
And so the optimization literally has to be a full circle, from transistor design all the way up through the integration of your computing devices
and equally with the view of the software stack and applications. And what I'm thrilled about
along with all the engineers that I work with at AMD is that we have that opportunity. We have
the building blocks and we are built on collaboration. It's just such a part of our culture
that we don't need to develop the entire system.
We don't need to be the ones developing the application stack and the end applications.
What we do is partner incredibly deeply and ensure that the solution is optimized end to end.
I think everybody is very suddenly interested in the chip industry from a strategic perspective as well.
I think everybody's thinking more about the supply chain, from the, you know, TSMC near-monopoly to the idea of fab security in an increasingly complex geopolitical environment. How does AMD prep for this or think about these issues?
You know, you have to think about these things. We are very supportive of working with,
certainly the U.S. government and other governments across the world, which have exactly that question. You know, our countries are running now on chip designs that power such essential systems that it becomes a matter of national security to make sure that there will be
continuity of supply. And so we build that into our strategy. We build it in with our partners. And so
we've been supportive of fab expansion. So you see TSMC building fabs in Arizona. We're partnering with them. You see Samsung building fabs in Texas. But it's not just in the U.S. They're actually expanding globally as well, with facilities in Europe and other parts of Asia. And so it goes
beyond the foundry. It's the same thing with the packaging. As you put those chips onto carriers and you need to interconnect them, you need that ecosystem to have geographic diversity as well. So the way we think about it is, it is a matter of importance for everybody to
know that there will be geographic diversity, and we are heavily engaged. And actually, I'm quite
pleased with the progress that we're making. It doesn't happen overnight. That's the difference
between chip design versus software. With software, you can come up with a new idea and get that product out very, very quickly, get that, you know, MVP design, get it out there, and it can go viral. But it does take years of prep to expand the supply chain.
The whole semiconductor industry was built up historically as a global industry, and that created geographic pockets of expertise.
So that's how we got to where we are today.
But when you have the more volatile, you know, macro environment that we're facing today, with political tensions, with, you know, economic tensions, it's just imperative that we spread out that manufacturing capability, and that's well underway.
I guess one of the other things has been happening a lot recently is, and you know,
you've been involved with, I think, some of the most interesting and exciting new consumer
hardware platforms like iPhone and iPad and other things.
And obviously, AMD now is powering many interesting types of devices and applications.
What's your point of view on the new hardware things that people are building today?
There's the Vision Pro, there's Rabbit, which is sort of an AI-first device, there's Humane, there are devices focused on the health side, there's Figure.
it seems like there's suddenly an explosion of new sort of hardware devices.
And I was just curious to get your perspective on what do you think tends to predict success
for those types of products, what tends to predict failure, like how to think about this
whole sort of suite of new things and devices that are coming our way.
Well, that's a great question.
I'll give you, you know, one point, I'll start just with sort of a technological point of
view.
I mean, I'm proud of the fact that chip design is part of the reason you're seeing all these different types of applications, because you're getting more
and more compute capability that is shrunk down and draws such a low power that you can see
more and more of these devices that have simply incredible computing and audiovisual
capabilities that they can bring to you. I mean, you look at Meta Quest and Vision Pro and things like that. This didn't happen overnight. You look at the earlier versions. They were simply too heavy, too big, not enough computing oomph, because if the lag between, you know, seeing a photon on that screen in your head-mounted device and actually having it processed is too high, you actually get physically ill wearing that and trying to watch a movie or play a game.
So one, I'm very proud of the technology advances that we've been able to make as an industry, and we're certainly very proud of the aspects that we drive from AMD.
But the broader question that you've asked is, well, how do you know what's going to be
successful?
The technology is the enabler, but if there's one thing I learned at Apple, it's that the devices that are successful really serve a need.
I mean, they really give you a capability that you love.
It's not just that, oh, it's incremental, I can do this a little better than something I did before. It's got to be something that you love, and that creates a new category. So it's
enabled by technology, but it is the product itself that has to really excite you and give you
new capabilities. I will mention one thing. I mentioned the AI enablement in PCs. I think it's
almost going to make PCs a new category, because think of the kind of applications that you're going to be able to run with super-high-performance but low-power inference.
Imagine right now if I don't speak English at all and I'm watching this podcast.
Let's say it's, you know, broadcast live and I click my live translation button.
I could just have it translated to my spoken language with no perceptible delay.
And that's just one of a myriad of new applications that will be enabled.
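As a rough illustration of that kind of local workload, and only a sketch rather than anything AMD ships, a live-translation flow can be approximated today with small open models via the Hugging Face transformers pipelines; a real AI PC feature would run quantized models on the NPU through the vendor's runtime, and the model names and audio file below are just example placeholders.

```python
from transformers import pipeline

# Illustrative sketch only: local speech recognition plus translation using small
# open models. A production AI PC feature would target the NPU with quantized
# models; this just shows the shape of the workload with no cloud round trip.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")  # English to Spanish, as one example

# "podcast_chunk.wav" is a placeholder for a few seconds of captured audio.
text = asr("podcast_chunk.wav")["text"]
print(translate(text)[0]["translation_text"])
```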
Yeah, I think it's a really interesting time, because for many years, and AMD benefited from some of this, right, you're also in the data center, there was so much compute load moving to servers, right? The era of cloud, the era of all these complex consumer social applications.
I think in the new era of trying to create experiences, all these application companies are fighting latency as a primary consideration, because you have the network, the models are slow, you're trying to chain models, and you have, you know, things you want to do on device once again.
And I just think that hasn't been a real design consideration for a while.
Sarah, I agree with you.
And I think it's one of the next set of challenges.
And that is really tackling the idea of not just enabling high-performance AI applications in the cloud, on the edge, and on these end-user devices, but thinking about how they work together synergistically: writing applications such that where you don't have that latency dependency, that dependency on a lag in computing, you run it in the cloud. It's going to be the most efficient, because you're optimizing this massive data center with the most efficient computing. But write the algorithms such that where you do have that need for super low latency, where you just need that instant response, those aspects of the algorithm run at the edge or, in fact, on your end-user device. And often, when you need to react quickly, it just has to be that way.
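To make that split concrete, here is a tiny hypothetical routing sketch; the helper functions and the 50 ms budget are invented for illustration and are not an AMD or cloud API. The idea is simply that latency-critical requests stay on a local model while latency-tolerant, heavyweight requests go to a large cloud model.

```python
# Hypothetical sketch of latency-based routing between an on-device model and a
# cloud model. The helpers and the 50 ms budget are placeholders, not a real API.
LOCAL_LATENCY_BUDGET_MS = 50  # assumed threshold below which a cloud round trip is too slow

def run_on_device(prompt: str) -> str:
    # Stand-in for inference on a small local NPU/GPU model: instant response.
    return f"[edge] {prompt}"

def run_in_cloud(prompt: str) -> str:
    # Stand-in for inference on a large hosted model: higher quality, higher latency.
    return f"[cloud] {prompt}"

def route(prompt: str, latency_budget_ms: int) -> str:
    # Latency-critical work (driver assistance, live UI) stays on-device;
    # heavyweight, latency-tolerant work goes to the big cluster.
    if latency_budget_ms <= LOCAL_LATENCY_BUDGET_MS:
        return run_on_device(prompt)
    return run_in_cloud(prompt)

print(route("obstacle ahead, brake?", latency_budget_ms=10))
print(route("summarize this quarterly report", latency_budget_ms=2000))
```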
I mean, do you want to be in your vehicle that is being driven with a high degree of autonomy, and suddenly you get a loss of signal back to the cloud, and you just stop because it says,
I don't have a signal. You wouldn't stand for that. So our audience is lots of engineers,
founders, tech executives, consumers, too. What do you want people to know about what AMD's focused on in 2024? This for us is a huge year, because we have spent so many years developing
our hardware and software capabilities for AI. We've just completed AI enabling our entire
portfolio. So cloud, edge, you know, our PCs, our embedded devices, our gaming devices,
we're enabling our gaming devices to upscale using AI. And 2024 is really a huge deployment
year for us. So now the bedrock's there, the capability's there. I talked to you about all the partners that we're working with. So 2024 is for us a huge deployment year. I
think we're often unknown in the AI space. Everyone knows our competitor, but we not only want to
be known in the AI space, but, based on the results, based on the capabilities and the value we provide, we want to be known over the course of 2024 as the company that really enabled and brought AI across that breadth of applications. Yes, in the cloud, in those massive LLM training and inference workloads for generative AI, but equally across the entire compute space. And I think this is also
the year that that expanded portfolio of applications comes to life. I look at what Microsoft is talking
about in terms of the enablement that they're doing of capabilities cloud to client. And it's
incredibly exciting. And many, many ISVs that I've talked to are doing the same thing.
And frankly, Sarah, they're addressing the very question you asked, how do I write my application such that I give you the best experience, tapping both the cloud and the device that's in your hand or in, you know, in your laptop, you know, as you're running the application.
So it will be a transformational year, and we're so excited at AMD to be right in the middle of it.
Awesome.
Looking forward to the year ahead and seeing great things.
Thank you so much for joining us.
Yeah, thanks for joining us.
Well, thank you both.
Like I said, you guys have just done a wonderful job here with No Priors.
And very happy and appreciative that you invited us on and loved the time with you.
It's a real pleasure.
Find us on Twitter at NoPriorsPod.
Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.