Big Technology Podcast - Qualcomm CEO Cristiano Amon: Future Of AI Devices, AI Fashion, Blending Reality and Computing
Episode Date: January 20, 2026
Cristiano Amon is the CEO of Qualcomm. Amon joins Big Technology to discuss what the AI device of the future looks like, and why he thinks the next wave of personal computing will move beyond the smartphone. Tune in to hear his vision for AI-powered glasses and wearables, what a truly useful agent experience requires, and why he believes the "winner at the edge" will shape the AI race. We also cover AI PCs and what will actually drive adoption, Qualcomm's push into AI inference in the data center, the state of robotics and industrial AI, and where China may be pulling ahead. Hit play for a Davos-front-row conversation on where AI is heading next, and the chips and constraints that will determine how fast it gets there. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b --- Take back your personal data with Incogni! Go to incogni.com slash bigtechpod and use code bigtechpod at checkout; our code will get you 60% off on annual plans. Go check it out! Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
What does the AI device of the future look like?
Let's ask the CEO building the chips that will power it.
That's coming up with Cristiano Amon right after this.
This episode is brought to you by Qualcomm.
Qualcomm is bringing intelligent computing everywhere.
At every technological inflection point, Qualcomm has been a trusted partner
helping the world tackle its most important challenges.
Qualcomm's leading edge AI, high performance, low power computing,
and unrivaled connectivity solutions have the power to build
new ecosystems, transform industries, and improve the way we all experience the world.
Can AI's most valuable use be in the industrial setting? I've been thinking about this question
more and more after visiting IFS's Industrial X Unleashed event in New York City and getting a chance to speak
with IFS CEO Mark Moffat. To give a clear example, Moffat told me that IFS is sending Boston Dynamics
Spot robots out for inspection, bringing that data back to the IFS Nerve Center,
which, with the assistance of large language models, can then assign the right technician to examine areas that need attending.
It's a fascinating frontier of the technology, and I'm thankful to my partners at IFS for opening my eyes to it.
To learn more, go to IFS.com. That's IFS.com.
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond.
We are here at Davos at the Qualcomm space, and we have a great show for you today.
We're going to talk about the future of the AI device.
We're going to talk about what an AI PC is and whether anybody's going to want it.
We're going to talk about the data center buildout, robotics, and industrial AI.
And here to do it with us is the perfect guest.
Qualcomm CEO Cristiano Amon.
Cristiano, great to see you.
Great to see you too.
Very happy to be having this conversation with you.
Definitely.
It is a perfect time for us to have this conversation because talk of an AI device is going from theoretical to concrete.
And Qualcomm might be
at the center of it. So let me give our audience, if you're new to Qualcomm, a little bit of
an introduction to the company: a $170 billion company, so it's very big. It's the designer of the
Snapdragon chip, which is in mobile phones, notably high-end Android, also PCs, autos, and increasingly
wearables. There's also the Dragonwing chip, which we're going to talk about, which is in
industrial use cases like robotics, and you just got into the AI data center, building servers for
AI inference. So a chip designer really at the center of the AI story, whether it comes to
wearables or in the data center. I like that. Okay. Very good. I think that's a great
introduction of Qualcomm. Maybe I just add one thing to it. I think, you know, Qualcomm is a very
unique semiconductor company, especially in today's environment when connectivity is important,
computing is important, AI processing is important, and we're one of the few companies that has all of it
under a single roof.
And we're probably one of the few semiconductor companies
that go from 5 watts in your earbud
to 500 watts when you think about a data center.
And it's an exciting time for the company.
Also, exciting time for technology,
since AI is going into everything.
And designing the chip for the smartphone
has put you in a very interesting position
because as we all start to imagine
what an AI device is going to look like,
obviously when it comes to AI, the compute underneath is really important, and you're in position
to do it. And recently, you've talked about how your belief is that the market opportunity
for an AI device, and we're going to get into what the form factor is going to look like,
but the market opportunity is 10 billion devices, which would make it bigger than the smartphone
market. How do you get to that number?
So it's interesting. And I think to get to that number,
it's important to see how the smartphone has evolved over different generations.
And I think you have a couple of things. You have the evolution of phones. You have the evolution
of compute. And then how AI changes that going forward. And maybe I will take us a little bit
into that journey just to talk about it. One of the biggest changes, I won't go all the way back
to 2G, but one of the biggest changes that happened in the phone
industry: when we developed broadband in cellular and we said we can have broadband speeds,
we realized that on the other side of the broadband, you need a computer. So your phone needed to
become a computer. And you needed to develop a computer that would fit in the palm of your hand.
And that's the smartphone. That's the smartphone that changed computing forever, because it's our
inseparable device. We carry it with us all the time. And it has become the
center of our digital life.
Now, as you keep advancing, I think, in, you know, in smartphones right now, we are in the
billions.
Every single year, 1.2 billion phones are purchased.
It's the number one consumer electronics product, and everybody has one.
But when you start thinking about what's happening with AI, and especially as computers
using AI now understand us, then you're starting to go into not
only the computer that you carry, but also the computer that you wear, especially because
if agents are going to be useful for you, they're going to be with you all the time. And then
you start to go from carrying a phone to also having glasses or a ring or a bracelet or a watch
and all those things. But that changes the nature of what wearables used to be.
When you talk about wearables in technology, they were designed to just extend
your phone's functionality. Like, for example, yes,
you have a smartwatch, which will tell you the time,
but also sends your sensor data back to the phone
and gives you notifications from the phone.
But that's all going to change.
It's all about connecting to a model, connecting to an agent.
As those things change, and we're all going to start wearing those things,
then you start to think about big numbers.
You know, if everybody ends up getting a watch, a ring, or glasses
that are connected to an agent,
then you're talking about an order of magnitude as big as the phone.
And I think that's exciting.
That's how we think about the future of the mobile industry.
But here's the question.
The question is, why does it need to be wearable?
I was speaking with Sam Altman right before the end of the year.
And now OpenAI is going to build a family of devices.
But the rumor had been that it's going to be a smartphone-sized device, no screen.
And it just listens to you and then it will push you notifications about your life.
And I was like, well, why can't it just be an app on the phone?
Why does it have to be a wearable?
Okay, it doesn't have to. Look, we're working with them.
Unfortunately, I cannot tell you what it is.
What are they building, Cristiano?
You will see, and it's going to be exciting.
Once it's coming.
But here, we need to be thinking about this a little bit differently, right?
Wearables are one of the things.
It's going to be more.
So I'll start first by answering your
question. With this whole category of personal AI devices, humans already decided what they're going
to wear a long time ago, right? So I don't think you and I are going to be wearing like a big
helmet. I think we can wear glasses. We can wear jewelry. So humans kind of decided what they're going
to wear. And, you know, that's our job: to make electronics very dense,
with a lot of computing power in small form factors. That comes from our phone DNA. And you can put
electronics in all of this, plus connectivity, connected to an agent. It's going to be very useful. But you could
have something on your desk. You could have, you know, something next to
your bed. You can connect to agents in different devices. And I think what we'll see is
everything will become smart in one way, because the biggest fundamental thing
is this: now the computers understand what we
see, what we say, what we write, and that changes a little bit the human-computer interface,
and with that changes the whole definition of what the computer is.
So wearables are the most logical thing to us, because we're thinking about mobility and things
you're going to carry with you, but you could have things on your desk.
See, the way to think about this is: let's think about
devices that get caught in the transition of technology.
For example, you have a laptop right in front of you, right?
And I can bet you right now, and I see it's Qualcomm-powered,
I can bet you that the laptop has the ability for you to touch the screen.
But you probably don't touch that often.
You use the keyboard.
That's what was designed for the user interface was designed for this.
You touch your phone.
Now, when you pull the phone out of your pocket,
you're going to be touching it, going to apps.
It's not very natural for you to point the phone like this to try to record images.
With glasses, your head moves, the camera moves with you.
Maybe you can talk to the phone.
Maybe the phone is here and you talk to it before you pick it up.
So therefore, there are going to be other things on your desk that you're going to talk to.
So we don't know how those things are going to pan out.
But I think, going back to your question, wearables are logical.
Wearables are going to be things that we'll be wearing and carrying around.
But help us flesh out a little bit what this experience will be like.
Yes.
I mean, obviously, we're not there yet.
No.
And we've had many stops and starts.
Google Glass was an example.
People were wearing computing on their heads a long time ago.
Now it seems like the technology is actually getting there to the point where maybe it will be useful.
Maybe it can make sense of our context.
So, Cristiano, when you think about, all right, I'm going to put chips in glasses and maybe
some other different formats, and people will use them and have X experience. What is that experience?
Yes, let's talk about the experience, and I'm going to break this conversation in two. I'll talk about
the experience, and I'm going to talk about the technology that goes, you know, behind it.
So think about how glasses are performing today. You know, you have, for example, the Meta Ray-Ban
glasses. I think there are going to be other glasses coming within the Google ecosystem this year.
And what are the glasses doing today?
You have cameras, so they see what you see, they can understand the image, can annotate
the image, and you have a microphone, a speaker, and they may or may not have a display.
You know, you have use cases even without the display, like the Meta Ray-Ban glasses.
What does the experience look like?
First of all, for those things to get scale, they have to have very low friction,
and the experience has to be useful.
Otherwise, it's like a gimmick.
You're not going to use it.
So the experience is going to be like this.
I am talking to you, and then let's say I see somebody in the audience,
and I just say, who is this person?
And the glasses will tell me: I don't know, let me check.
I checked on the web, and it's this person, here is this person's name.
I say, oh, okay, yeah. You know, you met her before;
there was an email that was sent to you from this person.
It has to be something like
having your friend with you all the time.
You're walking on the street and I say,
what is this?
This is what it is.
Or even something like: you go into your day
and your agent comes to you and says,
you know, I notice that right now you seem to be free.
Can I talk about your agenda?
There's a conflict we need to resolve.
Those are examples of how the experience is going to be.
It's going to be this
agent that has the ability to understand your context, understand what is around you,
what you see, what you say, and react in real time.
around what you see, what you say, and react in real time.
And what is interesting is we're not there yet, but you see the beginnings of the change.
And I like to do parallels.
So I'm going to go tell you the parallel with the smartphone.
When the smartphone first arrived, like when you saw the iPhone, you saw Android,
maybe, I don't know, I may get this number wrong, but maybe there were like 10 apps.
And you say, okay, those are the 10 new apps.
You couldn't at the time imagine that you're going to have probably hundreds of thousands of apps.
And if you're probably looking at your phone right now, you know, you have a ton of apps.
So your phone got better over time because all of a sudden a new app became available in the app store.
And I think that's how it's going to be with those agents.
Eventually, the agent gets integrated with some other service, and you start to see it.
For example, we have a customer of ours in India that is doing smart glasses.
They integrated with the digital payment system.
So now you can look at a QR code and say, pay this, and it will pay.
And so you go from, translate this, explain this to me, to pay this. You can get a bill
and say, I got this bill, please pay this bill, take it out of my checking account,
notify me when it's done. And you may take a picture and email it to me because I want to keep a copy of it.
That's how you're going to interact with those computers, and that's what the experience
is going to look like.
Is there a world where we get too close to computers? Where, you think about it,
sometimes that free time is really nice. And now the agent's being like, aha, you know, he has a moment,
I'm going to go and help him resolve a conflict. Or I'll help him understand who this person is, as opposed to
him, you know, going up and asking, have we met before?
Does there eventually come a point where humanity and computers come too close together?
That's a good question. And I think, I don't know the answer to the question,
but I think, like everything, it's going to be for you to decide. Look,
you know, some of us, not all of us, sometimes just put the phone down. And
it's going to be like that. You're just going to have to decide when it's time to disconnect.
But I feel it's going to be a little bit different, because now it's going to be easier for us to
work with computers, and the computers are going to be easier to work with us. And I'm going to
use this question that you asked me to tell something funny. I was at CES, and I was having a
conversation with a customer of Qualcomm's about exactly this thing, about the smart
glasses and the camera, and the fact that now the cameras see what you see and can
annotate the image.
And then somebody said, you know, what if sometimes there are things that you want to
forget?
And then the answer was, well, you may, but the AI won't forget.
But, you know, those are going to be interesting things.
Like with technology, how humans are going to use it and how those things are going to be developed,
we're going to see it.
The natural extension of this conversation is as AI becomes more powerful and humanity comes
closer to AI, there's going to be people that are going to want to say, let's just bring us
together.
Elon Musk has talked about how the reason for building Neuralink, his brain-computer interface
company, is, he said, eventually AI is going to get more powerful than humans,
and we better merge with it or it's going to destroy us.
So I want to just ask you, would you merge with AI?
No, but look, in the conversation that we just had,
we were talking very consumer-centric, when you asked about too much technology.
But it's easier to also understand when you move from the consumer to the enterprise,
if you actually think about the fact that you have the ability to learn everything
in real time.
We're actually seeing some use cases right now,
especially in industrial,
where you have somebody who is
an operator of a piece of equipment or of a refinery or anything,
and then all of a sudden you have this agent with you,
and you get to a particular piece of equipment
and you say, how do I operate this?
And it'll say, here's how you're going to operate it.
You do this, you do that.
So the ability for you to have access to knowledge in real time,
I think, is an incredible opportunity
to actually democratize knowledge and learning.
So that's another thing about the connection
between, you know, AI and augmenting human capabilities.
We can say that because we saw it with phones.
Phones are how many nations got access to the internet
and got, you know,
access to digital through the phone,
not through a computer.
And I think it was incredibly empowering for people to be connected, with access to the
internet.
I think maybe that's going to be the same thing with those personal AI devices.
Okay, I'm going to move off this in a second, but I ask if you'd merge with AI and you said
no very quickly.
Yes.
Why the reflexive no?
Because, look, it's different.
I think, you know, it's fun.
I think people like to have those stories about
science fiction. I have a very clear belief: there's humans, there's humanity. AI is our
creation. It's trained on the stuff that we do, if you look at a lot of those models.
So it's really a tool designed to augment, but it won't take away our humanity.
Okay. Very quickly on form factor. You've mentioned glasses a number of times. You didn't mention
earbuds. And, you know, when you think about the way that this competition is shaping up,
you have different companies making different bets on different form factors, especially when you
look at the tech giants, big technology, as we like to cover here on the big technology
podcast, you have meta making a big bet on AI powered glasses. Google, as you mentioned, I think
we're going to see a very big bet from them. Google Glass part two, although maybe they'll have a new name.
Apple, it might be 2027 until we see a pair of glasses from them.
Maybe their big bet is going to be the AirPods and how AI is already delivered in the
AirPods with things like Translate. And, I mean, Siri is inside there, but it still has some work
to do.
And maybe they'll do it with their Google partnership.
Why do you think glasses over earbuds?
Look, I won't say one over the other.
We have the benefit, I think, of working with
the majority of the companies that are actually building personal AI devices,
so we have a pretty broad visibility.
I'll give an example: there are some companies right now
designing an earbud with a camera.
An earbud with a camera?
With a camera.
Because if you put it in your ear and you have a camera,
it can see in front of you,
so it can provide some context, in addition
to having just a speaker and a microphone.
I think it goes back to how we started this conversation.
What are the things that humans are going to wear,
and wear most of the time?
Glasses. I am a believer that glasses are the most natural,
maybe because I've worn glasses since I was 13.
So, you know, I'm used to them,
but when you turn your head,
you know, your camera goes with you,
it's close to your eyes.
You should think about this.
I should have thought about that
when you asked the question about wearables
because that's the most simple way
to answer that question.
If the AI understands what we see,
what we say, well, here,
it's going to be closer to our senses,
and glasses capture everything.
They're closer to your mouth,
closer to your ear.
But an earbud, it's the same thing.
It's just missing the vision.
And that's why some people are putting a camera
on an earbud.
But if you just have an earbud connected to an IP address,
you can connect to an agent,
and you can have a conversation with the agent.
What about a pin?
The same thing.
It's another way to put a camera on it.
There's pendants, the jewelry.
So, we'll see,
but I think you're going to see people experimenting
with the form factors.
I think glasses are likely going to be the primary way
that those devices are going to be built.
So let's say glasses are the winner.
Do you think that style matters?
Let me give you a binary here.
I have the more stylish glasses with the worse assistant,
or the less stylish glasses with an amazing assistant.
Which wins?
This is a great question.
This is a great question because we're going to see another thing
happening in the industry,
which is when you start thinking about wearables,
then you're going to have the mix of fashion and technology.
And I actually think, I'm going to make a prediction here.
I don't want to be offensive to any other company,
but I think that's where the horizontal model is going to win versus the vertical model.
And the reason I'm saying that is because it's very unlikely that everybody on Earth
is going to use the same exact glasses.
People want different form factors.
They want different colors.
It's different, especially with things that you wear.
As a result, I think you're going to have
different brands.
It will be a little bit of an interesting dynamic,
because is that a Ray-Ban
you're wearing, or is it a Meta?
If it is a Ray-Ban made by a consumer electronics company,
is it the consumer electronics brand or is it Ray-Ban?
We'll see.
But I think you're going to have the combination of fashion
and technology, and there are going to be choices:
different brands for different people from different age groups, etc.
So I think that we're going to see a lot of diversity.
Very unlike the phone space, where, you know, most people carry a similar phone.
I think that's going to be different.
I'm going to answer my own question.
I'll take the better assistant and the ugly glasses over the nice glasses and the bad assistant.
Yeah.
The best thing is maybe the most –
You could get both.
It would be nice.
Maybe the most successful glasses are going to pair with the best assistant.
Eventually, you would think we get there.
Yeah, I think so.
Handicap the AI device race for us.
We have many companies that are running at this.
We have Meta, which has been making this multi-year metaverse bet, which has really transformed into the smart glasses bet.
We have Google, which, all indications are, I mean, if you look at their recent Thinking Game documentary, they're just like
pointing their phone at things and saying, what is this?
It's like, you need glasses.
You have OpenAI.
You're working with OpenAI on this project.
Family of devices that are going to be in a bunch of different places.
And Apple, obviously, has to be considered a power player here as well.
Who wins?
Look, I'll answer this question by going into the beginning of the Internet, right?
So Orkut wasn't the social media that won;
it ended up being Facebook and then later Instagram.
I think MapQuest wasn't the main map, you know; eventually it was Google Maps.
So it's early to call.
I think you see all those companies.
I think they have big ecosystems.
They're investing in their ecosystems.
We'll see what happens.
However, I'm going to try to give you a little bit of an answer.
I have this view, and this is maybe
a longer conversation than we'll have time for.
But I think at the end of the day,
the winner of the edge is going to be
the winner of the AI race.
And the reason I say that is because
especially for
everything that is personal, the edge
has real context.
You know, you can all...
The edge meaning your phone, the devices
that you use, as opposed to...
Where the humans are.
The humans don't knock on the data
center and say, give me some AI.
They're experiencing it on some other
device over there. And what happens is, if you look at how models got trained, models got
trained on the information available on the internet. But when you fast forward to
physical AI, understanding our world, understanding your context, understanding
you, that's going to be a lot more useful for you than a generic model that got trained on
data available on the internet. So whoever has access to that data is in a very, very strong
position. So companies that have, you know, a presence in all of those different devices already,
I think they have an advantage. I will not bet against them. All right, but then let me take this a level
deeper with you, because we have seen those companies. I'll just name them: Amazon, Apple,
Google, Meta. They've all tried to build this contextually aware personal assistant.
We've heard presentations about Alexa Plus and Apple Intelligence and Meta, all the different
buddies you can have in the Meta properties, Google obviously with Gemini. But even though they
have all this data, we still don't really have an assistant that's capable of doing what they've
promised. I mean, Apple, you know, might be the most notable in promising this contextually
aware assistant that will help you figure out when your flight is and tell you, all right, time to
get to the airport. They haven't done that yet. What is holding these companies back? Is it a
hardware problem,
an AI problem? Where is the bottleneck?
I think it's a combination of things, but I am more optimistic, I think, than you describe.
I think we're starting to see the beginning of some real experiences.
But you have to get the maturity. First of all, the AI models need to get more mature. They need to get more capable.
You had a lot of changes, even within AI: you started to see mixture of experts, you started to see chain-of-thought reasoning.
So you have different things specialized in specific tasks.
I think we're just at the beginning of physical AI, which is really important for you to have context.
So I think this is going to happen. The other part of it is compute.
You need to have a lot of high-performance compute, and this is where we come into the picture.
Because you cannot do everything
in the cloud, also because of latency.
It is not going to be useful, if I go back to when you asked me to describe the experience.
If you and I are walking together in the street and I say, hey, who's this person?
And you say, this is so-and-so.
The agent can't be saying, hold on, let me think.
Let's keep walking.
I'm thinking about it.
And then the person went by; you missed the point.
And I think there are certain things you're going to have to do on the device.
It needs to be fast.
Like voice-to-text: all companies right now are starting to do it locally, because you don't tolerate any delay.
And we're going to get there.
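The on-device versus cloud trade-off described here can be sketched as a simple latency-budget router. Everything below is a hypothetical illustration: the backend names, the millisecond figures, and the `pick_backend` helper are made-up assumptions for the sake of the sketch, not Qualcomm's actual numbers or software.

```python
# Hypothetical sketch: route an AI request on-device or to the cloud based on
# a latency budget. All numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    round_trip_ms: float  # network + queueing latency (zero for on-device)
    compute_ms: float     # time to run the model itself

    def total_latency_ms(self) -> float:
        return self.round_trip_ms + self.compute_ms

ON_DEVICE = Backend("on-device NPU", round_trip_ms=0.0, compute_ms=80.0)
CLOUD = Backend("cloud GPU", round_trip_ms=250.0, compute_ms=40.0)

def pick_backend(latency_budget_ms: float) -> Backend:
    """Use the cloud (bigger model) when latency allows; otherwise stay local."""
    if CLOUD.total_latency_ms() <= latency_budget_ms:
        return CLOUD
    return ON_DEVICE

# Real-time voice-to-text tolerates almost no delay, so it stays on the device.
print(pick_backend(100.0).name)   # on-device NPU
# A background "summarize my inbox" task can afford the round trip.
print(pick_backend(500.0).name)   # cloud GPU
```

This is the shape of the decision, not a real scheduler; a production system would also weigh model quality, battery, privacy, and connectivity.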
Yeah, we were just talking earlier in the room here about potentially being on the ski hill
and having the glasses point you down the hill that suits your skill set.
But if you have to wait like two minutes, you know, you might be a
bunny-hill skier sent down the black diamond. So you really want it to be able to work fast.
And you're going to break your glasses. Yeah, your glasses will be the first casualty. All right. We're here with
Cristiano Amon, the CEO of Qualcomm, here at the Qualcomm space at Davos. We're going to be doing
four conversations through the week here, and thrilled to be here. On the other side of this break,
we're going to talk about AI PCs, the AI data center, the constraints on the AI buildout, and robotics
if we have time. We'll be back right after this.
Here's the problem. Your data is exposed everywhere. Personal data is scattered across hundreds of websites, often without your consent. This means data brokers buy and sell your information: address, phone number, email, social security number, political views. And that exposure leads to real risks, including identity theft, scams, stalking, harassment, discrimination, and higher insurance rates. Incogni tracks down and removes your personal data from data brokers, directories, people search sites, and
commercial databases.
Here's how it works.
You create your account and share minimal information needed to locate your profiles.
You then authorize Incogni to contact data brokers on your behalf,
and then Incogni removes your data, both automatically from hundreds of brokers and via
custom removals.
There's also a 30-day money-back guarantee.
Take back your personal data with Incogni.
Go to incogni.com slash big tech pod and use code big tech pod at checkout.
Our code will get you 60% off annual plans.
Go check it out.
At Medcan, we know that life's greatest moments are built on a foundation of good health.
From the big milestones to the quiet wins.
That's why our annual health assessment offers a physician-led, full-body checkup
that provides a clear picture of your health today and may uncover early signs of conditions like heart disease and cancer.
The healthier you means more moments to cherish.
Take control of your well-being and book an assessment today.
Medcan. Live well for life.
Visit Medcan.com slash moments to get started.
And we're back here on Big Technology Podcast.
Special edition here at Davos.
We're here broadcasting, talking together on a Monday going live across our channels on Tuesday.
And let's keep going here about how AI will transform devices.
The AI PC is a subject that has been interesting to me.
A lot of noise about how, if you have AI baked
into your computer, then you'll be able to be more productive,
and it can really transform the way that you work.
That's the marketing.
In reality, that rollout, that promise, has been slow to materialize.
This is from the head of product at Dell, speaking to The Verge.
He says, what we've learned over the course of the year,
especially from a consumer's perspective,
is they're not buying based on AI.
And in fact, I think AI probably confuses them more than it helps them understand a specific outcome.
Obviously, Qualcomm has a stake in the success of AI PCs.
What is happening today and where is it going?
It's a great topic of conversation.
Look, first of all, as we entered the PC space, I would argue that a lot of what's driving the sale of Snapdragon-powered PCs is the fact that we deliver
multi-day battery life and a lot of performance in a very exciting thin-and-light form factor,
right? So we just built a better PC. On the consumer side, I would agree with that,
that you don't see yet a lot of agents. And, you know, I know people want to see this right
away. I wish we were seeing it right away. I don't necessarily disagree with that on the consumer
front, because Microsoft just launched an agent for
Windows. It just launched. So I think people are going to use it more and more as
you start to rely on agents. And I think you're going to see things that are going to be
running on your device. But I think that's not the story for the AI PC. The story is a little bit
different. What we've seen happening with the AI PC, given the fact that we actually have the
ability to run significant high-performance inference on a laptop,
is something else.
What we're seeing is right now, you have many, many, many applications and services on your PC
that are doing a lot of cloud computation.
And if you can rely on the computing that is available on the PC, not only is it going to be
faster, but the economics are completely different.
I'll give an example.
If you're a SaaS company, and all the SaaS companies right now
are being threatened by AI.
If you're a SaaS company and you say,
I'm going to have an agent within my application,
and every time I have this data, I'm going to run it,
and you're paying for compute in the cloud,
your economics change dramatically
if you actually use the compute on the device.
I'll give you like a practical example.
There's many things now.
You just have a button.
You see that on Microsoft Copilot.
You see that across a number of different applications.
Summarize this.
Like you have a bunch of data.
You have several pages of a document.
Summarize this.
You can go all the way to the cloud and pay the cost of cloud compute to run the model.
Or you can run that summarization model on your text on the computer.
That's free, because it's compute that you already have.
So we're starting to see a lot of interest from enterprises, or even applications,
to start running a portion of the application
on the AI engine on the device, and that's starting right now.
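The economics Amon is describing can be sketched as a back-of-the-envelope calculation. Every number in this snippet, the per-token price, the request volume, and the document size, is an illustrative assumption rather than any vendor's real figure; the point is only that cloud cost scales with usage while on-device compute is hardware the customer already paid for.

```python
# Back-of-the-envelope sketch of the "summarize this" button economics:
# served from a cloud API vs. run on the device's own NPU.
# All prices and volumes below are hypothetical, chosen for illustration.

def cloud_cost(requests: int, tokens_per_request: int,
               price_per_million_tokens: float) -> float:
    """Marginal cost of serving summarization requests from a cloud API."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million_tokens

def on_device_cost(requests: int) -> float:
    """Marginal cost when the model runs on the user's own hardware.

    The compute was already bought by the customer, so the per-request
    cost to the SaaS vendor is effectively zero."""
    return 0.0

# Hypothetical scenario: 10M summarize clicks/day on ~3,000-token documents,
# at an assumed $0.50 per million tokens.
daily_cloud = cloud_cost(10_000_000, 3_000, 0.50)
daily_edge = on_device_cost(10_000_000)
print(f"cloud: ${daily_cloud:,.2f}/day, on-device: ${daily_edge:,.2f}/day")
```

Under these made-up numbers the cloud path costs thousands of dollars a day while the on-device path adds nothing to the vendor's bill, which is the "completely different economics" in the answer above.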
So the reason to buy AI PC hardware, as opposed to, let's say, letting Claude Code
take over your computer, is mostly cost?
I think you see, well, I just gave you one example.
There's more.
Like gaming, for example.
Okay.
A lot of the gaming engines right now are thinking about using AI on the PC.
For example, in an RPG game, you can have a dialogue with a character,
like with a model, you have a dialogue,
and the gameplay changes.
So there's an example of cost,
there's an example of a new use case, there's an example of an agent.
I think the answer to your question is, first of all, why should you buy a Snapdragon-powered PC?
Because by definition, even if you're not using AI,
it's going to be faster, with multi-day battery life,
and it's going to feel like your phone.
You can use your laptop all day;
you go places and don't take the charger with you.
The second part of it,
why should you buy an AI PC as a consumer?
As a consumer, I think over time,
you're going to see more and more apps having an AI front end
and they're going to leverage the capabilities on the PC,
but it's going to be transparent to you.
On the enterprise, I think the economics are going to change
because a lot of the ISVs and SaaS applications
are going to require the onboard computing,
and I think that's going to make a difference.
Very interesting.
So that'll be a requirement from software companies.
So Qualcomm has also gotten into the data center world,
and you're building data centers.
So obviously you have the chips in the devices like we talked about,
but now you're working on building data centers for AI inference.
So let's talk a little bit about, well, actually,
why don't you first give us a little bit about why this is a move that Qualcomm is making?
Yes. Look, we always believed in what's going to happen with AI in the data center.
You started to see all this build-out for training.
But eventually, and now it's well understood,
inference is going to take over training.
When we started developing our solutions, that's what we thought.
Because just think about that for a second.
If you're a company spending billions of dollars building a data center for training,
you expect to get a return on that investment.
So when you start putting AI into production, you're doing inference.
And we always believed that when you go to inference, there's going to be a lot of competition
between the different AI players.
So then I think the total cost of ownership matters, how much power you consume matters,
and the architecture matters.
So the first answer to your question is, we realized that when data centers start to transition
to inference, we have an opportunity to leverage our assets to build a very power-efficient
inference solution for the data center, scaling the technology that we developed for the edge.
Because you're power-efficient in the phone, and power is such a bottleneck in AI,
you can take that advantage and put it in a data center. That's the logic.
If you just look at today, you have this very aggressive ramp of growth of AI,
and you don't have the same ramp on energy. You know there's a gap between the available energy and AI.
So I think energy is going to be a scarce resource.
Also, to operate an inference data center, energy is one of the biggest, you know, items in operating expenses.
And then I think people wanted to have a different architecture, which is the second part of my answer.
The second part of my answer is we believe that the data center is going through another process of disaggregation.
And let me explain what I mean by that.
One of the key things that happen in the mobile industry,
if you look at for a smartphone today,
your smartphone, it's a very difficult engineering challenge
from a semiconductor standpoint,
because I have to pack a lot of computing in your smartphone.
It has to fit in your pocket.
It cannot get hot.
You're going to touch your face.
You cannot get hot.
I cannot have fans.
I cannot do liquid cooling on the smartphone.
and your battery has to last all day.
Otherwise, it's not useful.
It's worthless.
Yes.
So in order to do that,
we had to perfect the disaggregation
of the compute,
for lack of a better way to describe it.
I'll give an example.
In the PC, everything was CPU-centric.
So if you're going to do a decode of music
or a decode of a video,
you go and load up the CPU.
You can't do that on the phone.
It burns too much power.
So you create dedicated hardware just for music decode, dedicated hardware just for JPEG encode
when you take a picture, dedicated hardware for you to do video decode. And everything is disaggregated.
And you do that because you want to maximize the use of the available energy in the
battery. And this all exists in the phone. It's what we call heterogeneous compute.
If you look at a Snapdragon today, it has several engines for different
things. We don't run everything on a CPU, or for that matter on the GPU. Data centers are going
into that, and we're starting to see disaggregation. There's an architecture they use for
prefill. There's an architecture they use for decode. So we're building what we believe is
post-GPU: when you start to do inference and you need dedicated engines, we'll build that.
I actually believe that the Nvidia acquisition of Groq validates that view: you use
different engines for different things.
And I think that's what we're doing.
And I think that's our focus on data center.
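The prefill/decode disaggregation mentioned above can be sketched as a toy request router. In LLM serving, prefill (processing the whole prompt at once) is compute-bound while decode (generating one token at a time) is memory-bandwidth-bound, so disaggregated serving sends the two phases to differently specialized hardware. The engine names and routing policy below are illustrative, not Qualcomm's actual architecture.

```python
# Toy sketch of disaggregated LLM inference serving: the prefill phase
# and the decode phase of each request go to dedicated engine pools.
# Engine labels and the routing policy are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Engine:
    name: str
    handled: list = field(default_factory=list)

    def run(self, phase: str, tokens: int) -> None:
        # Record each unit of work this engine processed.
        self.handled.append((phase, tokens))

class DisaggregatedServer:
    """Routes each phase of a request to a phase-specialized engine."""

    def __init__(self) -> None:
        self.prefill_engine = Engine("compute-optimized")   # whole prompt in parallel
        self.decode_engine = Engine("bandwidth-optimized")  # token-by-token generation

    def serve(self, prompt_tokens: int, output_tokens: int) -> None:
        # Phase 1: prefill ingests the full prompt as one batch.
        self.prefill_engine.run("prefill", prompt_tokens)
        # Phase 2: decode generates output tokens one at a time.
        for _ in range(output_tokens):
            self.decode_engine.run("decode", 1)

server = DisaggregatedServer()
server.serve(prompt_tokens=2048, output_tokens=4)
print(server.prefill_engine.handled)       # one large prefill batch
print(len(server.decode_engine.handled))   # one decode step per output token
```

The asymmetry the toy makes visible, one big parallel batch versus many tiny sequential steps, is why a single general-purpose GPU is not obviously the right engine for both phases.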
Okay, let's talk about robotics.
Are you buying the hype on humanoid robots?
I will, like this whole conversation with you,
I've been doing comparisons,
and I'm going to do a comparison with automotive
to kind of outline our strategy.
But let me give you the answer first.
I buy the opportunity of the humanoid robot.
However, the opportunity is going to manifest itself differently, and some of those things are going to take time.
For example, to get straight to your question, a robot that is going to be with you in your house and do everything you ask it to do, it's going to take time to train that.
It's very difficult.
Right now it's teleoperated.
It's difficult.
Every house is not going to be the same.
Every task is not going to be the same.
It's going to be a lot of training required.
Having said that, a robot that
can do certain tasks and do that task over and over,
that's actually not a hard problem to solve.
So with that, I'm going to give you my comparison, a metaphor.
When we started building platforms for automotive,
and we're very proud of our automotive business right now,
we also got into a stack for autonomous driving.
When you think about autonomous driving,
when you think about a robot taxi, like a level five,
no steering wheel, you go to the back seat and take a nap. That requires a lot of training,
because you can get from 0 to 95%, but to get to 99.999%, to the corner cases, you have to do a lot
of training. However, if you do assisted driving, with the human still responsible to pick up
the steering wheel if something happens, then you have the ability to put this in every car, from level
2 to level 2+, level 2+ to level 3, and then all the way to level 4. So that's a massive
market opportunity. And that's what we're doing right now. You can bring some form of assisted
driving to every single model. I feel the same way about robotics. If you do a humanoid robot
or a humanoid arm, anything that can leverage the world that's been designed for us,
and you train the robot on a particular task, I think we're very good.
It's already happening.
And I believe the opportunity from a business standpoint is massive.
That's why we're really focused on industrial robots,
because you can train a robot, for example:
your task is to go into the supermarket at night
and put the stuff back on the shelf.
That's a self-contained problem.
You're not training a robot to do everything.
I think the robot that will do everything
is going to take a little bit of time until we get there.
There was a half marathon in China of humanoid robots, and the highlights looked really funny.
Robots falling on their face at the starting line, and robots dragging their whole team,
holding onto ropes, flinging them into the side of the course with the power the robot had
as it sort of crashed out.
But some of those robots finished pretty fast.
I won't say they beat my half marathon time, which eventually they will, but they were respectable
in their finish. And that included time for battery changes. And the argument has been that
China is so close to the production process. Think about their cars, right? They have this electric car
boom because they've been building things with batteries and electronics for so long.
Demis Hassabis, the CEO of Google DeepMind, recently said that China is only a couple months behind the state-of-the-art Western models.
But it seems like they're ahead on robotics.
Do you agree with that argument?
Look, there are many things about China that are remarkable, I think, in what they're able to do.
And everybody talks about China speed.
We know that from having a number of different partners in
China using our technology, not only in cars but also phones, and now robots and industrial. And I think
there is some merit in the argument that when you're closer to a very large industrial base, you can
prototype fast, you can build things fast, you can fail fast, and I think those things are
helpful in developing the technology. But the technology that's going to be required for robotics
is very, very broad, right?
You go from advanced semiconductors,
and I think that's one area where the Chinese companies partner
with companies like Qualcomm and others,
to a lot of ecosystem, I think,
that is going to be important for training, a lot of software.
But yes, this is fascinating.
Everybody is on a race, and things are moving fast.
Lastly, I want to talk about industrial AI, which I think gets probably the least ink in the AI conversation, but is some of the most interesting stuff that's happening today.
I mean, even here at the space, we have a robot that we're looking at that was built in just a couple of weeks with a $50 Qualcomm chip and moving pretty well.
Talk a little bit about the applications of AI in the industrial space.
And maybe why you think people aren't paying so much attention to it.
It's just it's not sexy enough for like the headlines.
You know, I'll say there's probably so much attention on the data center right now
that it takes all of the air, I think, in the conversation.
It probably even resonated with us:
just the fact that we said we're building something for the data center got a lot of attention.
But the reality is the industrial opportunity for AI is massive.
It's massive because you can put AI processing on pretty much everything.
And you find that every single industry, every single vertical, has a massive number of use cases.
It's true in retail.
It's true in warehousing.
It's true in healthcare.
It's true in manufacturing.
In energy.
And we're actually seeing incredible demand,
especially because you have the ability to process in real time
things that come from physical AI: motors, machines,
you know, all of those things you can put sensors on.
But just to give an example, so we don't get too fancy with different machines,
just in computer vision alone: you can put a camera on a manufacturing line
and train the model just to check what's coming down the conveyor belt against a template,
whether it's what you expect, and you do quality control with, you know, just a camera.
You point the camera, for example, at a shelf of a supermarket,
and you now have the ability to check inventory in real time.
You can actually sell online what's in the store with real-time managed inventory.
You can put the same camera in a smart city,
and you're reading license plates.
And I think it's a massive, massive opportunity.
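The conveyor-belt quality check described above can be sketched as a template comparison. A production system would run a trained vision model on the device's AI engine; the flattened grayscale "frames" and the thresholds here are hypothetical stand-ins showing only the shape of the logic.

```python
# Minimal sketch of camera-based quality control on a manufacturing line:
# compare each item image against a known-good template and flag anything
# that deviates too much. Frames are flattened grayscale pixel lists;
# both tolerance values are illustrative assumptions.

def fraction_different(frame, template, tolerance=10):
    """Fraction of pixels deviating from the template by more than tolerance."""
    assert len(frame) == len(template)
    bad = sum(1 for a, b in zip(frame, template) if abs(a - b) > tolerance)
    return bad / len(template)

def passes_qc(frame, template, max_defect_fraction=0.05):
    """True if the item on the belt matches the golden template closely enough."""
    return fraction_different(frame, template) <= max_defect_fraction

template = [128] * 100               # golden reference image (flattened grayscale)
good_item = [130] * 100              # within tolerance everywhere
dented_item = [130] * 90 + [5] * 10  # 10% of pixels badly off

print(passes_qc(good_item, template))    # within tolerance
print(passes_qc(dented_item, template))  # flagged as defective
```

The same compare-against-expectation pattern, with a learned model in place of the pixel diff, also covers the shelf-inventory and license-plate cases in the answer above.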
Some of the many meetings I'm actually having
here at Davos
are with industrial companies.
They're super interested in industrial AI.
And I think that's actually happening right now.
Okay, five minutes left, two questions for you.
One of the reasons why I'm so happy to be speaking with you
is because, in a sense, you can see the future, right?
because when something is going to be mass produced,
you're the first call that's being made from,
let's say, someone building an AI wearable.
You're working closely with meta,
so you have a pretty good understanding of the demand
that they're anticipating because they need your chips
to be able to build things out.
Thinking about the AI buildout
and maybe also the AI device buildout
and looking into the crystal ball that you have
of what the future looks like,
are things going to continue apace?
Can they possibly keep moving as fast as they have been?
Look, I think that question is really directed
at what's happening on the data center,
because on the device side, we're just at the beginning.
I think we're seeing a big trajectory;
for example, glasses continue to increase quarter over quarter.
But I think the broader question is
the speed on the data center.
And here's my answer.
If we go back to the year 2000, when the dot-com crash happened, right?
You had that correction on the dot-com.
Go back to the year 2000
and think about what we thought back then the internet would be.
I will tell you that today, 25 years later, 26 years now,
it's actually way bigger than people thought it would be.
So whatever they thought in 2000,
the internet is way bigger right now.
And you can still buy pet food on it.
Yes.
However, it didn't all happen in 2000.
It happened over time.
So I think what's going to happen with AI right now: in the long run,
it's going to be bigger than people think.
It's probably under-hyped for the long run.
Now, how fast is this going to get deployed, and how pervasive? We'll see.
Could we continue to build at this pace?
It's possible.
Could it slow down? That's also possible.
Well, we're excited about it.
And I think, and this is more for Qualcomm:
finally, people just woke up to the fact that the edge opportunity is massive.
And I think all of this air that was all about the data center,
some of it is starting to go toward paying attention to the edge right now.
And I think we're just at the beginning of that curve.
Okay. Finally, I got to ask you a Davos question.
Go ahead.
Here at Davos, we have the slopes behind us. This is real, for those wondering.
You know, the corporations have been through this really interesting journey.
There's been moments where they've been into what's called stakeholder capitalism,
where they think about the group of people beyond the shareholder.
And I think we're kind of in a moment now where there's more of a naked
pursuit of the bottom line. I'm not speaking about Qualcomm. I'm just saying broadly, it seems like
corporations are much more, they've sort of put away this illusion that they care about much else
than the bottom line. And I wonder if, you know, we're here right outside the World
Economic Forum; there's 48 conversations that will happen in this event that will be about AI.
People will be talking about how AI will be able to cure cancer or get our best chance at
curing cancer and empower the disempowered.
And so I'm curious, like, from your perspective,
do you think AI is going to be the new altruism
or the new corporate altruism?
And is that a good or a bad thing?
It's a complicated question.
Look, I think it's a technology; it's a tool.
I think it's going to do what computers did,
and it will continue to do that.
I think it will help accelerate many things.
It will help accelerate, for example, drug discovery.
It will help many things; it will increase productivity.
As I said before, it's probably going to democratize education.
It's going to change how we think about education.
This is something to keep changing.
It's going to be a tool.
I don't think it's going to be like this change-the-whole-society
kind of thing. I'll give you a very personal answer, and this is going to be
terrible because it's going to show my age. But when I got out of college, right, it was just the
beginning of the internet. Still, I remember going to my first job, and there was like a fax machine,
and you had to go to the fax machine and get the faxes that came in overnight, and
put the other faxes in there, and you had somebody still typing up, you know, the
intercompany memos.
Like, we don't talk about this anymore.
I think when the internet arrived and email arrived, it was a revolution.
And I think AI is going to be that kind of revolution, almost like computers.
But it's going to be us doing things with a computer, just more.
That's how I feel about it.
All right.
Well, it's been amazing following this space, because every time I think I'm caught up,
there's something new. And I think that you're going to be right at the center of it with all the devices that are going to come out.
And, you know, maybe when OpenAI does release this family of devices, we can talk again about the state of the competition.
By the way, we have a great live audience with us.
Guys, make some noise so people can hear that.
To Cristiano and the Qualcomm team, thank you for having me here at your space at Davos.
And very excited to be engaging in a number of really great conversations.
about the state of AI.
I'm sure that our audience, by the end of it,
will have a really good understanding
of where things are going,
and this was a great way to kick it off.
So, Cristiano, thank you so much for coming on the show.
No, thank you. Thank you.
I really had fun having this conversation with you.
Thank you.
You too. All right, everybody.
Thanks for listening, and we'll see you next time
on Big Technology Podcast.
