Big Technology Podcast - NVIDIA’s New Roadmap, State of OpenAI, Apple Shuffles Siri Team
Episode Date: March 21, 2025. Amir Efrati is the co-executive editor of The Information. He's back for our weekly discussion of the latest tech news. We cover: 1) Is there a market for NVIDIA's GPU chips outside of the cloud providers? 2) Why NVIDIA is making the case that we're early in the AI buildout 3) Whether NVIDIA's focus on Robotics is real or marketing 4) NVIDIA's chip roadmap 5) Amazon's cheaply priced AI chips 6) CoreWeave's revenue issues - what they mean 7) The state of OpenAI 9) OpenAI's talent retention issues 10) Will OpenAI make apps like Doordash obsolete? 11) Changes atop the Siri org at Apple. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Let's talk about what's on NVIDIA's roadmap after its GTC Bonanza and whether the company's right that the AI buildout is just beginning.
Plus, we'll dive into the state of OpenAI and a reshuffling at the top of Apple's Siri project.
That's coming up with The Information's co-executive editor Amir Efrati right after this.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Boy, are you in for a treat today,
because Amir Efrati, the co-executive editor of The Information
and the author on many stories that we read on the Friday show
is joining us to break down the news on our Friday show.
Ranjan Roy is out today.
He'll be back next week.
Meanwhile, we have a great show for you.
We're going to talk about Nvidia's new AI roadmap,
the state of OpenAI, based off of a lot of Amir's reporting.
And then also, one more week, we'll talk about a shuffling atop the Siri organization
at Apple and whether the company will ever
right the ship there. So great to have you on the show, Amir. Thanks, Alex. Great to be here.
Great to have you. I've been, I think I've been in your inbox for how long a year may be trying
to get you on. So I'm really happy to have you here. Like I said, we recite your stories all the time.
And I'm going to ask you so many OpenAI questions. Hopefully we'll get to the last story.
Cool. I think I was in your inbox for a while, too, writing back. But, yeah, very excited to get
going. I will take that. It is true. All right. So let's talk a little bit about the GTC
news from this week. We're going to get into the actual Nvidia roadmap in a minute, but I just
want to talk about the broader analysis here, because the main message from Jensen Huang, of course,
he was going to talk about the new chips they have, but basically his message was, we're just
beginning to see the buildout around AI and stay tuned because Nvidia has a lot to offer as we move
into reasoning and as we get bigger and broader adoption of AI. So I'm going to quickly read
the take from Dylan Patel at semi-analysis, and Dylan, by the way, is going to be on the show
in the coming week, so stay tuned for that. His take is this. AI model progress has accelerated
tremendously, and in the last six months, models have improved more than in the previous six months.
The trend will continue because three scaling laws are stacked together and working in
tandem: pre-training scaling, post-training scaling, and inference-time scaling. And GTC this year
is all about addressing the new scaling paradigms. Basically, he says that we're
just seeing bigger and better models.
And last year's mantra was the more you buy, the more you save.
And this year's slogan is, the more you save, the more you buy.
I think what Dylan is saying there is basically that we are all now entering an era
where AI chips are much more efficient.
And if you save a lot, you'll be able to do much more.
And then therefore, you will be able to buy more Nvidia chips because adoption is going
to be through the roof.
Amir, I'm curious how you react to that.
Well, I think when you talk about Nvidia, what you're really talking
about is a handful of major cloud providers. And within that, and sometimes separate from
that, big companies, big users, I should say, of Nvidia's chips, meaning OpenAI,
you know, Anthropic, xAI, and Meta. And that's mainly it. And so the question then becomes
who else is going to want these chips?
So we've got the Blackwell series that's just starting to roll out.
And it's a big improvement on the Hopper series that was in such, such high demand two years ago.
You could not get enough two years ago.
There was a huge shortage as everyone clamored for those.
And we're in somewhat of a similar situation where a lot of very big companies want these chips.
the companies that are developing AI, but not that many others are similarly going to want it.
And it's interesting because a couple of weeks ago in the last Nvidia earnings call,
Jensen talked about this shift over time.
He predicted that a lot of businesses, just regular businesses, not necessarily technology companies
or software companies, are going to be adopting Nvidia's AI chips over time.
And that's his sort of road map.
And he said that that kind of follows other, you know, technology waves that came before.
And that may very well be true.
I think the question is how much time before we get to something like that.
And, you know, at the moment, that that's pretty unclear.
And so the, you know, the cloud providers that we talked to at Nvidia's GTC event were basically saying that, yeah,
there is not much interest in Blackwell chips on the part of companies that
aren't really big major AI developers themselves.
And, you know, that just raises some questions.
I don't think that will necessarily change Nvidia's sales.
They're, like, sold out and they're going to be sold out because, like I said,
you've got this mad rush and mad competition on, like, the frontier model side.
But, yeah, I don't think it's as straightforward as Dylan is making it out to be in the sense
that, you know, there's still a lot of technical challenges that the major AI labs are trying
to figure out. So I think there are definitely unanswered questions about how to make models
that, you know, generalize and that are good at many different things at once. And so, yeah,
I don't think we fully know the future, but Nvidia is going to be fine for a while. That's for sure.
So can you give an example of a company that is not among the major cloud companies that would
potentially buy a bunch of Blackwell chips, I guess, to set up their own, would it be to set up
their own data centers for inference? Or what exactly would this group of companies look like?
And this is why the earnings call was so interesting. I think he brought up an example of an
automaker that would want to have AI chips on premises. And again, we're just not there. So there
are definitely automakers that use Hopper chips for various reasons, as many big businesses
have been trying out either building their own internal apps or trying to launch AI features
in their customer-facing apps. But yeah, I think they're just still sorting through that
and that takes time. It especially takes time if you're talking about reducing hallucinations
and just protecting people's data and making sure things don't leak and all that.
That just takes corporate America time.
And so I don't think anyone is really clamoring for these Blackwell chips
except for the really big AI developers themselves.
And to some extent, Nvidia itself, that's what's so interesting.
And we keep learning more about how Nvidia itself is actually one of the biggest customers
of its own chips, both for internal purposes
and also because they have their own cloud service that they may or may not want to supercharge.
But they certainly spend a ton of money, billions of dollars, renting back their own chips,
which is something people don't talk about a lot.
Yeah.
And I guess we'll cover a bit when we get to CoreWeave.
But in the meantime, I want to continue to advance Dylan's argument because I think this is the argument
that Nvidia and the rest of the AI industry would make.
It's basically that the chips are getting much more powerful and much
more efficient, and the models are getting more efficient. And as you get more efficient,
you can unlock some new novel applications beyond maybe just putting the cars, putting chips in
cars. This is what Dylan writes. The inference efficiencies delivered in Nvidia's roadmaps on the
hardware and software side unlock reasoning and agents and the cost-effective deployment of models
and other transformational enterprise applications, allowing widespread proliferation and deployment,
a classic example of Jevons paradox, or, as Jensen would say it, the more you buy, the more you make.
So do you buy this?
Maybe in the long run, there could be a period of uncertainty in between that.
I don't know that it's a straight line.
And I guess another way to look at this is: how many companies are generating revenue from generative AI?
Not that many.
Definitely many companies are using it to cut some costs, and that's
great or automate customer service and so on.
But in terms of companies that are actually selling generative AI, it's really very few.
You could probably count it on two hands, who actually has a significant amount of like
application revenue or model revenue.
So I think that just raises an interesting question
about the trajectory of adoption and what, you know, most companies can or cannot do
with models, even though they're getting more efficient. Now, these things can change quickly
and, you know, the Deep Seek revolution is still just beginning. So there's a lot of reasons
to be excited. Like, Deep Seek is still proliferating. It really like opened up a lot of people's
eyes, to your point and to Dylan's point. So we'll see in the coming
months like what it enables because people are still testing and experimenting and playing around
with it. But yeah, I think there's still unanswered questions about the exact trajectory
that we're on beyond, you know, OpenAI, Anthropic, xAI, Meta, you know, and these big kind
of spenders on the chips and these big companies that do have the ability to make revenue
from providing this technology. You know, Amir, this is clarifying something for me because
Jensen did spend a good chunk of the keynote trying to make the case that the AI
buildout is just beginning. This is from analyst Gene Munster. About a third of the two hour
plus keynote, Jensen spent making the case that we're still early, early in the AI buildout.
Investors are still skeptical given that Nvidia trades at 20 times earnings per share. I buy it.
So I think what you're saying is going to be the big question around generative AI in the next
couple of months, maybe years, which is: are we at the beginning of the buildout? And can this
extend beyond the companies that you just named, the OpenAIs, the Anthropics, et cetera?
Or is this kind of where the AI revolution, so to speak, nets out, where it just kind of ends
with a couple of chatbots and maybe some better enterprise efficiencies?
Yeah, no, I'm not ready to make a bear case on this.
And certainly, progress is occurring quite rapidly on a number of fronts, including on the
coding side, as you've covered and you know very well.
But, yeah, I think we're still in this phase where we're waiting to see who else can
generate meaningful revenue from products that are powered by generative AI.
So I'm not saying it's not going to happen.
I'm just saying it maybe takes a little bit of time.
And maybe that's why we're hearing so much about robotics, because if they're able to unlock
robotics with like an LLM for the physical world, then the opportunity becomes even more
massive. And it's funny because I was on CNBC right as GTC was getting started. And they're like,
what are you expecting? And I said, well, listen, I'm expecting them to make a big case for, you know,
humanoid robotics, and that that's going to lead to artificial general
intelligence. And there was a little bit of a mention of it at the beginning and the end from Jensen,
but it actually came after the keynote in sort of the side stages that we got more
announcements from Nvidia on robotics.
A lot of people, like Yann LeCun,
who was on the show on Wednesday,
make the case that you basically need to understand the world as it is,
and you're never going to get to human level intelligence just with text.
And it was interesting, as I read your recap of what was happening at GTC,
that you made robotics a big part of the first sentence
of your story in The Information.
NVIDIA on Tuesday unveiled a slew of new software for robotics,
including a simulator it made with Disney and Google's help
during its annual conference for software and hardware developers.
The announcements aimed to catalyze the development of robots
with software powered by NVIDIA's artificial intelligence chips.
So I'm curious if you think that's the way to read it,
that this is what they see as the next big growth area,
and how real you think the robotics push is for, I would say,
Nvidia, but also the broader tech industry.
Yeah, it's definitely real.
And I think what really catalyzed it was Elon Musk and Optimus, to be honest.
He really is a force of gravity, if you will.
And I think that definitely caught the attention of OpenAI, which restarted
its own robotics efforts as a result, and many others.
So there was a reason to take it seriously.
But it's such a difficult thing to pull off, and it also involves all kinds of components and motors and all sorts of things that you have to get right.
And I don't want to say that we're sort of like where self-driving cars were 10 years ago.
But I think 10 years ago, a lot of people were like, oh, self-driving cars are right around the corner. And actually, because it involves the physical world, because it is physical
AI, because people's safety matters, and we have had, you know, at least one or two deaths
associated with self-driving cars, which is really not that bad. It just took a very long time
to get to where we are, where, you know, Waymo is outside my office every day. So the robotics space is very
exciting, but it'll probably take a lot longer than you think. And whatever
demos you're seeing from Figure AI, just take those with a huge grain of salt.
One other thing that we just wrote in summary on the robotics announcements from GTC
is that these were mostly incremental announcements.
And yeah, the little Star Wars robot that they trotted out.
I forget what it's called, BDX or something.
Very cute.
That one was probably remote controlled, it seemed.
But yeah, I just, I wouldn't hold your breath.
I think there's a lot of good reasons to be excited.
There's a lot of reasons why investors are putting money into this,
but it's probably a 10-year kind of bet.
Definitely.
And you see, in Jensen's opening, in one of his opening slides,
he has this like exponential curve.
And it starts from perception AI, moves to generative AI, then to agentic AI,
and then physical AI.
And Yann commented on Jensen's line at GTC, "The future of AI is physical AI":
"I couldn't agree more, obviously."
So I think this is just going to be a narrative that we're going to hear a lot more about, especially as long as that question we asked at the beginning, what stage of the buildout are we at, remains unclear. We're going to see more of this. It might be a look-over-here type of thing. But yeah, what do you think about it, Amir? Well, just to round out the physical AI part. So I remember at, I think it was CES in, must have been 2017 or 2018,
Jensen was basically saying, we're making self-driving cars with Mercedes and we're making
this and that.
I think he even had like a prototype self-driving car based on Nvidia's own software.
You know, that was definitely way too far ahead of where things actually were.
And that's part of his job.
I'm not saying that that's a reason to doubt what he's saying, but I think we have some
past precedent for this when it comes to physical AI.
On the non-physical AI side, on the digital AI front: look, OpenAI and Meta and Google are definitely very serious about massive
buildouts, xAI too. So you have at least four, five, six companies with a lot of resources
that are still completely and totally committed to the idea that bigger is better.
The funny thing about that is that, you know, similar to self-driving cars, it could be that the
rate of improvement does not continue in some exponential way. And it could be more
logarithmic or, you know, it could just be, you know, you put more into it and you get, you know,
a little more efficient over time. But I don't know that we're, I don't know that it's obvious
that it's going to be some exponential curve. I think a lot of these folks, especially the
dyed-in-the-wool, you know, AGI kind of researchers, definitely envision a time of, like,
self-improving AI, right? And again, they are a hundred percent religiously committed
to the idea that they're very close on that. I just, I don't know that we have a ton of
evidence for that yet. But everything that's happening is great. And I'm a huge fan of all
these investments. So, you know, yeah, go for it.
keep at it. Okay, before we move on, just very quickly about the chip roadmap itself. We don't have to spend a lot of time on this, but we should at least note it. This is, again, from your story: the company gave some glimpses of its upcoming AI chips, Blackwell Ultra, due out later this year, Vera Rubin chips coming out next year, and the next generation is going to be named after another scientist, Richard Feynman. So any, like, high-level thoughts about what these, you know, generational improvements of chips enable companies to
do? Is it just, like, a slightly more powerful chip to train your AI models on? Like,
how should we think about them? At the moment, yeah, that's pretty much it. I think as we saw with
Blackwell, there are inevitably going to be delays as NVIDIA continues to really try to
accelerate their rate of releases. It's just really hard to pull off, especially when it's like
a different architecture or like a totally different die configuration, which they're going
to be trying to do. So it's very complex to pull off with all their partners, and with TSMC
in particular. So there's definitely a lot of, like, nervousness and hand-wringing about how long
it's going to take to perfect these next versions. And then for the companies that ordered a ton
of Blackwells, you know, the Microsofts of the world and Googles and so on, they're, you know,
pretty stressed just like hearing about these new chips because they haven't even figured out
the Blackwell chips and how they're going to make those work. And that's still like a TBD.
We have not had the rollout of Blackwell. And that's going to be a very massive undertaking
on the part of, you know, the data center companies or the cloud providers that are going to
implement them. Yeah, it was one of those things. When I heard about this, the Ruben chips that are
set to come out next year, I was like, wait, so many people haven't even gotten the Blackwell chips.
Like, what are they actually going to do? Are they going to just double up on
orders or skip Blackwell? Hardware is tough. It's not like software. There can be delays, there
can be messes, and you have to sort of adjust from there. It's not clean. I'm glad you brought
that up, because adjustments happen all the time, and these customers do put their orders off and say,
actually, don't give me the B200s, I'll wait for the GB300s or whatever. It's taking you too long
to do this, we'll just wait for the next one. So there is a lot of that negotiation happening. And
What's interesting about next year is that it's going to be a very, very important year in chips
because that is the year when TSMC is going to launch its kind of next-generation chipmaking
process.
You've got all kinds of companies that have been making huge investments and planning for that
moment because they want to launch their own AI chip.
OpenAI is definitely on that list.
And there are others.
We know ByteDance has been trying
pretty hard. We've got Meta. I don't know how big or serious their effort is, but it definitely
exists. You know, you've got efforts from Amazon, certainly, that are very much
worth paying attention to, which we wrote about this week. And, you know, they're, like,
practically willing to lose money to get customers to use these things. These are the Trainium
chips that kind of directly compete with Nvidia and are offered through Amazon Web Services.
And so, yeah, you just have a lot of things converging on next year.
And, yeah, that's going to be pretty exciting to see.
Right.
And so let's just take one of those and kind of talk a little bit about it.
You mentioned that Amazon is willing to effectively lose money on its Trainium chips.
This is from your story, "Amazon Undercuts Nvidia With Aggressive AI Chip Discounts."
One long-time AWS cloud customer said the company recently pitched them on renting servers,
powered by Trainium, which is Amazon's chip,
that would give them the same computing power
as NVIDIA's H100 chips at 25% of the price.
Does that type of stuff, I mean,
we know that all these companies are building their own chips,
does that type of stuff shake up the market?
I know you said we might have to wait
till next year to figure that out,
but what do you anticipate the impact of that being?
Yeah, it definitely could have some impact.
I think it's a little too early to tell,
but we just know how serious they are about it because
of Nvidia. You know, nobody likes to deal with a monopoly,
or a company that has dominant share,
because they then ask their customers to buy other things,
whether it's networking equipment or other parts
that those customers may or may not want to buy.
And that was a huge source of tension last year
between NVIDIA and Microsoft in particular, and some others that we reported on.
And it's actually part of the government investigation into NVIDIA that is underway.
So I think, you know, everyone respects NVIDIA and Jensen, unbelievable job kind of seeing into the future and building for it and hats off.
But they want to do what they can to be less reliant on them.
And actually the one company that also was very thoughtful in getting ahead of things is Google,
which we haven't talked about.
And their tensor processing units are something that, you know, it's fair to say, is a bit of a
competitive advantage for them, because they have been less reliant on Nvidia.
And this is how they develop their AI, although they don't have an infinite amount of TPUs.
And so one of the reasons they had to merge their two AI groups, DeepMind and
Brain, in 2023 is because they just didn't have enough TPUs to build large language models.
So it's not, it doesn't solve all their problems. They are a major Nvidia customer as well,
but the TPU chips are a force to be reckoned with. I know that they have their own plans
to try to get customers, their cloud customers, I should say, to use these TPUs so that it's not
just Google using them. I don't think that there are a lot of customers clamoring for that
other than maybe Apple, which has a lot of former Google engineers,
but they're going to give that a shot too.
So, yeah, these are going to be really, really interesting efforts
because these alternative AI chips are supposed to end up being pretty good.
Now, Dylan Patel might tell you that Nvidia is just so far ahead of everyone else
and nobody is going to catch them.
And it's an open question, and Nvidia has all these advantages, of course.
but I just wouldn't discount the seriousness of some of these other companies.
And then it raises the question of exactly how much computing power are you going to need.
And if OpenAI or Google or Amazon, if they just make a lot of less powerful chips,
does that make up for the power shortfall?
If they are good, efficient chips, is that going to be enough?
So that's another question worth thinking about and asking Dylan about.
Okay, that's fascinating.
Let's keep poking at the NVIDIA thesis because it's fun to do it.
So there's this company, CoreWeave, which is pretty interesting.
If I have it right, what it does is it stocks up on GPUs and rents them out.
And NVIDIA has a large investment in it.
It's planning to IPO.
So you have this story that I think Cory Weinberg wrote this week in The Information about
the projections for CoreWeave. The early projections, as it shared with private investors
last fall, were that its revenue would quadruple to $8 billion and its cash burn would shrink
to $4 billion. Instead, what's happening is revenue is going to be $4.6 billion, and cash burn
is going to rise from $6 billion last year to $15 billion this year.
So instead of quadrupling revenue and shrinking
losses, it is going to grow its revenue, but its losses are going to be incredible.
And I'm curious, some people have taken this as a sign that, like, basically it's all over
for generative AI. So can you help us sort through the fact and the fiction of what's going on
with CoreWeave? Well, at a high level, what's going on with CoreWeave is, when you present
financial projections as a private company, there's a lot more leeway than when you present
financial projections as a public company.
So that's really the main point there.
When it comes to CoreWeave itself, it is a success story.
There's no doubt about that.
And they were kind of right place, right time and very quickly.
They were Bitcoin mining enablers.
And so they were big Nvidia customers to begin with.
And then Nvidia saw an opportunity, very smartly, to again diversify its customer base.
And it knows, and doesn't like, dealing with these cloud providers, because they're trying to
develop their own chips that compete with Nvidia's. So they're like, let's help CoreWeave
and a bunch of other smaller data center and cloud providers,
and give them allocation of these really in-demand Hopper chips so that they can get revenue.
And what ended up happening is, again, because most of Nvidia's chip customers
are huge companies, these huge users like OpenAI and Microsoft
and a few others, CoreWeave's business ended up being mainly renting out these chips to Microsoft
and a few others.
It's good work if you can get it.
Yeah, instead of Microsoft getting these chips directly, they had to go to CoreWeave
to get them. So I'm not being cynical about it, but, you know, CoreWeave is basically
a pawn in Nvidia's master plan. That being said, it really sounds like there is a place for them
in the market because as a smaller company, as a startup, that's really focused on AI computing,
they're just able to move a lot faster than, you know, than some of these big cloud providers
and in setting up data centers.
And so speed is really their forte.
And, you know, just like physical AI,
anything hardware-related, as you pointed out,
is really, really hard.
You know, atoms are much harder than bits.
And so a lot of the big cloud providers, even Microsoft,
they really struggle.
They really struggle to move quickly in the data center world.
CoreWeave doesn't have the same issues.
They don't have a huge, broad base of customers
that they have to cater to; they only have a few customers they have to cater to. They can move a lot
faster for all sorts of reasons. So there is a place in the market for CoreWeave. I think it's
just a matter of what is the proper valuation and can they raise money to continue their
buildout over time? I think those are really the questions. It's more of a question of like,
how much do you value this asset? As long as they continue to be strategic to NVIDIA, they'll be
okay. All right. Amir, on that point, they still have less revenue than expected, right? They were expecting, what were they expecting, eight billion? They're only getting $4.6 billion. So is that a sign that the demand for these services is lessening? I don't think so. No, I don't think so. I think these are lumpy things. And, you know, they've had this issue before, in 2023, when they were trying to expand very quickly and they couldn't.
So they were mainly kind of real estate constrained more than anything else.
So I wouldn't read too much into that.
One thing that, oh, go ahead.
I was just going to say sometimes it's a supply issue.
Sometimes it's a demand issue.
And it seems like it's a supply issue here, not a demand issue.
Yeah, in this case, definitely a supply issue.
One interesting thing that, I don't know that they've disclosed this,
but I remember in 2023 it really stood out to us, and it ended up being true.
They, you know, they were telling investors a couple of years ago that the Nvidia chips
that they were getting, these Hopper chips, they would be able to monetize those for six years.
That sounded like a really long time to us, especially because, you know, you'd be like,
oh, there are going to be two more generations of chips after that.
But it's interesting: that prediction, even though it was made only a couple of years ago, seems to be
right, because even the Ampere chips, which are the prior generation before Hopper, are still in
use today. OpenAI still uses them. So I think, you know, if that continues
to hold, then it's not as if they are buying these fast-depreciating assets. These are assets
that can hold their value for some time, anyway.
So that's a little bit of good news for them.
Okay, Amir, so you mentioned OpenAI, and you are the co-executive editor of The Information,
but you also do a lot of reporting.
And I think that among reporters, there are few who know OpenAI better than you.
So I think that, like, as we go through this conversation about where Gen AI is heading,
what the state of NVIDIA is, we really need to talk about where OpenAI is going.
So I'm just going to ask an open-ended question for you.
I mean, what is the state of OpenAI today?
Very curious to hear what you think.
The state of OpenAI today, how to start.
There are a few different vectors you could take.
I guess the easiest one to talk about is ChatGPT.
And that is a runaway hit.
I mean, you know, it is a strong brand.
It's growing like a weed.
Dare I say it's like Google 20 years ago,
and that is worth a lot of money
and I don't want to say that it completely replaces Google
but it definitely could replace some of Google
but it also just enables a whole slew of queries
and commercial opportunities.
I mean, my wife is actually the chatbot evangelist in my household,
and it has nothing to do with me.
She's using ChatGPT in more creative ways than I could ever think of.
This is not necessarily a creative example, but it's an important one.
She asked it to do a bunch of research on skin care products because she, I guess,
wasn't happy with hers, wanted a new stock, and it came up with a bunch of different answers
based on her parameters, and she is loving it.
She bought everything it recommended, and she's loving it, and she's so happy.
She's like, can you tell I'm glowing and all that?
And I'm like, yes, yes, of course you are.
So that just gives you an example of the kinds of things that it can do,
even if it's not eating into commercial stuff right now.
So I don't know.
I don't know how you discount that.
Like, that is an amazing kind of rocket ship.
And I'm not seeing Google, you know, catching up to that.
Certainly not their standalone Gemini chatbot.
So the question is, can they get these same types of queries within the main search product?
I don't know.
they're trying a bunch of stuff.
It's hard to do because they don't want to destroy their cash cow.
But that's an incredible thing.
Meta is trying to shoehorn its chatbot into all these social apps that you have on your phone.
I don't know exactly if that makes total sense or it only makes sense in some cases.
But it's definitely not leading to the kinds of queries that I think you're seeing on ChatGPT,
a lot of which are actually kind of coding-related, right?
It's like the original, you know, really robust kind of coding assistant that is a big reason
why it's such a financial juggernaut in addition to being like a consumer juggernaut
through its ability to charge subscriptions to a lot of professionals.
So that's one side of OpenAI.
And there's a lot of reasons for them to be excited.
Then there is the, like, talent side, which is still pretty good despite a lot of their talent losses.
I would say, though, that, you know, talent changes, kind of personnel moves, and the impact they have sometimes take a little bit of time to play out.
And so Mira Murati, who was the CTO and abruptly left last fall, that's the one that's been the most concerning to people at OpenAI and to Sam Altman, because she's been able to land some, you know, some good folks from the research team.
She recruited just like everybody from the live streams, like the top echelon.
They're just, like, all now working at Thinking Machines.
Yeah, yeah.
And, you know, one of the ways you know it's becoming a problem for OpenAI is when they send one of their biggest investors in to give a seminar to the employees, telling them
all the reasons they shouldn't go to a startup and how rare it is to have a great financial
outcome that would be better than what they would get by staying at OpenAI.
Yeah, so can you talk about that? You had that story. It's very interesting. Josh Kushner went in
and was like, if you join a startup and you get 1% and it IPOs or it sells for a billion,
don't think you're getting $10 million. It's, like, hilarious. I've never heard of a meeting
like this. Yeah. And what's interesting about it is,
it may be generally true what Kushner was telling people, but these people are so rare and so unique, and, you know, it's like some of these folks are worth like $10 million right now in terms of the job offers that they would get.
And it's not unlike the self-driving car, you know, phenomenon from 10 years ago, very similar.
There are just so few people who had any expertise in developing the core kind of underlying software that they could get these kinds of offers and valuations and so on if they left these big companies to start their own thing or join a startup.
So we do have this interesting dynamic.
Mira is definitely trying to make it worth people's while to join her by giving them equity that is very, very cheap and could be very valuable, at least on paper, very soon. So she's not an idiot. She knows, you know, money talks,
and what people need to be able to come out of OpenAI, given that it is a rocket ship.
So we don't need to go too far on this because there is still some great talent density at OpenAI. And I don't think it's easy to say that they're completely decimated or
anything like that. But these are things. Before we move on, can I ask you one Mira question?
What is she building? Because is she just going to try to build the same thing as OpenAI? I could not
understand for the life of me what she was trying to do when I read the launch post. So maybe you
have some thought on this. Well, there are some things that are just harder to talk about if we're
trying to report on them or break news on them. But no, I don't think it's going to be the same.
And I think in some cases, they're just trying to bring really smart people together to come up
with the plan once they have the smart people, right? It's like, it's a roadmap. Well,
they call it a lab, right? It's a lab. So the purpose of a lab is to do experiments and then
figure out what to do. Now, there are plenty of commercial opportunities if you want to like
be a consultancy and say, hey, we just came out of OpenAI. We know what to do. You know,
you can go to aerospace company or you could go to, you know, any business under the sun and
offer your services and they'll probably take it. Or you can go to some of OpenAI's customers and try to say, like, hey, we will do bespoke stuff for you as you help us learn
what products are going to be great in the market, right?
That's ruthless.
I mean, we'll see.
You know, we'll see.
You mentioned DeepSeek earlier.
And, you know, I think one thing that those models clearly enable is just a lot more experimentation
and a lot more, you know, of kind of building on top of these things to kind of get some
approximation of the state-of-the-art OpenAI or Anthropic models without having to buy those
OpenAI or Anthropic models.
So, yeah, they definitely have some options, but I don't know.
Okay.
I want to ask you one more OpenAI question before we go to Apple and Apple Intelligence.
There was a fascinating Information story that you guys put together about the apps at the
other end of the agentic AI that OpenAI is building, and how they feel about the fact
that they're effectively going to be disintermediated by AI.
And I think this is like the first few examples.
we've seen of companies telling the AI companies, hey, wait a second, we don't just want
to be, like, an endpoint effectively in your master chatbot system. And you have some details
of a meeting between DoorDash and OpenAI last fall, where I love the question that DoorDash
asked OpenAI. They said, what happens if you take Operator to the logical limit? And what happens
if only AI bots rather than people visit the DoorDash site.
One thing that they talk about is how they have ads on their site
and now instead of people seeing the ads, it would be robots.
But to me, I think the greater concern that we're going to see from companies like DoorDash
and others is like as this AI gets more advanced and can surf the web,
like, why do you need the DoorDash, for instance? You know, maybe your bot can now
just log on to a restaurant web page and order a delivery.
I mean, of course you still need the Dasher, I suppose.
But okay, that's a robot now. What does DoorDash exist for? And so, like, there's been this kind of
discussion around this stuff about, do we live in a world without apps as AI gets better? And I think
in this story you're starting to see that the apps are starting to think about that as well. Now maybe
that's extreme, the, like, if-you-take-it-to-its-logical-limit case, but in the meantime there could be some
serious consequences for companies that end up in this, you know, AI ecosystem, maybe the same way that
app companies found themselves sort of, you know, on a leash in the Apple App Store.
But this might be even worse because they control everything. So what do you think,
Amir? Yeah, we're still really, really early in this. You mentioned Operator,
this, like, web-browser-using AI that's out there, and OpenAI is trying to get
other developers now to make similar products. We don't know where that's going.
I still think the opportunity with that kind of technology
is more in the enterprise app world
when you are trying to like use different enterprise apps
at the same time and maybe you have this AI
that can help you work with all these things together,
transfer data from one place to another and so on.
But the consumer kind of uses of these web-browsing agents
are kind of the things that the companies that have made these
agents have talked about, so OpenAI and Anthropic and Google to some extent. And yeah, I think
that retailers in particular are going, huh, well, I guess we sort of need to see where this is
going and to get ahead of it and to see if we need to block these things because they'll eat
our lunch, no pun intended in DoorDash's case. So, yeah, I think, you know, OpenAI, like every
technology company or any company, wants to be the, you know, front end of the world.
It wants to be, you know, the conduit through which everything happens.
That's sort of what these companies end up desiring at the end of the day.
And already, as I mentioned with the ChatGPT example and my wife, OpenAI already has a product
that's starting to, you know, impact commerce.
So all the retailers and publishers, website publishers and other folks,
they already have to sort of contend with that and think about that because, you know,
just like when Google was coming up, people are going to need and want this referral traffic
and they're going to want to, you know, if not game the system,
at least make sure that they're getting as much traffic and as many customers as possible
once consumers, like, change their habits and start experiencing the web in a new way.
Same thing happened with social, obviously, and these things kind of ebb and flow over time.
But that's sort of the starting point here.
These, like, web-using agents, I'm still not totally seeing it when it comes to consumer uses,
but it's quite early, and I'll definitely take it seriously.
But I think, yeah, I think that, you know, you've already seen Reddit
block some of these things from traversing its site. So it's not a stretch to think that
others will. And from the perspective of OpenAI and the companies that are creating these agents
that basically can, like, take actions on your behalf or do things that are a little bit more
complex than just, like, reading and regurgitating something. They can actually do a transaction.
They think that this is the future. They think that, you know, you're just going to tell your
AI, I want to, you know, I want to buy this kind of meal, or I'm hungry for this, and just go do it for me and have it show up at my door, right? Or, you know, I want this kind of information, like, tell me what happened in the world, and, you know, set it and let it go and it'll do it. So they sort of see it as part of this vision of, like, so much more being automated. I think if you talk to, like, the Walmarts and Amazons of the world, they'll say, yeah, for some things it makes sense to automate purchases. But, like,
Not for a lot of other things.
Like if you're going to plan a party and if you, like, have taste and you care about the details,
like, you're not going to leave that to an AI, right?
At least not right now.
So these things are early.
These things are buggy.
But there's clearly going to be tension between retailers and AI companies, just like there has already been tension between news publishers or content publishers and AI firms.
And that is going to continue.
There's already a lot of bad blood there, and that will continue.
Yeah, I just think this is going to be a huge story moving forward,
and we're just seeing the beginnings of it.
So it was nice to see that you guys had some coverage inside a meeting where this is being discussed.
We're here with Amir Efrati, the co-executive editor of The Information.
We have to talk about Apple Drama, and we'll do that right after this.
Hey, everyone, let me tell you about The Hustle Daily Show,
a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show and your favorite podcast app, like the one you're using right now.
And we're back here with Amir Efrati, the co-executive editor at The Information, talking all about the world of AI.
And what would a discussion about artificial intelligence be in March 2025
without a discussion of what the heck is going on at Apple?
More drama at Apple.
We've done two straight episodes leading with the Apple story.
I figured this week we would take a break.
So let's conclude with what's going on at Apple because it does seem like the urgency is high.
And we haven't seen firings yet, but we are seeing a reshuffling.
This is from Bloomberg.
Apple shuffles AI executive ranks in bid to turn around Siri.
Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development.
So he's moving over another top executive to help: Vision Pro creator Mike Rockwell.
Rockwell is going to report to software chief Craig Federighi, removing Siri completely from Giannandrea's command.
I think there was an M.G. Siegler tweet about this where he just kept on repeating, resist the urge to make a joke.
about the Vision Pro head taking over Apple Intelligence because we all know the Vision
Pro hasn't done well, but at least they shipped that product in the way that they said they
would, whereas Apple Intelligence has been a disaster. I don't personally think that this leadership
change is going to make a difference because to me, the inability to build something good with
Apple Intelligence in Siri, like I've been saying on the show for the past couple weeks, is a culture
thing. And by putting Mike Rockwell in charge, you're not going to change the culture
of Apple, which is something that really needs to change, something that needs to be less
secretive and more collaborative, like I wrote about in Big Technology this week, to be able to build
AI. But maybe, maybe this will like light a fire under their butts. It's hard to really think
that this is going to make a big difference. But I'm curious what you think, Amir. Yeah, you made an
excellent point. And I think a couple of years ago, as this was starting, Apple was trying to be a little
bit more transparent in its research arm in terms of talking about the large language model
related research they were doing. But yeah, I don't think they've carried that too much
further. And to your point, you know, it is it is not easy to attract AI talent to every place.
And I think it's been very strange to see their struggles here because they have had a long time
to do this. Now, it does start with the person who's in charge. And one thing that is interesting
about John Giannandrea is that he really wasn't, like, a big fan or a big believer in large
language models. And I think it took him a long time to kind of get fully on board. And, you know,
they were sort of running into this situation where at WWDC last year, they're actually
promoting ChatGPT, right, and OpenAI as being a leader here in this kind of foundational
technology. Now, I don't know that they've actually pushed that much traffic to OpenAI
and, you know, kind of provided their users with a lot of ChatGPT action. And I think there's
just this inherent problem with privacy and data or like, I think for a long time, their whole
mission was to put Siri all on device or as much on device as possible, right, and really
have it not need to talk to any servers. And so then it's like, okay, are we entering a totally
different era where you actually do need server access because that's where the AI chips are
that need to process whatever I need to process. So it's honestly, it's been really strange to see
this. It's been really strange to see them have struggles with some of the features that
they've been trying to launch and having to delay a lot of these things. But I think it's
a combination of leadership being very slow on LLMs and being slow on ramping up investment
in talent. And then, yeah, some of the inherent organizational and kind of
structural constraints that you pointed out. Plus, you know, how incongruent these LLMs
or LLM services tend to be with the idea of, like, hyper-privacy and, you know, data
localization and things like that.
So, yeah, it's a combination of factors.
But very strange.
The whole thing has been really odd.
It's weird.
I'll tell you one of the most interesting contradictions about the AI moment right now.
You need to be big to play.
We know that, right?
Like the companies that are at the lead are the ones that have the most money.
But the advances have been made by the smaller, more nimble companies.
OpenAI was an upstart and built ChatGPT. DeepSeek, a hedge fund with a bunch of GPUs, built
DeepSeek-R1.
You just can't have big company attitude if you're doing this stuff because you don't want
to cut people off from new information.
You need to be experimental and need to be nimble and be ready to try the cutting edge
and not have to go through processes.
And it just seems like that's plaguing Apple in a big way.
And Google's had that but has gotten over it, but Apple has not gotten out of it.
And it's going to be hard for them to do.
Can we just point out that, you know, when Siri launched, hats
off to them, because, yes, it had its limitations, but it was a cultural moment, definitely
in the West and in the U.S., and it was an earthquake inside of Google, for sure. It really freaked
everyone out there because Apple had done something that just captured people's imagination.
And I think it's just incredible. It's been like 10, 12, 13 years since the launch of Siri,
and the improvements to Siri have just been so lackluster.
It was interesting, though, that in all of this, you know,
it definitely took some time for accountability to set in.
And I think one big signal that there was a shakeup coming was Kim Vorrath,
who's their, like, software taskmaster, you know, going into the
Apple Intelligence kind of section of the company and the Siri folks,
and, you know, she was really brought in to figure out what to do on the product side and help people get on track.
And then, you know, Mark Gurman at Bloomberg, who's done a great job reporting on all this,
then reported on a kind of mea culpa by Robbie Walker, who joined quite a long time ago to work on Siri and sort of took the fall for some of the problems.
This was sort of an internal mea culpa.
And then I have to say, Gurman's report yesterday on this change with Rockwell and Giannandrea, we had heard about this earlier this
week and were working on reporting on this as well. But the fact, you know, to be able to print
in a publication a sentence like "Tim Cook has lost confidence in John Giannandrea," that is not
something you write lightly, and you have to imagine, and I'll keep this vague, but you
have to imagine that there is a reason why Gurman was able to write those words just in the
way that he wrote them. But yeah, it's really, really, you know, quite a shift over there,
but it's definitely logical given all the problems they have and how far behind they are,
but they still have an opportunity. Nobody has really like just knocked it out of the park
on voice and voice AI. It's definitely happening. It's definitely
coming. I know Elon obviously is trying with the Grok app. And I forget what he calls the voice
assistant. I think it has a different name than Grok. But, you know, Google definitely
has options here with their Android platform. So Apple can still do amazing things that are very,
very useful if they kind of make Siri innovative again. So I wouldn't, like, totally count them out
in every way. But man, this has been such a weird,
crazy thing. Not something you expect from Apple. Okay, so we have a Discord chat for our paid Big
Technology subscribers. It's real fun. And the Discord's been buzzing all week about someone,
I think it was MacRumors, calling this the Windows Vista moment for Apple. Do you think that's fair?
You're asking me to tell you whether Apple is going to be languishing for a decade.
That is difficult to do. Yeah, I don't want to stick my neck out on that. I really
don't. But I don't have the Apple Intelligence features, and I'm very happy with my iPhone,
really one of the greatest purchases I've ever made. And so it's hard to feel too
down on them. And, you know, one thing
that I've actually been thinking more about, you know, lately is the upcoming kind of Google
antitrust penalty trial and what that could mean for Apple. I think that's really the big deal.
If, for some reason, the government is going to be able to prevent Google from paying Apple
$20 billion a year for search referral traffic through Safari, which preloads the Google search
service, then it's worse than the Windows Vista situation. It's like, you know, they get cut
down in size. So I don't know whether or not the market has priced in that possibility or
exactly what the government is going to be able to do and what, yeah, what Tim Cook has up his
sleeve to kind of deal with that. But that's a very pressing short-term concern for them to lose
those profits.
Oh, yeah.
All right, Amir.
Thank you for coming on.
Can you share where people can find your and your team's reporting and how they can subscribe to The Information?
Sure.
It's just theinformation.com.
I'm on X at Amir.
And amir@theinformation.com is my email if you ever want to reach out.
But yeah, we, you know, we focus on stories to tell you things,
hopefully that you didn't know before.
That's the bar we try to meet every time.
We try to get inside all the companies that people care about.
And, yeah, we take AI quite seriously.
We have a newsletter that is insanely popular and insanely full of great reporting called
AI Agenda, run by Steph Palazzolo.
So we'd definitely encourage people to check that out, too.
And I will just second all of this and say thank you for writing so many great stories
that we're able to cite here on the Friday show and for coming on the show yourself.
So thanks for coming on, Amir.
Thank you very, very much.
I'm so glad we could do it.
Me too.
All right, everybody. Thank you for listening. We'll be back on Wednesday with another flagship interview.
And on Friday, Ron John Roy will be back to break down the week's news. We'll see you next time on Big Technology Podcast.