Big Technology Podcast - New ROI Questions For AI, Microsoft’s Empire Plans, Jassy’s Amazon Comeback
Episode Date: July 12, 2024
Tom Dotan from the Wall Street Journal joins us for our weekly discussion of the latest tech news. We cover 1) What slow growing GenAI consumer usage says about the field 2) OpenAI's five levels of AI sophistication 3) Why enterprise rollouts of AI technology are moving slow 4) Is AI a startup game or scaled-player-only discipline 5) How small models fit in 6) Bing's failed run after Google 7) Sequoia's $600 billion question on AI 8) Can we use leftover GPUs to break out of the simulation 9) Microsoft's plan to build an AI empire 10) Amazon's comeback under Jassy after a rough start --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Tough questions about the AI business are starting to be aired, loudly.
Microsoft's Satya Nadella is seeking to build an AI empire beyond OpenAI,
and Andy Jassy's Amazon is thriving after a bumpy start.
All that and more is coming up right after this.
Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional cool-headed and nuanced format.
We have a great guest for you today.
Helping me anchor the show is Tom Dotan.
He's a reporter for the Wall Street Journal covering Microsoft and
all things AI. Tom, great to see you. Welcome back to the show. Thanks. It's a, it's a big
honor. I'm feeling cool-headed. All right. Well, let's see if we can maintain it through
the course of this episode. One of the things that has my head a little bit in a place where I'm
scratching it is this disconnect between the actual fruits of the AI moment and the promise. And I
spoke about it with Matt Wood from AWS on Wednesday, and it just so happens that many posts
are starting to show up with these similar themes. So Benedict Evans, the analyst, also a friend
of the show, has just put out this post called AI Summer, making the point pretty well that
like we've been promised a lot, but is it adding up? So he said about ChatGPT: For consumers, ChatGPT is just a website or an app, and it could ride on all the infrastructure built over the last 25 years, so a huge number of people went off to try it last year.
The problem is that most haven't been back. If you ask what they used it for, it turns out
that most people played with it once or twice, only to go back every couple of weeks.
And here's his bottom line. If this is the amazing magical thing that will change everything,
why do most people say, in effect, very clever, but not for me, and wander off with a shrug?
and why hasn't there been much growth in the active users as opposed to the vaguely curious in the last nine to 12 months?
I think it's a really good point.
I think he really nails sort of the big question about this moment in the AI quote-unquote boom.
And I'm curious what you think.
Does he say in this post what the latest active user numbers are?
I have not kept up with it.
And I know that last year they were talking about 100 million and they kept changing 100 million what.
Like, it was 100 million weekly active users; initially it was daily active users. I haven't seen a new figure. Does Benedict have a new one there? He doesn't have a new one in the post. The most recent number for ChatGPT from OpenAI is still 100 million users. Yeah, so people say, they're whispering, at least what I've heard in sort of the backrooms, is it's sort of up to 200 million now, or maybe a little bit more. But we haven't seen that sort of exponential growth that you might expect if it is, as Benedict is saying, this, like, unbelievable invention that's about to, like, mirror human reasoning.
If it's 200 million, they should have put it out by now, because this 100 million figure
has been it for a while.
And I remember last year they had a developer day conference where I think a lot of people were expecting an updated number. A few days after that, Sam Altman got fired, by the way.
Maybe it's because he didn't have that 200 million number.
If only it were that clear, if only that's what it were about. I would have saved my Thanksgiving if it were that simple.
I apologize to you, man.
That was a rough one, right.
Real low point, personally, for me, let alone Sam.
I don't care.
But for me, it sucked.
Yeah, there hasn't been a new number in a while.
And I think that, to a degree, is a red flag because if this is supposed to be, and this
was initially touted as the fastest growing consumer app of all time and the fastest to get
to 100 million, which means something.
But if the gap between 100 million and 200 million is starting to get longer and longer and we're
not seeing.
the same sort of growth we saw at first, I think it's reasonable to temper expectations and say
you can't just map it alongside every other consumer app like Facebook or Uber or something like
that where you saw this exponential growth that continued past 100 million users and say like,
oh, it's going to be the next that. Like if it really has reached a ceiling, it doesn't mean
it's a disappointment, but it's certainly not the thing that people initially thought it was going
to be. Right. And maybe it's just because the models need to get a little bit better. I don't
know if you saw this, but Bloomberg had this report that OpenAI had a bunch of different levels
on the stages of artificial intelligence as we make our way towards superintelligence. So I shared
this and a lot of people got mad at me because they were like, oh, you're just sharing OpenAI propaganda. First of all, I'm just putting it out there, not, like, you know, saying that this is to be believed or, you know, sort of standing by what OpenAI is saying. But I think we
should talk about the different levels. So for them, they say level one is chatbots, AI with
conversational language. Level two is reasoners, human level problem solving. Three is agents,
systems that can take action. Level four is innovators, AI that can aid in invention. And level
five is organizations, AI that can do the work of an organization. And they say that we are
past level one, approaching level two, at least that's what they say internally at OpenAI. Is this a matter of, like, we're not using these chatbots as much because they're just not that useful, to the point where maybe when they get to level two as reasoners or level three as agents, those active user numbers will go through the roof?
Yeah, I guess it's all a matter of utility, right? The other things that they're trying to achieve here are more difficult but also more useful to people. So I actually kind of question their strategy in releasing this kind of scale publicly, because if you're not able to actually hit these new levels over the next year or few, because the technology hasn't grown in sophistication, I mean, it's all very arbitrary,
right? So like, there's an argument to be made that, like, they can claim they've hit level
three, level four, whatever that means, because they've decided they've hit these levels.
But if the technology that underpins all this stuff isn't going to get more powerful and
sophisticated over the next couple of years, then you've basically just told the world,
yeah, we don't really have it.
Like, there is promise with this technology,
but we're not going to be able to deliver on it
because we can't, you know,
the scaffolding that needs to exist
in order for this to get there
is not achievable yet.
So I actually, I mean,
I imagine they put it out
because they think they will,
but if they don't,
they kind of dug a hole for themselves.
Doesn't it remind you a little bit
of like the five stages of self-driving cars?
Self-driving cars.
Yeah, grief, maybe that too.
But like, but really, like,
we've been waiting to get sort of like level five autonomy and just kind of stuck on level two
forever. What did that mean again? What did level five autonomy mean? I know what you're talking about,
but I can never remember what that meant. I'm just going to give you directionally what it is.
Like level one is driving a car with a steering wheel and level five is like fully autonomous
vehicles. Right. Well, what's interesting with that is I remember covering tech back when people
were talking about that. That was like 2015, 16. And Uber or, well, Uber was very big into it and Google.
and self-driving cars seemed like it was kind of a, kind of a mistake.
It's just kind of a dud.
They never really were able to get it on the streets.
And, you know, now in San Francisco, the Waymos are everywhere.
And I got to say, it's pretty great.
They're amazing.
Yeah, it's a very good experience.
I'm embarrassed to say how much I like them.
Oh, yeah, I'm with you on that.
Yeah.
I mean, it's, I think it's the coolest tech I've tried in recent in the past few years.
Yeah.
Yeah.
And so, I mean, there's probably still many levels to go for it to be what the promise was, you know, a decade ago or so.
But like, they've really achieved it.
It's at least more than I thought they were going to.
But yeah, there is definitely a parallel between the two.
And, you know, I guess if you want to give open AI or AI in general the benefit of the doubt, like self-driving, it got there or it got there more than a lot of skeptics thought it was going to.
And so, you know, maybe just give them more time is the smartest way to think about it.
I don't know.
Right. And then the question is, okay, is this just a patience game? Right? Because the first group of entities that are going to start deploying this stuff is going to be the consultants, really, or the B2B partners of these companies that are trying to put it into play in an enterprise situation. And I spoke with a bunch of Amazon folks this week and Benedict highlights this in his post that it really is sort of a moment where the rubber is meeting the road for AI and it's going to take longer.
than a lot of people thought.
So here's just a couple of stats
that I'm pulling out from his post
about the consultants.
So he said,
Bain tried to split pilots,
experiments, and trials.
And he said,
everyone had a bunch of tests
as far as AI goes,
but far fewer people
are trusting something
in their business to this yet.
Accenture, he said, proudly announced that it had already done $300 million of generative AI work for clients, and that it had done 300 projects.
And he says,
even an LLM can divide
300 by 300. That's a lot of pilots, not deployment. And I just published this in Big Technology today, citing from a Gartner poll of organizations interested in artificial intelligence. And the Gartner poll had more than a thousand organizations respond. Only 21 of the companies surveyed by Gartner said they had gen AI in production. That's this year. And the rest were either piloting or exploring the technology. And this is what Benedict says about
this, what happens when the utopian dreams of AI maximalism meet the messy reality of consumer
behavior and enterprise IT budgets? It takes longer than you think and it's complicated.
Yeah. Yeah. I think that's spot on. And it's something I've been thinking about a lot
reporting on the space because I cover enterprise software companies for the journal. And that's
ground zero, right, for adoption of this stuff. Like these are the guys with the big budgets.
you know, Microsoft is pushing their co-pilot, their agents, as hard as they can to all their
customers. And, you know, I don't have data yet on how well that's doing on its own, but I can
tell you on the startup level, there have been a lot of companies that were building enterprise software, AI enterprise software, that are in a tough spot right now because they're
just not, there's not a lot of business and not a lot of revenue yet. People aren't buying it
yet. And I feel confident enough saying that there was a miscalculation by a lot of
of investors in AI that there was going to be quicker adoption of this. I really think they saw the rise of ChatGPT and the fact that it reached this 100 million user level within a record amount of time and just assumed that everything was going to follow in those footsteps. And it was going to be, you know, not only were they going to follow in those footsteps, but the revenue, that spigot, was going to turn on crazily fast. And I think we've
seen there's so much hesitancy and slowness and just frankly lack of capability.
in the technology that has stopped a lot of big enterprises
from throwing huge $100 million-type budget buys
at this stuff.
And the repercussions of that slowness are playing out right now.
It's that simple.
Like, there are companies that had made projections on startups,
I mean, like projections on what revenue would look like
based on uptake of the software, and it didn't happen.
And like, in a sense, I think it was arrogance and hubris
on the part of investors.
to just assume that it was going to happen because, I mean, this is always the issue with Silicon Valley, right?
It's like, oh, the early adopters here.
So they just assume everything's going to be like that.
And this tech takes a long time.
Now, if you really believe this is the revolution that, you know, you're claiming it is,
it's just going to take longer than, than you initially thought.
And you need to be able to, as an investor or, like, a Silicon Valley denizen or just a general, like, advocate of technology, like, swallow that pill and be honest about it, that, like, this is slower than was expected. Right. And I definitely want to get
into the numbers because that's another post that came out of Sequoia this week. But before we do,
I think one of the last questions that Benedict poses, which is quite interesting, is sort of what
this technology is. Is it a product? Is it a technology? So here's what he says. Stepping back,
the very speed with which ChatGPT went from a science project to 100 million users might have been
a trap, which is basically what you're saying, a little like natural language processing was for
Alexa. Large language models look like they work and they look generalized and they look like a
product. A science project version of them delivers a chatbot, and a chatbot looks like a product. You type something
in and you get magic back. But the magic might not be useful in that form and it might be wrong.
It looks like a product, but it isn't. And then he basically says there's two options, the two different paths that we might be on. He says you could also suggest that these startups are a collective Silicon Valley bet that LLMs are a technology, not a product, and that we need to go through
the conventional process of customer discovery towards product market fit. Or the other thing is,
and this is, he says, it's the thing that really drives a bubble, is the idea that history is
over and LLMs will be able to do everything, and that in that case, we wouldn't need any of
these companies. So I'm curious what you think. I mean, it seems really like we are going to be in option one, which is that this is a process. It's going to take a long time to find product-market fit. And that's why people like Matt Wood at Amazon Web Services this week are telling me the name of the game is incremental and patience, as opposed to, like, what people within the, you know, sort of sphere of influence back in 2022 were talking about: sea-change revolution. Even, like, Google, for instance, is not... I mean, Google's fine. Bing with AI hasn't just taken it over, right? No. It hasn't really made a dent.
So I'm curious what you think.
I think it's always funny to have big tech executives like Matt Wood say the name of the game is incremental and slow, because they can afford to be incremental and slow, because they have giant profitable businesses to allow them the time and space to see something play out over a number of years, like a decade.
Whereas a startup can't do that, right?
Like you can't go into Andreessen Horowitz, Benedict's former employer or Sequoia or any of these guys and be like, we got a business plan.
and that business plan is incremental and slow.
Right.
Like that's not the way the tech industry works.
And so I think a lot of the hype around this technology was driven by the financial needs
of investors to see fairly fast returns and uptake on this technology.
And like we've all sort of been a victim to that in a certain sense.
But can I make a counterpoint here?
Because hasn't this also like largely been a scale game?
It's been a moment dominated by the Microsofts, which have invested in OpenAI, and the Googles, which have their own, you know, AI research houses, and the Amazons, with their investment in Anthropic, and Nvidia, which can make the chips.
And you throw it, okay, so who's the upstart?
Is it Meta, which has, like, you know, sort of come from nowhere to make its own open-source model?
Like, it does seem more than most moments in technology that this has been a moment that favors scale and the big guys.
Oh, yeah.
No, it's been a huge benefit to the entrenched tech giants. They have gotten... I mean, Microsoft has gained more than a trillion in market cap.
Nvidia is now one of the largest companies in the world.
You know, Oracle, which is a company that you didn't think much about in the cloud game,
has, like, benefited tremendously all of a sudden because of this.
So the spoils have, like, absolutely gone to the incumbents,
which is, again, kind of funny in, you know, this industry that's supposedly driven by disruption
and, you know, a changing of the guard every couple of years because new technology comes
out. You know, we also should be fair that, like, OpenAI is real. They may not be profitable, probably, but they have real revenue. They're in the billions because they do run, like, the most popular, you know, chatbot, ChatGPT and GPT-4, and Anthropic is doing pretty well.
Like, there is real money that is going into a lot of these companies. Like, I don't want to
overlook that. But I think the sort of ecosystem-wide disruption that, that, that, you know,
the startup and venture capital community sort of expects when there's a technological shift
has been pretty slow and tricky. And I think in the next couple of months, we're going to see,
we've already seen some flameouts of these startups. Like Inflection, which had this bizarre acqui-hire type situation over at Microsoft. But it's straight up a situation of a company
that didn't work out. Adept, which was an enterprise software AI.
agent company, they sold to Amazon in a very similar way. And I think we're going to see more
of that in the next couple of months. So as like a kind of startup ecosystem, it's not, it's not
the healthiest. So there's something else here, which sort of gets to this second half of what
Benedict was suggesting. And I'd love to hear your thoughts on it. This idea that like LLMs will
inevitably scoop up whatever is built on top of them. We've talked a lot about, like, how with ChatGPT, is everything just a ChatGPT wrapper, right? And so, like, the other side of this, you know, are we just waiting for product-market fit, is: do startups inevitably end up just being eaten by the big models as they get bigger? And is that another advantage toward scale?
Well, that's, I guess, something I've been writing about recently. I mean, this idea of scale, and that's, like, just size of model. You know, these giant models, like GPT-4, or what Anthropic has built, or Gemini, which is Google's large language model.
These are very powerful, large technologies.
They have to run in, like, you know, massive cloud infrastructures.
But they also do a lot more than what individual businesses really need.
Like, if you're trying to build, like, a chatbot to give financial advice or something,
you don't need GPT-4 to at the same time give financial advice and also give you, like, a recipe for flan or something.
Okay, but, and I brought this up with Wood, like, Bloomberg spent all this money to train BloombergGPT, and I think they used Amazon technology to do it. And then GPT-4 ended up giving just as good financial answers as the custom Bloomberg bot once it came out.
That's the, this is sort of the balance here.
Well, yeah, I mean, I think, I mean, to go, like, into the specifics of these small models and large models, no one should reasonably say a small model is better.
That's not true.
What a small model can do is just be attuned specifically to the needs of its user.
And so if you have, like, the small model, plus it's trained on specific data sets, it'll be effective and, like, cheaper to run.
It's really just like an efficiency play.
But you could also train a large model on, or tune a large model on, specific data and it'll be just as good.
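For readers who want to see what "a small model tuned on specific data" looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The base model (distilgpt2) and the training file (finance_corpus.txt) are illustrative assumptions, not anything Bloomberg, Microsoft, or the hosts actually used.

```python
# A minimal sketch, assuming a hypothetical setup: fine-tune a small open model
# (distilgpt2, chosen only for illustration) on a domain-specific text file so it
# handles a narrow task cheaply, instead of calling a giant general-purpose model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "distilgpt2"  # small base model (assumption, not what any company in the episode used)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical domain corpus: plain-text financial Q&A, one example per line.
dataset = load_dataset("text", data_files={"train": "finance_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="small-finance-model",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned small model now covers the narrow domain at a fraction of the cost
```

The same efficiency trade-off the guests describe applies here: the tuned small model only handles its narrow domain, while a frontier model would handle it plus everything else, at a much higher inference cost.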
I don't know this Bloomberg example, the Bloomberg GPT.
It's like a chat bot where you can talk with Bloomberg data, basically.
Financial data. That's fun. Yeah. Um, yeah, I think, like, the large model is in one sense a very cool demo technology, and it can do lots of stuff, and, you know, all the things that Benedict Evans points out in terms of its capabilities and shortcomings, it's all true in there. The one thing it also is, is incredibly expensive to run, and kind of a loss leader for these companies. Like, just the cost to build is in the hundreds of millions of dollars. You know, for some of these companies, each inference, each time that you put a request into the API, loses money for the company, because it costs more in, you know, cloud computing costs than they charge customers to be able to do it. And so there's a real desire, I think, on a lot of businesses' part to find a way to do this more cheaply. And so that's why they kind of have
been focusing on these smaller and dumber models that are good enough to pull off the task,
but are also efficient and not going to lose you a bunch of money. So is the idea that they can
sort of outflank the bigger models by doing the same things cheaper? Or is there also, is that
what it is? Yeah, I mean, like the analogy that I used in the story that we published a couple weeks ago
was like, you don't need to drive a tank to go pick up groceries at the store. And there's
In some counties in the U.S., that's sort of the way that people do it.
Maybe it's recommended.
I don't know.
I guess like that Humvee era, people were doing that pretty actively in the suburbs.
You definitely didn't need to do that, though.
But whatever, you don't need a Lamborghini to get the mail.
I mean, you can think of a million different analogies.
But the point is like the cost of running these things when what you need, the output from it is so hyper-specific and doesn't need a huge model,
is leading people toward using these smaller models.
And it's a funny moment to me because it also runs in the face of like AGI and this desire for these companies to build, you know, a replication of human level intelligence.
Because if you think about it as like a researcher and you got into generative AI, a lot of people were like, like, let's go, dude.
Let's go like replicate human level intelligence.
Let's build the biggest possible model we can.
We'll run it on supercomputers, the likes of which have never been seen before.
and we're going to, like, we're going to do it. We're really going to make, like, a robot brain. We're going to make Her. And, uh, that's fine. They haven't done it yet, clearly. But the business imperatives are also saying, yeah, but why don't you build a not-human-level intelligence? Why don't you build a small, relatively dumb thing that can, you know, give Bloomberg financial advice? And that's just not that inspiring to those people, I imagine. Like, they kind of...
they didn't get into this game because they were going to build like capable, competent chatbots.
They really thought they were building something, you know, transformative here.
Yeah.
And it's always the economics.
That's what points the direction of the technology.
Like as much as people would like to say, it's not about business or whatever.
Business is what drives the development of this tech.
I think so.
I mean, if you can build AGI, it would probably be good for business.
But in the short term, you're right.
Like, it's a lot of these change-management, incremental, get-it-in-the-workflow type of things that's not exactly as inspiring to somebody as, you know, building a human
brain might be. Right. And it's also probably harder to make those arguments to enterprises,
to like IT directors, to CIOs and all those people to make, you know, huge investments. What
we're talking about is incremental productivity. Right, incremental capabilities. But do you think there's
going to come a point where this technology is going to get good enough where like you won't have to
like, for instance, spend all that time, uh, standardizing the data.
where it just sort of is good enough
that it can sort of work
much more easily than it does today?
Yeah, I do, honestly.
I don't, I don't, I'm not stupid enough
to give a timeline for it.
Right. But I, like,
and I guess that would be my slight push back
to Benedict Evans. I know he was just raising the question,
not really taking aside, but
ChatGPT is pretty good at a lot of stuff.
Like, I forget this from time to time
because I don't use it all that much,
But when you ask it to do certain types of things, very specific at times, like, as I was writing a story, I wanted, like, a quick summarization of what EC2 is, this, like, AWS technology that is an elastic computing thing, just like extremely inside baseball, almost esoteric thing.
You know, I Googled it and was not finding a great, like, one or two sentence summarization of what this product was.
When I put it in ChatGPT, it gave me a pretty good thing. And I fact-checked it because I'm, you know, I'm not that stupid. Like, I think we all
should do that. But when it does things like that, you do sort of see a bit of the magic that got
people excited. Coding, of course, is maybe a whole other category where it's shown
capabilities. So, like, I get kind of annoyed at times with the maximalists on the other side who
just say, like, this is a dead end. This didn't prove, this didn't do anything. This was a huge
boondoggle waste of money on the part of Microsoft and everyone else going into large language
models. I don't think that's true either. I think there's something really there. But the timeline
like we've been saying is maybe going to be a lot longer than we initially thought. Yeah. And on the
Bing front, I mean, you cover Microsoft. We brought up the Bing challenge to Google. What do you think
happened there that it hasn't been able to be successful? I mean, it didn't work. It did not change
the way people search online.
I should say it's obviously caused effects.
You know, it completely messed up Google, like the Bing Chatbot.
Like the chaos that the initial release of Bing Chat caused within Google was tremendous
and all the reorgs and internal pressure and code reds and stuff that they've had to do
because Microsoft has, you know, jumped ahead in this world has been real.
But the market share did not shift.
I'm sorry, you can show me any number of different third-party studies and claim that, like, these things are noisy, but bottom line,
this was not like a huge reordering of the guard in terms of search.
And I think that was a surprise to Satya.
I really think, you know, I was there in January, February of 2023 when they rolled out Bing
Chat or Bing with Chat and Sam was there.
And I think this was really positioned as like an iPhone level moment where we would all look
back at this and think, oh my God, how did we ever live before Bing Chat existed? And it just didn't happen.
And I think it's totally reasonable to call out Microsoft and Satya,
who I know is like, and I've written about him,
like, you know, considered one of the most effective CEOs
of the current era and, you know,
gets a lot of credit for turning Microsoft around and stuff.
But like this should absolutely, for the time being,
be categorized as not the win that he thought it was going to be.
Totally.
Yeah, and like we'll see what happens.
There's this new team there that's supposed to be, you know, rebuilding it all and re-skinning it and seeing if it's going to finally get some consumer traction. But it's just, we should be honest about
that and say that this didn't work out. Right. Can I actually ask you a quick question
on your usage of this stuff? I mean, you know, you put together this run of show thing beforehand
so listeners can get a little behind the scenes, where you kind of listed out the different topics
you wanted to cover. I mean, that's easily the kind of thing that you could have put into ChatGPT, right? Or at least any of these large language models, and
spit out like, hey, summarize Benedict Evans' post and talk about the other things we're going to do.
Like, you could have gotten this thing to spit out a summarization of this outline of the episode.
Didn't look like you did. I mean, do you ever consider using this stuff? Like, is this something that
could change your workflow at all? No. For my workflow, like, I need to be reading the stuff.
Like, I need to be in the documents and picking out the points.
I mean, I'll do things like upload lots of interviews to Claude and then ask it, like, you know, what am I missing. Or, you know, this week I took a bunch of interviews, uploaded them to Claude.
And then I uploaded my story and I said, what did my story leave out?
And like, you know, it's much more effective after the fact versus proactive because like I just think that the human touch is still more important.
And honestly, I think I can do a much better job.
than the AIs of like picking out what we should be discussing versus what we shouldn't.
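As an aside, the after-the-fact workflow described above, uploading interview transcripts and a draft to Claude and asking what the draft left out, maps roughly onto a call like the sketch below. This assumes the Anthropic Python SDK; the file names, model choice, and prompt wording are hypothetical, not what the host actually typed.

```python
# A rough sketch of the "what did my story leave out?" workflow. Only the Anthropic
# messages API call reflects the real SDK; the file names and prompt are assumptions.
import anthropic
from pathlib import Path

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

interviews = "\n\n---\n\n".join(
    Path(p).read_text() for p in ["interview_1.txt", "interview_2.txt"]
)
draft = Path("draft_story.txt").read_text()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # one example model id available at the time of this episode
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Here are my interview transcripts:\n\n" + interviews +
            "\n\nHere is my draft story:\n\n" + draft +
            "\n\nWhat did my story leave out? List anything notable from the interviews "
            "that the draft doesn't cover."
        ),
    }],
)
print(response.content[0].text)  # Claude's list of points the draft may have missed
```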
So you don't mind because it would obviously be faster. You don't mind spending the extra
20, 30 minutes putting the outline together. Oh, I spend longer than that. And I also think that, like, that's what makes... without that work ahead of time, like, the show wouldn't be as good as it is. Or the show would be worse, let me put it that way. Yeah. Well, that's...
That's interesting, because that's sometimes the argument from these guys: like, well, it's not as good as a human, but it's only, you know, it's 98% or it's 90%. And if we're willing to, like, deal with that, then you actually get increased efficiency and only a slight degradation in quality, which I think is kind of cynical. If you think about the march of human progress... Like, with podcasts, it is zero-sum. Like, if you have a choice of listening to a show that's, like, 100% of what it is or 90% of what it is, and there's another show that comes in and it's 95%.
Like what you've just given up
in efficiency, you've lost your entire
audience because people will go to the show
that's 95% as good versus
90. So it's basically
like the job is just to make it
the best quality show possible
and sort of
you know. And that's why like
the branding for this tech like Microsoft has
always been like it's a co-pilot. Like it's
something to sit alongside you. It's not
autonomous. Which is again
ironic because the different
levels of what OpenAI is going for is not just a co-pilot. It's a pilot. They really want
this stuff to be able to handle all tasks on their own. And so I mean, the technology doesn't allow
for that yet, but we'll see like at this current moment, it's a co-pilot that people aren't
all that excited about at a grand scale. And there's an acknowledgement that there's also like
a quality gap there. Yeah, definitely. It's a hard sell. It is a hard sell. And so it sort of is a
great lead into our last segment of this half, which is talking about this post that David
Cahn from Sequoia came out with AI's $600 billion question. And he basically said effectively
that AI is going to have to generate $500 billion more in return to make up for the massive
amount of investment that's going into it. And he cites some risks here. The lack of pricing power: basically, like, if this is all going to be commoditized, you won't be able to make a lot of money off of the models. That your investment incinerates, maybe because of training or because people will lose money in speculation. Depreciation, where the models get worse over time. And then the winners and losers: like, if you're not a winner, you're in rough shape. And here's
what he says. He says, speculative frenzies are a part of technology, and so they are not something to be afraid of. Those who remain level-headed through this moment have a chance
to build extremely important companies.
But we need to make sure not to believe in the delusion
that has now spread from Silicon Valley
to the rest of the country and indeed the world.
The delusion says that we're all going to get rich quick
because AGI is coming tomorrow.
And so we need to stockpile the only valuable resource,
which is GPUs.
In reality, the road ahead is going to be a long one.
It will have ups and downs,
but almost certainly it will be worthwhile.
I think that's sort of like the most, like,
cool-headed take on this entire moment that I've read so far.
Yeah, well, that's the theme of the podcast. Indeed. I guess that is also the optimist take on it, and it does still rely on an assumption, or a null hypothesis, that the technology will improve at some sort of a rate. And, you know, that goes against what the real critics of it, like Gary Marcus, will say, which is, like, deep learning, large language models, are going to hit a ceiling, and you can't just throw more data and more scale, more GPUs, at this to, you know, reach the level of breakthroughs that need to occur for it to be as valuable as that extra... what was the number? Like 500 billion? 500 billion revenue. Yeah.
That's meaningful, though. It's a lot of money. Yeah. That's a lot. I can't think of that many
$500 billion businesses that just sort of were created in any period of time. Yeah. So that's, like...
That should be intimidating and daunting.
Indeed.
Yeah, I mean, if you think about the level of investment that's gone on, you're going to need massive returns.
And I think like right now, like people are starting to say, okay, where's the revenue?
And that question, I think, so we're just seeing the beginning of it this week.
I mean, it's been building, but we're really starting to see the beginning of it now.
And I do think maybe not the end of this year, but next year that question is just going to get louder and louder and louder.
Yeah.
You know, it's been interesting for me to watch, given what I cover, which is cloud computing,
is the buildup of cloud computing and data centers around the country, around the world,
in order to meet the demands for, you know, compute, for AI compute.
We're sort of, I wouldn't say quietly, but not enough people, I think, pay attention to the fact
that we're in the midst of one of the largest infrastructure buildouts in history right now.
If you think about just dollars spent to build these kinds of things, right?
I mean, the AI revolution turned NVIDIA into a $3 trillion plus company because they're selling GPUs for the most part to large cloud computing companies, which are then putting these things into data centers all over the country.
And this is all in advance of consumer demand, right?
Like, what are the signals and data points that these guys are using in order to invest this heavily into this stuff?
There's not a lot.
And what I think about a lot at times is, what happens a year or two from now if, I'm not even saying the whole thing flames out, but it levels off, and the infrastructure was built in excess of that? What do we do with these things? Like, what happens to all of these data centers that are full of GPUs that are basically lying dormant now because there's not as much demand from consumers and businesses to use this stuff?
Well, here's my question to you. I mean,
Do you think that the tech industry is going to find a good use for them, even if, let's say, LLMs level off? Like, a lot of these GPUs started out as gaming chips, and then they were able to, like, you know, kind of pivot into crypto and then pivot into AI.
Yeah.
Like, does the GPU eventually find a very, you know, impactful use in the tech world?
I mean, they're also being used to train the recommendation algorithms for things like Reels. So I would bet that there's going to be a use that, you know, even if LLMs sort of level off,
that will be productive. But like the question is, is it going to be as valuable of a use?
That's another question. I don't know. Yeah. Yeah. And these things cost a lot of money and it's a
depreciating asset. But the crypto thing is interesting because, and this happened a lot with
Ethereum mining. I wrote a story about this with my colleague Berber Jin at the Wall Street Journal last year. But Ethereum mining required GPUs. And there was a change in the nature of
Ethereum mining that I don't really want to explain right now. But basically, it obviated the need
for GPUs. And so there were these mining rigs that were sitting dormant all over the world
because they didn't need them anymore to create new Ethereum coins. And was that a good investment then, that they could sell those? Yeah, well, they've tried to, like, retrofit these machines to be
able to do AI inferencing, and, you know, I don't think they're really capable of doing that. I'm sure some people that listen to the show will think differently, but my sense is that they can't. But I don't know, with, you know, these top-of-the-line Nvidia GPUs, if there's immediately going to be another use case for them that's valuable. It actually, to me, and I've not thought through this enough, this is a good sci-fi premise, and I don't know what it is. But, like, America over-invested by tens of billions of dollars in GPUs, and there is now an excess of these hyper-capable chips sitting in warehouses all over the country that have tapped into the nation's power grid, which is also being, like, actively retrofitted to be able to power these things. And yet they're not being used right now. Could there not be some sort of entity or life form or, like, self-perpetuating technology that somehow takes advantage of this dormant technology for its own uses and devices? Is that not a decent premise? I haven't thought through it yet.
I mean, or, yeah, I don't know, there are so many interesting things you can do. I mean, I really geek out over this. Like, well, can you take this technology and then use it... like, what if you hooked all those up to a Neuralink, right? And, yes, like, combined it with a human brain, or gave a human brain access to it? Or, I mean, I have a conversation, I teased this on Twitter, but I'll talk about it now, that's coming up in a couple of weeks with Nick Bostrom, where we talk at the end of the conversation about basically, like, can we use more advanced
AI to, if we're living in a simulation, sort of rip apart the wall in the simulation and go out
and meet the people that are running it. Oh, because we've developed this technology that's so
powerful. Yeah. That like we can, it's like a counter. We can counteract the existing
simulation. So we've, like, quietly been building, like, we've quietly been building the escape route, the Matrix team. Exactly. To be able to exist inside the Nebuchadnezzar ship.
Now, I'm not saying, I believe in this, but I'm saying it's fun to think about.
Yeah.
Yeah.
I think it's all possible, and we should consider it seriously.
And I frankly think in the following presidential debates that we have, both Biden and
Trump, or whoever the Democratic nominee is, should be asked these questions.
I agree.
Yeah.
I agree.
That is truly what America needs right now is a consideration from the top level of government
about simulation theory.
Why don't we do some more conversation about the state of Microsoft
and then the state of Amazon and its comeback.
So we'll do that after the break.
Back right after this.
Hey, everyone.
Let me tell you about The Hustle Daily Show,
a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read the Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines in 15 minutes or less
and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast with Tom Dotan. He's a reporter at the Wall Street Journal covering Microsoft, Salesforce, AI, business technology.
Great to have you here, Tom.
You just wrote recently about Satya Nadella's, now that we're off the topic of the simulation,
although I'd imagine we might end up getting back there before the end of the show.
Hopefully.
Yeah, but you wrote about Satya's attempt to build an AI empire
that doesn't necessarily rely entirely on open AI.
It's a very interesting subplot that's sort of opening up in this broader Microsoft story
is, like, people used to think it was just Microsoft and OpenAI, but clearly Microsoft is trying to hedge with other outside investments and its own models internally that could sort of do what OpenAI does.
You want to take us inside that story?
Yeah, basically, I mean, if we go far enough back, Microsoft makes its bet in 2015 on OpenAI,
which was this upstart AI company that hadn't really built anything of consequence yet,
but Microsoft uses them as a kind of banner customer for Azure so they can build up their own
supercomputers. Eventually, Open AI builds these very effective large language models that Microsoft
ends up investing even more money in, and then it ends up being the foundation to a whole new
set of technology that Microsoft puts out there under these co-pilots and chat bots and things
like that. And both these companies, their success rides in tandem, Microsoft gets this huge
benefit of the AI revolution and their stock price goes up tremendously. Open AI keeps raising more money
and all that stuff, but the codependence that these companies have on each other is problematic,
mostly because they are led by headstrong people that have their own desires to be successful
in their own right and not be dependent on the other.
And the clearest case of that was the Sam Altman board debacle, Tom's Thanksgiving
ruining event of 2023.
And it ends up kind of being a very,
brutal wake-up call to Satya that they are entirely reliant, technologically speaking,
on a company that they do not control. And in the years that they had been investing more in
open AI, they basically neglected their internal AI efforts. And a lot of people at Microsoft
left the company in the last few years because they weren't getting any resources. All the resources
are being spent on OpenAI. And, you know, one argument on what's happened to Microsoft in the last
seven months or so, is that after the Sam Altman debacle, Satya realized that he needed to
build more stuff internally. They couldn't be reliant on just one company. And so earlier this
year, they had this very interesting acquisition, which I kind of mentioned earlier, of this
company called Inflection, which is a, I would argue, failed AI startup that's led by
Mustafa Suleiman, who was... One of the DeepMind co-founders.
So do you think that bringing Inflection in and Mustafa in was a direct reaction
to the Open AI board situation?
Yes, I do.
The board situation, maybe not directly,
but I do think the desire to hire this team,
and that's what they got,
and immediately one of the things that they start getting to work on
is their own large language model
that is hoping to be at the same capability level as OpenAI's, so that they could one day have it as a backup plan,
as a plan B, as a hedge,
should they, for whatever reason, need to do that.
I think they, even though they emerged successfully
from that whole Sam Altman fight,
the risk of exposure was made very evident for them.
And one of the things that Satya is very good about,
and I wrote about it in my piece,
is shifting strategy when it needs to be shifted
and not being too reliant on one idea, one baked-in belief of how the company should be run.
And even though he had so successfully ridden the OpenAI relationship
to the place that the company is at right now,
that should be rethought when circumstances change.
And circumstances, I don't even say they really changed.
I think they were just made more evident.
Right.
Yeah.
And so that's been an interesting thing to watch play out.
Now, I don't want to get too dramatic about it
and say, like, the OpenAI-Microsoft relationship
is on the verge of collapse.
they're not going to work with each other anymore,
but Microsoft is in a slightly better position now
than they would have been had they not done any of these things.
How is Mustafa doing within the company?
I mean, he comes in and he takes over effectively consumer AI
within Microsoft.
What's his performance been like?
I say it's been okay.
I think there's a lot of people who are rankled
by the foreign bodies that have come into Microsoft.
And they, as we wrote in the story, kind of exist semi-autonomously from the rest of Microsoft. They don't even use Teams. They use Slack to communicate.
And there was a lot of internal...
That's a heresy.
Yeah.
I have no dog in that fight.
I don't really like either Teams or Slack.
So it's hard for me to get wrapped up in that fight.
But, you know, it mattered to people there.
I think there was a lot of, you know, political infighting that happened, which
isn't surprising in large companies. I will say there are a lot of people who are Mustafa
skeptics out there. There are people who think that he has gotten a lot of credit for the
work that other people at DeepMind did. I'm not taking aside here. I want to be very clear
about that. I'm just speaking, you know, or I'm verbalizing the points of view from people that I've
talked to. But I think there is a lot that people are hoping to see from that team.
to justify, not the amount of money that they spent,
but to really justify this as like a legitimate effort within Microsoft
and that they really do, they really are on the verge of building something that is
competitive with OpenAI, to be the hedge that Satya, I think, wants it to be.
Yeah.
And then on the outside, Satya is also investing in other companies.
He put $1.5 billion into an Abu Dhabi-based AI startup. He also invested some millions in Mistral. What's that all about? And is that going to change the relationship with OpenAI? The Mistral thing was interesting. That is this open-source French AI startup, and it was a very small investment. I would argue that one was probably
more about just having more options on Azure for developers. So when developers want to build AI
tools and they go to Azure, they want more than just OpenAI. And investing in Mistral, having
them be listed there, that was, I think that was more the play there. The Abu Dhabi-based
company, that's called G42. They're fascinating. I think Microsoft may have bitten off a little bit more
than they could chew with that one. What's the story there? It's a good question. I mean,
basically, they got themselves ensnared in a kind of cross-border geopolitical fight with them
because there are some concerns that G42 is a little bit too close to China.
And the U.S. government has been kind of all over them in terms of making them throw out, you know,
pieces of hardware that were made by Chinese component makers for fear that it could be spyware
or somehow cause some sort of vulnerabilities.
And, you know, Microsoft's investment stands and they'll defend it and it'll move forward.
But there were hearings going on yesterday and even today as we record this, where a lot of U.S. lawmakers
are kind of investigating Microsoft's involvement in this company that has some, you know, direct
and indirect ties to China. So I'm very interested to see that one play out. But, you know,
from what you and I are talking about, this was also just a broadening of bets. And, you know,
Microsoft putting their money in technology that could one day be competitive with OpenAI's.
Yeah. So the empire-building thing. It's an interesting question. So your headline was: Microsoft's Nadella is building an AI empire. OpenAI was just the first step.
Do you think he's going to be successful in building that empire? I think that goes back to, like, the earlier segment about whether this technology is going to pan out. Yeah. I would say, if it pans out, Microsoft is in a better position than anyone, because it's an enterprise technology, Microsoft is the enterprise company, and, you know, they've got the relationship with the most successful builder of this technology in OpenAI. Although that in and of itself is getting kind of interesting because of Apple and its, like, you know, relationship with OpenAI, which we can talk about if you want.
I thought you were going to say Anthropic, but that will be for our next segment.
Sure, sure.
We can do Anthropic later.
I think if this turns out to be a real thing and a real business, yeah, I think Microsoft
is in a really good spot, and Satya's maneuvering around that has looked very intelligent
and prescient.
But, you know, it's still pretty early and there's not a lot of revenue, and so it would be, I think, kind of foolhardy to call a winner at this point, when, like, there's just not a lot of money there. Yeah, a lot of data left to collect, and either it'll be, like, a wild buildup or a bunch of spectacular collapses. Yeah, yeah, it's hard to see it really being something in the middle. Not at that level, not at that level of investment, I would think. Right. Yeah.
So, all right, you know, focusing lastly on Amazon and Anthropic, right, their big partners. Matt Wood this week talked about how Amazon completed its $4 billion investment in Anthropic, speaking of money and needing to see it return.
And it's kind of this interesting story with Amazon, which is that basically Jeff Bezos leaves Amazon mid-pandemic after spending, I don't know if you'd say recklessly, but wildly,
trying to build up infrastructure mid-pandemic.
He leaves in 2021, and effectively goes to Andy Jassy, his longtime top deputy, and says,
all right, you're the CEO of Amazon now.
Congratulations.
Also clean it up.
And if you remember, those first few quarters that Jassy took over, maybe even the first year plus,
it was not steady.
It was bumpy.
And the company's infrastructure spending, built for a pandemic-level era of online shopping, didn't seem like it was about to be justified when people went back to shopping in person. And then there was just a hangover. And they had to cut a lot of costs. I mean,
and geez, they cut a lot of jobs. I mean, they slashed almost 27,000 jobs in rolling layoffs
that I believe are still going on. They had divisions like One Medical being forced to cut
dramatically. They were going to lose a hundred million. They asked them to cut their losses by
$100 million, which is massive. Prime video, they've gone from like looking for blockbusters to
just looking for profitability. And then Amazon all of a sudden turns it around. Operating
income in the last quarter was $15.3 billion, which is the largest quarterly profit in the
company's history. This is all coming from a business insider article about the turnaround.
And next thing you know, they hit all-time highs.
Amazon reaches $2 trillion for the first time.
And it looks like everything is hunky-dory inside Jassy's Amazon.
But now I'm going to, I'll add the but, and then I'm going to turn it over to you, because there's always a but.
And in this case, there's a serious one, which is that, and we'll have more on this in big technology coming soon.
So hopefully, well, anyway, I'll just share it.
But like, the mentality might shift because that mentality of,
We're not going to make a lot of profit within Amazon.
We're going to fund these moonshots.
We are going to look for those blockbusters in Prime Video and look for the moonshots
is starting to dissipate as this cost-cutting market-friendly Amazon takes its place.
And there is this part of the story that talks about Jassy's downsides.
And it says Jassy's new approach may have some downsides.
Amazon employees used to leave a meeting with Bezos feeling inspired and more ambitious
about their projects. Some of the people said, this is according to people talking to
Business Insider, in meetings with Jassy, there's a much greater emphasis on mundane topics
such as bottom line. Of course, Amazon disputes this, but it seems undeniable that there is
that growth, right, which is a lever I think Bezos always could have pressed, and then
there's the risk, which is why he didn't press it. So I'm curious what you think hearing about
hearing all this stuff. You know what's funny hearing you describe all this stuff and like the, you know,
on background gripes from Amazon employees.
It reminds me a lot of, for a brief period,
while I was actually at Business Insider, I covered Uber. And this was in the post-Travis Kalanick era Uber, run by Dara Khosrowshahi, who's still the CEO.
And I wrote a bunch of stories about this,
which I stand behind.
But, you know, it wasn't hard to find people
from a different era, the Travis era,
complaining about the Dara era saying,
it's not inspiring.
We're not reaching for the stars anymore.
We're not talking about,
self-driving cars or, you know, owning the entire taxi industry or whatever crazy ideas that were being discussed while the company was on their way up. And instead, it's all about incremental changes and bottom-line stuff. And, you know, people just don't want to work there anymore. They'd rather work for, at the time it was, like, a crypto company, you know: oh, I'd rather work at OpenSea than, you know, at Uber. That's where the real future is, you know, NFTs. And, you know... hold on. Let's see. What is Uber's market cap today?
$151 billion. Yeah. Okay, they were at, like, $60 billion not that long ago when I was covering them, and that was when all these gripes were coming out. And, like, Dara has proved himself to be the absolute right CEO doing the right strategy at the time. Obviously there was a lot of bumpiness there, but, like, yeah, he had to do cost cutting, he had to, like, reduce the ambition that was irrational, some would argue, to make the company a real big-boy
company. And it's funny to hear this playing out at Amazon, which is obviously at a massively
different scale than Uber, but these are the kinds of things that people will complain about
at a company, you know, from one day to the next, one CEO to the next is like, oh, we used to leave
these meetings inspired about changing the world. And now we're just talking about the bottom
line. It's like, well, you know what? You also run a business. You also have stock. You have shareholders.
You're also past the initial growth of e-commerce being a novel concept. And now it's
just about, you know, making sure that you hit your quarterly numbers and have defendable new initiatives.
And I just, I don't want to give all the credit to Andy Jassy, mostly because I just don't know.
I don't cover Amazon.
But like, sometimes those adults in the room type CEOs that are not the most inspiring ones are the ones that actually know how to run the business the best.
And maybe Tim Cook, to a degree, is a version of this at Apple, you know, after Steve Jobs. Yeah, they haven't really released revolutionary products since Steve Jobs died, but
like they know how to make that shit work. You know, they know how to make the assembly lines
and supply chains like operate in tandem. And that's meaningful. And so it sounds to me like
the same thing is happening at Amazon. There was also, like you were mentioning, just a right-sizing that needed to happen after the pandemic, that there was this overbuild-out of fulfillment centers. And, you know, there was a real dip in how much people were buying stuff online.
And so that was like a painful episode that Jassy, I guess, had to bear the brunt of.
But I don't know, Amazon's still a monopoly.
They still basically own all of online, you know, most of online e-commerce or, you know what I mean?
Online commerce.
And they still have the largest cloud computing business out there by a wide margin.
So it's going to look good once, like, you know, the fundamentals of the American
economy return, for sure. And I'm just looking at Uber stock. So July 2022, it was at 21 dollars
a share. Today, 72 dollars a share, so more than three times. I mean, of course, that was at the bottom,
but it is sort of crazy where they are now compared to where they were when those questions
were coming up. Yeah, and they're a boring company. I'll straight up say it: anyone from Uber, call me
if you disagree. Um, but like, they're boring. It's the same, yeah, it's the same service it's been for the
last 10 years. It's a ride-hailing service. But Amazon is not boring. That's the thing.
Like Amazon has gotten to the place where it is by like having this day one mentality and always
being willing to like, you know, not not just rely on the bread and butter. Like the reason why
they are that cloud services company they are today is because they've had this willingness to
reinvent. And that's sort of where the question is. Yeah. Sure. And I mean, you could argue that
not having the day one mentality makes you vulnerable to disruption and the innovator's dilemma and
that stuff. And, you know, if AI does turn out to be a massive business, it's possible that
Amazon will not, you know, kind of enjoy the fruits of that as much as Microsoft did, maybe
because of their conservative, less inspiring, whatever. So, yeah, there are risks to that,
for sure, but that's still based on a hypothetical. Right. Amazon still has like their self-driving
car unit, right? Zoox. I'm embarrassed to say I don't know.
Yeah, they do.
I can tell you they do.
So they're still doing stuff like that.
Are they still doing that drone shit?
Are they doing like drone deliveries?
Oh, yeah.
I mean, they're doing, oh, they're doing drone delivery.
They're also doing, I think it's either them or Bezos that's doing a Starlink competitor.
Okay.
Okay.
I mean, that's a great business for SpaceX, for Elon.
So if they're in that, like that's not terrible.
Yeah, I mean, I'm sure there are still pockets of Amazon that still do, you know, some of the more
out-there stuff that gets people inspired, and day one mentality, and stuff like that. But I don't know.
It's hard for me, and I say this as a reporter, like, it's hard for me sometimes to disentangle
the gripes of overcompensated white-collar employees that just don't feel inspired by their jobs
from the fact that they're, like, working for a company now that's firing on the cylinders
that it needs to be firing on in order to work. Yeah, no, look,
I guess, like, what I would say is there is wisdom to these employees, but you have to be able
to pull it out from the discontent.
Right.
That's the thing.
It's tough to parse.
Just point to specific examples for me.
Like, what is the specific thing that someone else capitalized on or invested in at a different
company that Amazon chose not to because they lack this killer instinct?
No, I mean, I don't think Amazon's had a long period of time where they lacked killer instinct,
and I wouldn't even argue that they've lost the killer instinct necessarily right now.
Like I just think that these things, sometimes these changes take place over decades.
And they're correctable also.
Like Microsoft, for instance, you know, their desire to sit out like cloud computing and whatever it was and mobile.
Like they've sort of rebounded from that quite well.
Yeah.
Just took, you know, a lot of money and change in leadership.
Yeah.
And that's doable.
Yeah.
And there are examples throughout business history
of companies that, like, absolutely lost their fire and are, you know, in the dustbin of history.
Totally. Sun Microsystems or, I don't know, IBM. Companies that once were the most powerful
that just, you know, totally got overtaken by upstarts or faster-moving competitors.
So there's always risk there. Yeah, but there's also, I don't know, you tell me how you feel, but
like, I think the entrenchedness of big tech companies now is greater than it's ever been.
Oh, definitely.
And the, like, the, you know, what do they always call it?
Like, regulatory, what is it called, or like regulation kind of empowers the incumbents?
Capture.
Regulatory capture.
Like, that's only increased over, you know, the last decade or two.
And it's just easier for these companies to turn things around than it was, because, like,
there are just fewer competitors.
Yeah.
And VCs won't invest.
Well,
there's also like VCs don't want to invest in companies that are in their lane,
because those companies probably won't be able to be acquired.
Acquisitions by big tech are much more heavily scrutinized.
Yeah.
But I think there's something to be said.
I mean,
I guess it's sort of the theme of your show looking at like big technology.
But like, is disruption as we know it, where like a major company could be completely
destroyed by a shift in technology, is that antiquated? Are we not going to see that happening a lot
in the next decade or two? No, I wouldn't say so. I think that there's always a chance.
I feel like the incumbents today have this advantage because of scale and scale makes a huge
difference, but you really never know where the next thing is going to come from. And very easily
you could be behind the eight ball. Like, it didn't work, right now, in terms of, like, AI search.
And so Google's still sitting pretty, but, like, it could have, it could have just been that that was the user preference.
And then, you know, there goes a good chunk of... It didn't.
No, but I'm saying, you know, you don't know.
Like, and sometimes it will feel super innocuous to you.
Like I'm reading the Steve Jobs book now.
And Jobs, the Walter Isaacson book.
Okay.
And Jobs goes to Xerox and effectively convinces them to give him the graphical user interface.
And they're like, all right, sure, and they hand over sort of like the key differentiator for Apple in its early years.
So you can say, okay, that was just like a nice deal that they made,
but people will come out on top.
And I don't think that anything is guaranteed to last, especially now.
Yeah, but again, you're pulling from an example from the 1970s.
Right.
Just like, what's the last company that's come up through the Silicon Valley ecosystem
that's been truly disruptive and absolutely, you know, just bodied an important incumbent to the point
where it's irrelevant now? You're right, it hasn't happened. And it sort of gets me back to one of the
companies you were talking about earlier, where, like, even Oracle seems to be getting a second life
because it was big enough and was able to invest and sort of leaned on relationships to grow.
But maybe Nvidia is that company. Is that fair?
And that's fair, and you could look at something like an AMD, or an Intel maybe is a better example, of a company
that was once a giant that is now struggling.
Yeah, Intel for sure.
Yeah, okay, that's fair.
And Nvidia, I mean, you think about it like, okay, so the Nvidia case is fascinating
because you're like, well, wouldn't, like, all the big tech companies just be able to build
their own chips? Like, how difficult is it to build an accelerator that does AI
training and inference?
Actually, it's quite hard.
And they haven't been able to.
Well, they are all building them.
They're building them, but they're all just spending so much money with Nvidia still.
Yeah, right.
I mean, that's a fascinating dynamic maybe for a different episode.
Definitely.
Yeah.
But, yeah, I don't know.
I think about this a lot because we're sort of also in the midst of a presidential election.
And, you know, it's not a major.
It's what?
That's happening.
On the peripheries.
Yeah.
Yeah, I'm still trying to figure out like, all right, how do we put that in the show?
I'm sure we.
The presidential election?
Yeah.
I don't want to, you know, sort of let politics dominate the show, but also can't ignore it.
So we'll find some way.
Oh, it's worked for All In.
Yeah, they're doing okay.
Yeah.
But like, I don't know.
There's got to be some people that don't want to listen to All In and hopefully they're here.
I mean, there are more people that don't want to listen to All In than do.
So, there's a large market if that's your differentiator.
I hope so.
Anyway.
Tom, great to see you.
Thanks for coming on.
It's such a good show.
Yeah.
Thanks, Alex.
Awesome having you on.
All right, everybody, Tom Dotan from the Wall Street Journal.
Where can people find your work?
I work at the Wall Street Journal.
So if you go to WSJ.com or use the WSJ app and you read stories about Microsoft or enterprise
software, it's probably written by me.
So that's a good place to start.
Awesome.
Well, I know that I do it.
I'm there daily on the Wall Street Journal app and then on the website.
Your team is just doing great work.
And it's been great reading your stuff.
So thanks for coming on, Tom.
Thanks, man.
All right, everybody.
Thank you so much for listening on Wednesday.
We have a one-on-one interview with myself and Klarna CEO Sebastian Siemiatkowski.
We're talking about whether they actually replaced 700 customer service reps with large language models.
Don't miss it.
It's a real fun conversation.
All right, that'll do it for us here.
We'll see you next time on Big Technology Podcast.