a16z Podcast - AI Eats the World: Benedict Evans on the Next Platform Shift
Episode Date: December 12, 2025

AI is reshaping the tech landscape, but a big question remains: is this just another platform shift, or something closer to electricity or computing in scale and impact? Some industries may be transformed. Others may barely feel it. Tech giants are racing to reorient their strategies, yet most people still struggle to find an everyday use case. That tension tells us something important about where we actually are.

In this episode, technology analyst and former a16z partner Benedict Evans joins General Partner Erik Torenberg to break down what is real, what is hype, and how much history can guide us. They explore bottlenecks in compute, the surprising products that still do not exist, and how companies like Google, Meta, Apple, Amazon, and OpenAI are positioning themselves. Finally, they look ahead at what would need to happen for AI to one day be considered even more transformative than the internet.

Timestamps:
0:00 – Introduction
0:17 – Defining AI and Platform Shifts
1:50 – Patterns in Technology Adoption
6:04 – AI: Hype, Bubbles, and Uncertainty
13:25 – Winners, Losers, and Industry Impact
19:00 – AI Adoption: Use Cases and Bottlenecks
24:00 – Comparisons to Past Tech Waves
32:00 – The Role of Products and Workflows
40:00 – Consumer vs. Enterprise AI
46:00 – Competitive Landscape: Tech Giants & Startups
51:00 – Open Questions & The Future of AI

Resources:
Follow Benedict on LinkedIn: https://www.linkedin.com/in/benedictevans/

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Transcript
ChatGPT has got 8 or 900 million weekly active users.
And if you're the kind of person
who is using this for hours every day,
ask yourself why five times more people
look at it, get it, know what it is,
have an account, know how to use it,
and can't think of anything to do with it
this week or next week.
The term AI is a little bit like the term
technology.
When something's been around for a while,
it's not AI anymore.
Is machine learning still AI?
I don't know.
In actual general usage, AI seems to mean new stuff.
And AGI seems to mean new, scary stuff.
AGI seems to be a little bit like this.
Either it's already here and it's just more software or it's five years away and will always be five years away.
We don't know the physical limits of this technology and so we don't know how much better it can get.
You've got Sam Altman saying, we've got PhD level researchers right now.
And Demis Hassabis says, no, we don't.
Shut up.
Very new, very, very big, very, very exciting, world-changing things tend to lead to bubbles.
So, yeah, if we're not in a bubble now, we will be.
Is AI just another platform shift, or the biggest transformation since electricity?
Benedict Evans, technology analyst and former A16Z partner,
has spent years studying waves like PCs, the Internet, and cell phones
to understand what actually changed and who captured the value.
Now he's turned that same lens on AI,
and the picture is far more complex than benchmarks or hype cycles suggest.
Some industries may be rewritten from the ground up.
Others may barely notice.
Tech giants like Google, Meta, Amazon, and Apple
are racing to reinvent themselves before someone else does.
Yet for all the excitement, most people still struggle to find something they truly need AI for every single day.
That disconnect, Benedict thinks, is an important signal about where we really are in the curve.
In today's episode, we get into where bottlenecks emerge,
why adoption looks the way it does, what kinds of products still haven't shown up,
and how history can actually guide us here.
And finally, what would have to happen over the next few years for us to look back and say
AI wasn't just another wave, it was bigger than the internet.
Benedict, welcome back to the a16z podcast.
Good to be back.
We're here to discuss your latest presentation, AI Eats the World.
So for those who haven't read it yet,
maybe we can share the high-level thesis
and maybe contextualize it
in light of recent AI presentations.
I'm curious how your thinking has evolved.
Yeah, it's funny.
One of the slides in the deck references a conversation I had with a big-company CMO, who said, we've all had lots of AI presentations now.
We've had the Google one and the Microsoft one.
We've had the Bain one and the BCG one.
We've had the one from Accenture
and the one from our ad agency.
So now what?
So there's sort of 90-odd slides.
So there's a bunch of different things I'm trying to get at.
One of them is, I think, just to say,
well, if this is a platform shift or more than a platform shift,
how do platform shifts tend to work?
What are the things that we tend to see in it?
And how many of those patterns can we see being repeated now?
And, of course, some of the patterns that come out of that are things like bubbles,
but others are that lots of stuff changes inside the tech industry.
And there are winners and losers.
People who were dominant ended up becoming irrelevant, and then there were new billion- and trillion-
dollar companies created. But then there's also, what does this mean outside the tech
industry? Because if we think back over the last waves of platform shifts, there were some
industries where this changed everything and created and uncreated industries. And there are
others where this was just kind of a useful tool. So, you know, if you're in the newspaper
business, the last 30 years looked very different to if you were in the cement business, where
the internet was just kind of useful, but didn't really change the nature of your industry very much.
And so what I tried to do is give people a sense of, well, what is it that's going on in tech, how much money are we spending, what are we trying to do, what are the unanswered questions, what might or might not happen within the tech industry.
But then outside technology, how does this tend to play out, what seems to be happening at the moment, how is this manifesting into tools and deployment and new use cases and new behaviours?
And then as we kind of step back from all of this, again, how many times have we gone to all of this before?
It's funny, I went on a podcast this summer, and as a sort of opening line I said something like, well, I'm a centrist: I think this is as big a deal as the internet or smartphones, but only as big a deal as the internet or smartphones.
And there's like 200 YouTube comments underneath saying, this moron, he doesn't understand how big this is.
And I think, well, the Internet was kind of a big deal.
It was kind of a big deal.
And, you know, I sort of finished the deck by looking at elevators, because I live in an apartment building in Manhattan and we have an attended elevator, which means there's a handle, there's no buttons; there's an accelerator and a brake, like a streetcar, and the doorman gets in and drives you to your floor. And in the 50s they just deployed automatic elevators, where you get in and you press a button, and they marketed it by saying it's got electronic politeness, which means the infrared beam. And today, when you get into an elevator, you don't say, ah, I'm using an electronic elevator.
It's automatic.
It's just a lift,
which is what happened with databases
and with the web and with smartphones.
And I kind of think now,
it's just funny,
I've done a couple of polls on this
on LinkedIn and Threads.
So, is machine learning still AI?
The term AI is a little bit like the term
technology or automation.
It only kind of applies when something's new.
When something's been around for a while,
it's not AI anymore.
So databases certainly aren't AI.
Is machine learning still AI?
I don't know.
And there's obviously, like, an academic definition, where people say, this guy's an idiot, and of course I'm going to explain the definition of AI. But then in actual general usage, AI seems to mean new stuff. Yeah. And AGI seems, you know, like new, scary stuff. Yeah. It's funny, I was thinking about this: there's an old theologians' joke that the problem for Jews is that you wait and wait and wait for the Messiah and he never comes, and the problem for Christians is that he came and nothing happened. You know, the world didn't change, there was still sin; for all practical purposes, nothing happened.
And AGI seems to be a little bit like this.
Like either it's already here.
And so you've got Sam Altman saying,
we've got PhD level researchers right now.
And Demis Hassabis says, what?
No, we don't.
Shut up.
And so either it's already here and it's just more software
or it's five years away and will always be five years away.
Yeah, yeah.
Let's compare back to previous platforms, just because some people look at,
you know, something on the internet and say,
hey, there were net new trillion-dollar companies,
Facebook and Google,
that were created from it and just sort of all sorts of new emerging winners,
whereas they look at something like mobile and say,
hey, you know, there were big companies like Uber and Snap and Instagram and WhatsApp,
but these were billion-dollar outcomes or tens of billion-dollar outcomes,
but really the big winners were in fact Facebook and Google.
And so in some sense, mobile perhaps was sustaining,
you feel free to quibble with the definition of sustaining disruptive,
but sustaining in the sense that maybe more of the value went to incumbents,
companies that existed prior to the shift.
I'm curious how you think about AI in light of that in terms of is more of the gains
going to come to net new companies like OpenAI Anthropic and others that follow or are more
of the gains going to be captured by Microsoft and Google and meta and companies that existed prior.
So I think there's several answers to this.
One of them is like you kind of have to be careful about like framings and structures
and things because you end up arguing about the framing and the definition rather than arguing about
what's going to happen.
and they're all useful but they've all got holes in them
and what mobile did was
there's a bunch of things that it changed fundamentally
it shifted us from the web to apps for example
and it gave everybody in the world a pocket computer
so even today there's less than a billion consumer PCs on earth
and there's something between 5 and 6 billion smartphones
and it made possible things that would not have been possible without it
whether that's TikTok or arguably I think things like online dating
And you can map those against dollar value.
You can also map those against kind of structural change
in consumer behavior and access to information and things.
And I think you could certainly argue
that Meta would be a much smaller company
if it wasn't for mobile, for example.
So you can kind of argue
that puts and calls on this stuff a lot.
Certainly, not all platform shifts are the same.
And, you know, you can do the sort of standard teleology of saying, well, there were mainframes and then PCs
and then the web and then smartphones.
But you kind of want to put SaaS in there somewhere.
and you kind of want to put open source in there
and maybe you want to put databases.
And so these are kind of useful framings,
but they're not predictive.
They don't tell you what's going to happen.
They just kind of give you one way of seeing some of the patterns that we have here.
And of course, the big debate around generative AI
is whether it's just another platform shift or something more than that.
And of course, the problem is we don't know
and we don't have any way of knowing
other than waiting to see.
So this may be as big as PCs or the web or SaaS or open source or something
or maybe as big as computing
and then you've got the very overexcited people living in group houses in Berkeley, who think this is as big as fire or something.
Well, great.
But does this print new companies?
I mean, you go back before mobile: there was a time when people thought that blogs were going to be a different thing to the web, which seems weird now.
Like Google needed like a separate blog search.
Seriously, this was a thing.
There was a time when it was really not clear.
And I think, to kind of generalize this point, you go back to the internet in the mid-90s.
We kind of knew this was going to be a big thing.
We didn't really know it was going to be the web.
Before that, we knew there were going to be networks; it wasn't clear it was going to be the internet.
Then it wasn't clear it was going to be the web.
Then it wasn't really clear how the web was going to work.
And when Netscape launched
Like Mark Zuckerberg was in junior high
Or elementary school or something
And Larry and Sergey were students
And Amazon was a bookstore
So you can know it but not know it
And you could make the same point about smartphones
Like, we knew everyone was going to have an internet-connected thing in their pocket, but it was not clear it was basically going to be a PC from a has-been PC company from the 80s and a search engine company. It was not clear it wasn't going to be Nokia and Microsoft.
See, I think you have to be super careful in making
kind of deterministic predictions about this.
What you can do is say, well, when this stuff happens,
everything changes.
And that's happened five or ten times before.
I'm curious how you got conviction in this idea, or got to the prediction that, hey, AI is going to be as big as the internet, which of course is pretty big, but I'm not yet at the conviction that it's going to be any bigger.
I'm curious what sort of inspires that sort of, you know, sort of statement and then also
what might change your mind either way, you know, that it might not be as bigger the internet
because, of course, the internet was obviously very big, but also that, hey, perhaps it might be
bigger.
Well, so I think, you know, I don't want to, I made a diagram of kind of S curves kind of going
up and someone said, well, what's the axis on this diagram?
I don't want to get into, you know, is it 5% bigger than the internet, or is it 20% bigger?
I think the question is more like, is it another of these industry cycles or is it a much more fundamental change in what technology can be?
Is it more like computing or electricity as a sort of structural change rather than here's a whole bunch more stuff we can do with computers?
I think that's sort of the question.
And there's a funny sort of disconnect, I think, in looking at debates about this within tech, because I watched one of the OpenAI livestreams a couple of weeks ago.
And they spend the first 20 minutes talking about
how they're going to have, like, human-level, PhD-level AI researchers, like, next year.
And then the second half of the stream is,
and here's our API stack that's going to enable hundreds and thousands of new software developers
just like Windows, and, in fact, they literally quote Bill Gates.
And you think, well, those can't kind of both be true.
Like either I've got a thing which is a PhD level AI researcher,
which by implication is like a PhD-level CPA, or I've got a new piece of software that does my taxes for me.
And well, which is it? Either this thing is going to be, like, human-level, and that's a very, very challenging, problematic, complicated statement, or this is going to let us make more software that can do more things that software couldn't do before.
and I think there's a real like schizophrenia
in conversations around this
because, like, scaling laws, it's going to scale all the way; and meanwhile, look how good it is at writing code.
And again, like, well, is it writing code
or do we not need software anymore?
Because in principle, if the models keep scaling,
nobody's going to write code anymore.
You'll just, I say to the model, like,
hey, can you do this thing for me?
Is it a little bit of a hedge or like a sequencing thing?
Well, some of it's a sequencing thing.
But, you know, in principle,
if you think this stuff is going to keep scaling,
like, why are you investing in a software company?
Yeah.
Because, you know, we'll just have this, like, god in a box
that can do everything.
And I think this is kind of the,
funny kind of challenge, and this is, I think, the fundamental way that this is different
from previous platform shifts, is that with the internet or with mobile or, indeed, with mainframes, it's like, you didn't know what was going to happen in the next couple of years,
you didn't know what Amazon would become, and you didn't know how Netscape was going to work out,
and you didn't know what next year's iPhone was going to be, back 10 years ago when we cared about
that, but you kind of knew the physical limits. Like, you knew in 1995, you knew that telcos were
not going to give everybody gigabit fiber next year. And you knew that the iPhone wasn't going to
like have a year's battery life and unroll and have a projector and fly or whatever.
But we don't know the physical limits of this technology
because we don't really have a good theoretical understanding of why it works so well,
nor indeed do we have a good theoretical understanding of what human intelligence is.
And so we don't know how much better it can get.
So you could do a chart and you could say, well, you know, this is a road map for modems
and this is a road map for DSL and this is how fast DSL will be.
And then you can make some guesses about how quickly telcos will deploy DSL
and then you can say, well, clearly we're not going to be able to replace broadcast TV with streaming in 1998.
But we don't have an equivalent way of modeling this stuff
to know what is the fundamental capability of it going to look like in three years,
which gets you to these kind of slightly vibes-based forecasting,
where no one really knows.
So, you know, Geoff Hinton says, well, I feel like... And Demis Hassabis says, well, I feel like... but no one knows. And then Karpathy goes on Dwarkesh's podcast and says, I feel like, you know, it's a decade out.
Yeah, I know.
Well, I saw this meme of, what's his name, Ilya Sutskever, where he says, like, the answer will reveal itself. And somebody, like, memed it (I'm going to say Photoshopped, but of course it wouldn't have been Photoshop) and turned him into a Buddhist monk wearing, like, an orange outfit.
The future will reveal itself.
Well, but this is the problem.
We don't know.
We don't have a way of modeling this.
Yeah.
And so let's connect this to sort of the, you know, the upfront investment that
some of these companies are making, because we don't know, is there a risk of overinvestment
leading to some potential bubble-like mechanics? How do you think about that question?
Well, deterministically, very new, very, very big, very, very exciting, world-changing things tend to lead to bubbles. And I don't think anybody would dispute that you can see some bubbly behavior now. And, you know, you can argue about what kind of bubble. But again, like, that doesn't have very much predictive power.
And, you know, one of the features of bubbles
is that when everything's going up, everything goes up all at once
and everyone looks like a genius
and everyone leverages and cross-leverages
and does circular revenue
and that's great until it's not.
And then you get kind of a ratchet effect
as it goes back down again.
So, yeah, if we're not in a bubble now, we will be.
I remember Mark Andreessen saying, you know,
1997 was not a bubble, 98 was not a bubble,
99 was a bubble.
Are we in 97 now?
or 98 or 99, you know, if we could predict that, you know, we'd live in a parallel universe.
I think, you know, there are maybe two more specific, more tangible answers to this.
The first of them is we don't really know what the compute requirements of this stuff are going to be, except, like, "more." And forecasting that feels a lot like trying to forecast bandwidth use in the late 90s.
Imagine if you were trying to do the algebra on that.
You'd say, well, how many users?
How much bandwidth does a web page use?
How will that change?
As bandwidth gets faster, what happens with video?
What kind of video?
What bandwidth?
What bit rate of video?
How long do people watch a video?
How much video?
And then you could build the spreadsheet
and it would tell you what bit rate,
what global bandwidth consumption would be in 10 years,
and then you could try and use that to back calculate
how many routers this is going to sell.
And you could get a number, but it wouldn't be the number.
You know, there'd be a hundredfold range of possible outcomes from that.
And you could, you know, you could make the same point about algebra of consumption now.
So, you know, right now we have a bunch of rational actors saying,
well, this stuff is transformative and a huge threat.
And we can't keep up with demand for it now.
And as far as we know, the demand is going to keep going up.
And, you know, we've had a variety of quotes from all of the hyperscalers,
basically saying the downside of not investing
is bigger than the downside of over-investing.
That kind of thing always works well
until it doesn't.
And I saw a slightly strange quote from Mark Zuckerberg
saying, well, if it turns out that we've over-invested,
we can just resell the capacity.
And I thought, let me just stop you there, Mark.
Because if it turns out that you can't use your capacity,
everybody else is going to have loads of spare capacity as well.
Yeah.
All these people now who are desperate for more capacity,
if it turns out we can get the same results for a hundredth of the compute,
that will be true for everyone else too, not just you.
So, yeah, you know, in a investment cycle like this,
you tend to get over investment.
But then after that, there's very limited predictions
you can make about what's going to happen.
I think the more useful kind of way to look at this
is to think, well, you've got these kind of transformative capabilities
that are already increasing the value of your existing products,
if you're Google or Meta or Amazon,
and you're going to be able to use them to build a bunch more stuff.
And why would you want to let somebody else do that
rather than you doing it,
as long as you're able to keep funding and selling,
what you're building.
And maybe it will turn out that, you know, we have an evolution of models in the next year that means you can get the same result for a hundredth of the compute that you're using today, bearing in mind that it's already going down, like, pick your number, 20, 30, 40 times a year.
But then the usage is going up.
So you're in this very, as I said, it's like trying to predict bandwidth consumption in the late 90s, early 2000s.
You know, you can throw all the parameters in, but it doesn't get you something useful. You just kind of need to step back and say, yeah, but is this internet
thing any good?
Well, yeah, because I'm curious if the bottlenecks are, if you see them as more on the supply
side or the demand side, you know, more technical constraints or is just, is AI any good?
Are there enough use cases to justify the type of spend? What are you seeing and what are you
predicting? So, maybe two answers to this question. The first of them is, I think we've had
a sort of a bifurcation of what all the questions are. So there are now very, very detailed
conversations about chips and then very, very detailed conversations about data centers and about
funding for data centers and then about what is a new enterprise SaaS company built on
AI? What margins will it have? And how much money does it need to raise? And so there are venture
capital conversations. And so there are many different conversations within which, like, I don't
know anything about chips. You know, I can spell ultraviolet, but, like, I don't know what an ultraviolet process is. It's like, it's more violet? I don't know.
And so you've got this, you know, it's like the Milton Friedman line: no one knows how to build a pencil. And then, you know, it's turned into deployment.
I think a second answer might be, I think there's two kinds of AI deployment,
generative AI deployment.
One of them is there are places where it's very easy and obvious right now
to see what you would do with this, which is,
basically software development, marketing, point solutions for many very boring, very specific
enterprise use cases, and also basically people like us, which are people who have kind of very open,
very free form, very flexible jobs with many different things and people who are always looking
for ways to optimize that. And so you get people in Silicon Valley who are like, you know,
I spend all day in ChatGPT, I don't use Google anymore, you know, I replaced my CRM with this. And then obviously people who write code; if you're writing code, this works really well. If you're in marketing, you know, there are all these stories of big companies
where, you know, they're making 300 assets where they would have made 30. And then Accenture and
Bain and McKinsey and Infosys and so on sitting and solving very specific problems inside big
companies. Then there's a whole bunch of other people who look at it and they're like, it's okay.
and you go and look at the usage data
and you see
okay, ChatGPT has got
8 or 900 million weekly active users
5% of people are paying
and then you go and look at
all the survey data and you know it's very fragmented
and inconsistent but it all sort of points
to like something like 10 or 15% of people
in the developed world are using this every day,
another 20 or 30% of people are using it every week
And if you're the kind of person who is using this for hours every day,
ask yourself why five times more people look at it, get it, know what it is,
have an account, know how to use it,
and can't think of anything to do with it this week or next week.
Why is that?
Is it because it's early?
And it's not like a young people thing either, incidentally.
And so is that just because it's early?
Is it because of the error rates?
Is it because you have to map it against what you do every day?
And one of the analogies I used to use, which isn't in the current presentation (I've used it in previous presentations),
is imagine you're an accountant
and you see software spreadsheets for the first time.
This thing can do a month of work in 10 minutes,
almost literally.
You want to change, you want to recalculate that DCF,
that 10-year DCF with a different discount rate.
I've done it before you finished asking me to,
and that would have been like a day or two or three days of work to recalculate all those numbers. Great. Now imagine you're a lawyer and you see it.
And you'd think, well, that's great. My accountant should see it. Maybe I'll use it next week
when I'm making a table of my billable hours. But that's not what I do all day. And Excel doesn't do things that a lawyer does every day. And I think there's this other class
of person that's like, I'm not sure what to do with this. And some of that is habit. Some of that is
like realizing no, instead of doing it that way, I could do it this way.
But that's also what products are.
Like every entrepreneur who comes into a16z, when I was there from 2014 to 2019,
and I'm sure now, like, you could look at any company that comes in
and say that's basically a database.
That's basically a CRM.
That's basically Oracle or Google Docs.
Except that they realized there's this problem or this workflow inside this industry,
and worked out how to use a database or CRM
or basically concepts from 5, 10, 20 years ago
and solve that problem for people in that industry
and go in and sell it to them
and work out how they can get them to use it.
And so this is why, you know,
you look at data on this,
depending on how you count you,
the typical big company today has 4 to 500 SaaS apps in the US.
Four to 500 SaaS applications.
And they're all basically doing something
you could do in Oracle or Excel or email.
And that's the other side.
I'm monologuing, I'm afraid,
but this is the other side of what do you do with these things.
Do you just go to the bot and ask it to do a thing for you?
Or does an enterprise salesperson come to your boss
and sell you a thing that means now you press a button
and it analyzes this process that you never realized you were even doing?
Yes.
And I feel like that's, I mean, that's why there are AI software companies.
Right. Really? And isn't that what they're doing? They're unbundling ChatGPT, just as the enterprise software company of 10 years ago was unbundling Oracle or Google or Excel.
Do you have the view that, you know, what Excel did for accountants, you know,
sort of AI is now doing for coders and developers, but it hasn't quite figured out that sort of, you know, daily critical workflow for other job positions, and so it's unclear for people who aren't developers, you know, why they should be using this for many hours a day?
I think there's a lot of people who don't have tasks that work very well with this.
And then there's a lot of people who need it to be wrapped in a product and a workflow and
tooling and UX, and someone to come and say, hey, have you realized you could do it with this?
I had this conversation in the summer with Balaji, who's another former a16z person, and he was making this point about validation. Because these things still get stuff wrong, and people in the Valley often kind of handwave this away, but there are questions that have specific answers, where it needs to be the right answer or one of a limited set of right answers. Can you validate that mechanistically? If not, is it efficient to validate it with people? So, you know, the marketing use case: it's a lot more efficient to get
a machine to make you 200 pictures and then have a person look at them and pick 10 that are good
than to have people make 10 good images or 100, you know, even if you're going to make
500 images and pick 100 that are good, that's a lot more efficient than having a person make
100 images. But on the other hand, if you're doing something like data entry, and I wrote
something about this when OpenAI launched Deep Research. Their whole marketing case is it goes off and collects data about the mobile
market. I used to be a mobile analyst. The numbers are all wrong. Their use case of look how
useful this is, their numbers are wrong. And in some cases, they're wrong because they've literally
transcribed the number incorrectly from the source. In other cases, it's wrong because they've used
a source that they shouldn't have used. But like if I'd asked an intern to do it for me,
then an intern would probably have picked that up. And to my point about, you know, verification,
if you're going to do data entry, if I'm going to ask a machine to
copy 200 numbers out of 200 PDFs,
and then I'm going to have to check all 200
of those numbers. I might as well just do it myself.
So you've got
like a whole swirling
matrix of
how do you map this
against existing problems,
but the other side of it is
how do you map this against
new things that you couldn't have done before?
And this comes back to my point about
platforms here. Because I see people
looking at ChatGPT or looking at generative AI
saying, well, this is useless because it makes mistakes.
And I think that's kind of like looking at an Apple II in the late 70s
and saying, could you use these to run banks, to which your answer is no.
But that's kind of the wrong question.
Could you build professional video editing inside Netscape?
No.
But that's the wrong question.
And later, yeah, 20 years later you can.
But meanwhile, it does a whole bunch of other stuff.
The same with mobile.
Like, can you use mobile to replace, you know, your five-screen professional programming rig?
No, therefore it can't replace PCs.
Well, guess what?
Five billion people have got a smartphone
and seven or 800 million people
have got a consumer PC.
So it kind of did, but did a different thing.
And the point of this is, like, the new thing,
this is, you know, the disruption framing you mentioned earlier.
The new thing is generally not very good or terrible
at the stuff that was important to the old thing.
But it does something else.
Right. And a lot of the question is,
okay, there's a class of old tasks
that Generative AI is good at.
There's also many more old tasks
that Generative AI is maybe not very good at.
But then there's a whole bunch of other things
that you would never have done before
that Generative AI is really, really good at.
And then how do you find those or think of those?
And how much of that is the user thinking of it
faced with a general purpose chatbot?
How much of that is the entrepreneur saying,
hey, I've just realized that there's this thing
that I can do that you couldn't do before.
And here you are, I've given you a product with a button that will do it for you. Right. And that's why there are software companies.
Right. And on mobile, you know, some of the new use cases: getting in strangers' cars, you know, Lyft and Uber, or, you know, dating people you met via an app, or
sort of, you know, lending your spare bedroom out, you know, et cetera. And those were net new
companies that, you know, were built around those behaviors. And I think for AI there are still questions of, you know, what are those net new behaviors? We're starting to, you know, we're
starting to see some in terms of, you know, people engaging in talking with, you know,
chatbots instead of humans or, or in addition. And then there's a question of,
are these done by the model providers as they currently exist, or are these done by, you know, net new companies, both on, you know, sort of enterprise and consumer? Well, this is always
the question is how far up the stack does a new thing go? And, you know, I was talking about this
with another former a16z person who pointed out that, like, in the mid-90s, people kind of argued that, well, you know, the operating system does all of it. And Windows apps are
basically just kind of thin Win32 wrappers. And, you know, Office is basically just, you know, a thin Win32 wrapper; like, all the important stuff is being done by the OS, whether it's, you know, document management and printing and storage and display, which is all stuff that used to be done by apps. Like, on DOS, the apps had to do printing. The apps had to manage the display. We moved to Windows, and like 90% of the stuff that the app used to do is now being done by Windows. And so Office is just like a thin Win32 wrapper, and all the hard stuff is being done by the OS. And it turned
out that, well, again, frameworks are useful, but that's maybe not a useful way of thinking
about what's going on. And the same thing
now, like, how much
does this need
single, dedicated
understanding of how that market works
or what that market is and what you would do
with that? I mean, I remember
when we were at a16z, there was an investment in a company called Everlaw, which is legal discovery in the cloud.
And so machine learning happens,
And so now they can do translation.
Are they worried that lawyers are going to say,
well, we don't need you guys anymore.
We're just going to go out and get a translate app
and a sentiment analysis app from AWS.
No, that's not how law firms work.
Law firms want to buy a thing. Solicitors want to buy legal discovery software and management. They don't want to go and write their own by doing API calls.
I mean, very, very big law firms might,
but typical law firm isn't going to do that.
People buy solutions, they don't buy technologies.
And the same thing here, like how far up the stack
do these models go?
how much can you turn things into a widget?
How much can you turn things into an LLM request?
And how much does it turn out that, no, you need that dedicated UI?
The fun thing is you can see this around Google,
because Google had this whole idea that everything would just be a Google query,
and Google would work out what the query was.
And guess what?
Now, if you want flights, Google Flights is not just a Google query. You know, you need a UI at a certain point.
And one of the interesting things about this: I think it's interesting to think about what a GUI is doing. And the obvious thing that a GUI is doing
is that it enables Office to have 500 features
and you can find them all.
Or at least you don't have to memorize keyboard commands.
You can now have effectively infinite features
and you can just keep adding menus and dialog boxes
and eventually you run out of screen space for dialog boxes,
but you can have hundreds of features
without people needing to memorize keyboard commands.
But the other side of it is
you're in that dialogue box
or you're in that screen in that workflow
in Workday or Salesforce
or whatever the enterprise software is
whatever any software or the airline website
or Airbnb or whatever it is
and there aren't 600 buttons on the screen
there's seven buttons on the screen
because a bunch of people at that company
have sat down and thought,
what is it that the users should be asked here?
What questions should we give them?
What choices should there be at this point in the flow?
And that reflects a lot of institutional knowledge and a lot of learning and a lot of testing and a lot of really careful thought about
how this should work. And then you give somebody a raw prompt and you just say, okay, you just
tell the thing how to do the thing. And you're like, but you've kind of got to shut your eyes,
screw your eyes up and think from first principles, how does all of this work? It's kind of
I always used to talk about machine learning as giving you infinite interns. You know, imagine you've got
a task and you've got an intern, and the intern doesn't know what venture capital is.
How helpful are they going to be?
And they don't know
that companies published quarterly reports
and that we've got a Bloomberg account
that lets us look up multiples
and that then you should probably use
pitch book for this data
and rather than using Google.
This is my point about deep research.
Like, no, you should use this source
and not that source.
do you want to have to work that out from scratch
or do you want a bunch of people
who know a lot about this stuff
who have spent five years working out
what the choices should be on the screen
for you to click on?
I mean, it's the old user interface saying: the computer should never ask you a question that it should be able to work out by itself.
You go to a blank, raw chatbot screen,
it's asking you literally everything.
It's not just asking you one question
it's asking you absolutely everything
about what it is that you want
and how you're going to work out
how to do it.
And so, you know, you mentioned, you wrote about how ChatGPT isn't so much a product as a chatbot disguised as a product. I am curious, you know, when we look back at this sort of platform shift, do you think that there will be another sort of iPhone or Excel-esque product that kind of defines the future of this sort of platform shift in a way that ChatGPT won't? Or is it that the world has to catch up to how to use ChatGPT or something like it?
So both of these can be true
because, like, it took time
to realize how you would use Google Maps
and what you could do with Google
and how you could use Instagram
and all of these products have evolved a huge amount over time.
So some of it is like you grow towards realizing
what you could do with this.
Like you realize that's just a Google query now.
You realize that you could just do it like that
and you realize I spent hours doing this
and I just realized
oh I could actually just make a pivot table
the other side of it is
then but you're still then
expecting people to work it out
themselves from first principles
and you know
it's kind of useful to have somebody, a thousand, 10,000 really clever people, sitting and trying to work out what those things are
and then showing it to you as a product
I think another side of this is like
you know there are always these precursors
so like there were lots of other things before Instagram
you know YouTube didn't start as YouTube
it started as video dating I think
there were lots of attempts to do online dating that all kind of
worked until Tinder kind of pulled the whole thing inside out
and so there were always lots of things
What's the phrase, local maxima? In fact, this is where we were before the iPhone, particularly, because I was working in mobile for the previous decade.
It didn't feel like we were waiting for a thing.
It felt like it was kind of working.
Like every year, the network's got faster
and the phone has got better
and you got a little bit better every year
and we had apps and we had app stores
and we had free G and we had cameras
and stuff seemed to be, you know,
every year it was a bit better
and then the iPhone arrives
and it just, you know, blew the chart,
kind of, you know,
you've got this line doing this
and then there's a line that does that.
Although remember, also the iPhone took
like two years before it worked,
because, you know, the price was wrong and the feature set was wrong
and the distribution model didn't quite work.
And so, yeah, you know, you can think everything's going well, and then something comes along and you realize, oh, no, no, no. Which is the same for Google.
You know, like search was a thing before Google.
It just wasn't very good.
So there was lots of social stuff before Facebook, and, you know, Facebook was the thing that catalyzed it.
So, you know, I just think deterministically, this whole thing is so early
that it feels like, of course there are going to be, you know, dozens, hundreds of new things. Otherwise, a16z should just kind of shut down and give the money back to the LPs, because the foundation models will just do the whole thing.
And I don't think you're going to do that, at least I hope not.
No, no, no.
If we have any regrets from the last few years, it's not going bigger.
I think we didn't fully appreciate how much specialization
there would be across sort of, you know,
whether it's voice or image generation
or take any sort of subsector that there would be,
you know, net new companies created that would be better
than the model providers
that there would be even multiple model providers
that in every category
one thing we've always
in the Web 2 era
we've always been on the category winner
right and the category winner
would take most of the market
but these markets are so big
and there's so much expertise
and specialization
that in that one there can be winners
in every category
it's not just sort of the model providers
take everything but that even in every category
including the model providers
there can be multiple winners
and increasing specialization
and the markets are just big enough
to contain multiple winners.
I think that's right.
And I think, you know,
the categories themselves aren't clear.
And, you know, many, you know,
things you think this is a category
and it turns out, no,
it was actually that whole other thing.
And the categories kind of get unbundled
and bundled and recombined in different ways.
I remember I was a student in 1995.
And I think I had like four or five different web servers on my PC,
because I mean Tim Berners-Lee's original web browser
had a web editor in it
because he thought this was kind of like a network drive
and it was a sharing system, not really a publishing system,
so you would have your web pages on your PC
and you'd leave your PC turned on
and that would be how your colleagues would look at your word documents
or your web pages
and so again like we just don't know
and I just kind of keep coming back to this point
I feel like most of the questions we're asking at the moment
are probably the wrong questions,
and picking up on a strand within what you just said, though,
one of the things I'm sort of thinking about a lot is looking at OpenAI,
because I'm sort of fascinated by disconnections,
and we've got this interesting disconnect now,
which is that, you know, if you look at the benchmark scores,
so you've got these general purpose benchmarks
where the models are basically all the same.
And if you are, yes, if you're spending hours a day, then you've got this opinion about, oh, I like Claude's tone of voice more than GPT, and I like GPT-5.1 more than GPT-4.9, or whatever the hell it's called.
if you're using this once a week
you really don't notice this stuff
and the benchmark scores are all roughly the same
but the usage isn't. Claude has basically no consumer usage, even though on the benchmark scores it's the same. And then it's ChatGPT, and then halfway down the chart it's Meta and Google. And the funny thing is, you know, you read all the AI newsletters and, like, Meta's lost, they're out of the game, they're dead; Mark Zuckerberg is spending a billion dollars a researcher to get back in the game.
But from the consumer side,
well, it's distribution.
And the interesting thing here, what I'm kind of circling around, is: if the model, certainly for a casual consumer user, is a commodity,
and there's no network effects or winner takes all effects yet,
those may emerge, but we don't have them yet.
And things like memory aren't network effects; they're stickiness, but they can be copied.
How is it that you compete?
Do you just compete on being the recognized brand
and adding more features and services and capabilities
and people just don't switch away?
Which is kind of what happened with Chrome, for example.
There's not a network effect for Chrome,
and it's not actually much better,
maybe it's a bit better than Safari,
but you know, you use Chrome because you use Chrome.
Or is it that you get left behind on distribution
or network effects
that emerge somewhere else.
And meanwhile, you don't have your own infrastructure.
So I suppose what I'm getting at is
you've got these 8 or 900 million weekly active users,
but that feels very fragile
because all you've really got
is the power of the default and the brand.
You don't have a network effect,
you don't really have feature lock-in,
you don't have a broader ecosystem,
you also don't have your own infrastructure,
so you don't control your cost base,
you don't have a cost advantage.
You get a bill every month from Satya.
So you've kind of got to scramble as fast as you can
in both of those directions: to, on the one side, build product and build stuff on top of the model,
which is our earlier conversation,
is it just the model?
You've got to build stuff on top of the model in every direction.
It's a browser.
It's a social video app.
It's an app platform.
It's this.
It's that.
It's like, you know, the meme of the guy with the map with all the strings on it, you know.
It's all of these things.
We're going to build all of them yesterday.
And then in parallel, it's infrastructure.
Like, you know, we've got to deal with OpenAI... sorry, deal with Nvidia, with Broadcom, with AMD, with Oracle, and with petadollars.
Because you're kind of scrambling to get from this amazing technical breakthrough and these 800, 900 million users to something that has, like, really sticky,
defensible, sustainable business value and product value.
Yeah.
And so as you're evaluating the competitive landscape
among the hyperscalers,
what are the questions that you think are going to be
most important in determining who's going to gain
durable competitive advantages
or how this competitive is going to,
a competition is going to play out?
Well, this kind of comes back to your point about sustaining advantage,
and we talked about Google.
Like, if we think about the shift, particularly the shift to mobile: for Meta, this turned out to be transformative. Like, it made the product way more useful. For Google, it turned out mobile search is just search. And Maps changed, probably, and YouTube changed a bit. But basically, for Google, search is search, and mobile just means more people doing more search more of the time.
And the default view now would seem to be, well, Gemini is as good as anybody else.
Next week there's a new model; I haven't looked at the benchmarks for GPT-5.1, which is out today.
Is it better than Gemini? Probably. Will it still be better next month? No.
So that's a given. Like, you've got a frontier model, fine.
What does that cost? It costs you, pick a number, $250 billion a year, $100 billion a year.
Which is the earlier conversation about CapEx. Okay, so Google can pay that, because they've got the money, they've got the cash flow from everything else. And so you do that, and your existing products get better: you optimize search, you optimize your ad business, you build new experiences. Maybe you invent the iPhone of AI. Maybe there is no iPhone of AI. Maybe someone else does it, and you do an Android and just copy it. So fine, it's the new mobile, we'll just carry on. Search is search,
AI is AI, we'll do the new thing, we'll make it a feature,
we'll just carry on doing it.
For Meta, it feels like there are bigger questions, on what this means for search, on what it means for content and social and experience and recommendation, which makes it all the more imperative that they have their own models, just as it is for Google.
For Amazon, okay, well, on the one side, it's commodity infra
and we'll sell it as commodity infra.
And on the other side... maybe step back: if you're not a hyperscaler, if you're a web publisher, a marketer, a brand, an
advertiser, a media company, you could make a list of questions. You don't even know what
the questions are right now. What is this, what happens if I ask a chatbot a thing instead
of asking Google, even if it's Google, from Google's point, you know, what I'll ask Google's
chatbot? It's fine. But as a marketer, what does that mean? What happens? If I ask for a
recipe and the LLM just gives me the answer, what does that mean?
if my business is having recipes.
Do you have a kind of split between,
and this is also an Amazon question,
how does the purchasing decision happen?
How does this decision to buy a thing
that I didn't know existed before happen?
What happens if I wave my phone at my living room
and say, what should I buy?
Where does that take me in ways
that it wouldn't have taken me in the past?
So there's a lot of questions further downstream,
and that goes upstream to Meta and to some extent to Google.
It's a much bigger question
in the long term for Amazon.
Do LLMs mean that Amazon
can finally do really good
at-scale recommendation and discovery
and suggestion in ways that it couldn't really do in the past
because of this kind of pure commodity retailing model
that it has?
Apple is sort of off on one side. Interestingly, they produced this incredibly compelling vision of what Siri should be two years ago.
It just turned out that they couldn't make it.
Interestingly, nobody else could have made it either.
You go back and watch the Siri demo that they gave, and you think, okay,
so we've got multimodal, instantaneous, on-device tool-using, agentic, multi-platform e-commerce
in real-time with no prompt injection problems and zero error rates.
Well, that sounds good.
I mean, has anyone got that working?
Like, no.
I don't think Google or OpenAI could deliver the Siri demo that Apple gave two years ago.
I mean, they could probably do the demo, but they couldn't, like, consistently
reliably make it work.
I mean, that demo, that product isn't in Android today.
And Apple, I mean, Apple to me has the most kind of intellectually interesting question,
which is, so I saw Craig Federighi make this point, which is like: we don't have our own chatbot, fine; we also don't have YouTube or Uber. Explain why that is different, which is a harder question to answer than it sounds.
And of course, the answer is, if this actually fundamentally changes the nature of computing, then it's a problem. If it's just a service that you use, like Google, then that's not a problem, which is kind of the point about where does Siri go.
But the interesting counter example here
would be to think about what happened to Microsoft in the 2000s,
which is the entire development environment gets away from them,
and no one builds Windows apps after like 2001 or something.
But you need to use the internet.
To use the internet, you need a PC,
and what PC are you going to buy?
Well, like Apple's not really a player at that time
and only just getting back into the game.
Linux is obviously not an option for any normal person,
so you buy Windows PC.
So basically Microsoft loses the platform war and sells an order of magnitude more PCs. Well, not selling them, but there are an order of magnitude more Windows PCs
as a result of this thing that Microsoft lost.
And then it takes until mobile for them to lose the device as well as the development environment.
So the question here is, if all the new stuff is built on AI
and I'm accessing it in an app that I download from the App Store,
to what extent is this a problem for Apple
And what would have to happen? You would need a much more fundamental shift in what it was that was happening for that to be a problem for Apple. And even if you take, you know, not the full, like, the rapture arrives and we all just kind of go and live in sleeping pods like the guys in Up.
Not Up.
Yes, what is it, the one with the robot that's collecting the trash, which one is that?
WALL-E.
WALL-E, yeah.
You know, the guys in the pods in that movie.
Maybe we will be, maybe we'll be like that, in which case, fine.
But there's a sort of a midcase which is like the whole nature of software changes
and there are no apps anymore and you just go and ask the LLM a thing, fine.
What is the device on which you ask the LLM a thing?
Well, it's probably going to have a nice big color screen
and it's probably going to have like a one-day battery life.
Not just a microphone; probably a good camera. No?
Kind of sounds like an iPhone.
Yeah.
Am I going to buy the one that's a tenth of the price and just use the LLM on it?
No, because I'll still want the good camera and the good screen
and the good battery life.
So there's a bunch of kind of interesting strategic questions when you start poking away: well, what does this mean for Amazon? There are completely different questions to what does it mean for Google, or what does it mean for Apple, what does it mean for Facebook, or what does it mean for Salesforce, what does it mean for, you know, Uber?
and then right back to what we were saying
at the beginning of this conversation
what does this mean for Uber?
Well, their operations get X percent more efficient
and now the fraud detection works.
And, you know, okay, maybe there are autonomous cars,
but that's a different conversation; presume no autonomous cars,
that's a whole other conversation.
Otherwise, as Uber, what does this change?
Well, not a huge amount.
I want to sort of zoom out a little bit on this whole framing.
So you've been doing these presentations for a while.
Now you've, you know, you've bumped them up to twice a year because so much is changing.
And one of the things you do in each presentation is you're famous for asking, you know,
really great questions and chronicling what are the important questions to be asking.
I'm curious, as you reflect, you know, maybe post-ChatGPT in 2022, or GPT-3, rather,
on the questions you were asking then versus now: to what extent do we have some
direction on some of those questions, or to what extent are they the same questions,
or new and different questions? Or, you know, if I woke up from a coma
after reading your original presentation, let's say the one after
the GPT-3 launch, and then saw this one now, what were the sort of most surprising
things or things that we learned that updated those questions?
So I think we have a lot of new questions this year.
So I feel like, you know, you could make a list of, as it might be,
half a dozen questions in spring of '23, like open source, China,
Nvidia, does scaling continue, what happens to images,
how long does OpenAI's lead last?
And those questions didn't really change in '23 and '24.
And most of those questions are kind of still there.
Like, the Nvidia question hasn't really changed, you know,
or the answer on China.
The answer on, you know, how many models will there be?
The answer is, okay, anybody who can spend a couple of hundred million,
a couple of billion dollars,
can have a frontier model.
That was pretty obvious, you know, early in '23.
It took a while for everyone to understand that.
And big models and small models,
will we have small models running on devices?
No, because the capabilities keep moving too fast
for you to shrink a small model onto the device.
But those questions kind of didn't change
for two, two and a half years.
I think we now have more product strategy questions,
as you see real consumer adoption
and Open AI and Google
building stuff in different directions,
Amazon going in different directions,
Apple trying, and obviously failing, and
then trying again to do stuff.
There's some sense of like
there is something more going on in the industry
than just, well, let's just build another model
and spend more money.
Yeah.
There's more questions and more decisions now.
There are also more questions
outside of tech, certainly on, like, the retail and media side, of how do you start thinking about
what you would do with this. And again, you know, the classic framing in my deck is, step one is you make
it a feature, and you absorb it, and you do the obvious stuff; step two is you do new stuff;
step three is maybe someone will come and pull a whole industry inside out and completely redefine
the question. And so you could kind of do, like, an 'imagine if' here, of, like, step one is:
You know, you're a manager at a Walmart in the Bay Area or D.C. or whatever it is.
Step one is find me that metric.
Step two is building a dashboard.
Step three is, it's Black Friday and I'm managing a Walmart outside of D.C.
What should I be worried about?
Like, and that might be the wrong one, but it's like, you know, step one for Amazon is:
you bought light bulbs, so here are light bulbs; you bought bubble wrap,
so here's some packing tape.
But what Amazon should actually be doing is saying,
hmm, this person is moving home,
so we'll show them a home insurance ad,
which is something that Amazon's correlation systems wouldn't get
because they wouldn't have that in their purchasing data.
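To make that contrast concrete, here is a minimal sketch with entirely invented data: a classic co-purchase recommender can only surface items that appear in the same baskets, while the step-two move is asking a model to infer *why* the basket looks the way it does. The infer_intent rule below is a hard-coded stand-in for what would really be an LLM call; the orders, items, and function names are all hypothetical.

```python
# Sketch of the contrast described above: correlation vs. inferred intent.
# All data here is invented for illustration.
from collections import Counter

# Hypothetical purchase history: each order is a set of items bought together.
orders = [
    {"bubble wrap", "packing tape"},
    {"bubble wrap", "packing tape", "box cutter"},
    {"light bulbs", "batteries"},
]

def correlate(item: str) -> list[str]:
    """Step one: 'you bought X, here's Y' from co-purchase counts alone."""
    counts: Counter = Counter()
    for order in orders:
        if item in order:
            counts.update(order - {item})  # tally items seen alongside `item`
    return [name for name, _ in counts.most_common(3)]

def infer_intent(basket: set[str]) -> str:
    """Step two/three: infer why the customer bought these things.
    A real system would prompt an LLM; this rule is a placeholder."""
    if {"bubble wrap", "packing tape"} <= basket:
        return "customer may be moving home -> consider a home-insurance ad"
    return "no inferred intent"

print(correlate("bubble wrap"))                      # ['packing tape', 'box cutter']
print(infer_intent({"bubble wrap", "packing tape"}))  # moving-home inference
```

The point of the sketch is that "home insurance" never appears in the purchase data, so no amount of correlation will surface it; it only falls out of world knowledge about what bubble wrap and packing tape together usually mean.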
And we're still very much, like,
we're still on step one of that,
but thinking much more about what would step two, step three be?
What would new revenue be for this
other than just like simple dumb automation?
What would new things that we would build with this be?
Where might this actually kind of redefine
or change what the market might look like?
And that's obviously a big question for anyone in the content business.
Yeah.
What does it mean if I can just go and ask an LLM this question?
What kinds of content were predicated on Google routing that question to you?
And what kind of content isn't really about that question?
Like, do I want a Bolognese recipe, or do I want to hear Stanley Tucci talking about cooking in Italy?
Like, do I want that SKU, or do I want to work out which product I should buy?
Amazon is great at getting you the SKU, terrible at telling you which SKU you want.
Do I just want the slide deck or do I want to spend a week talking to a bunch of partners from Bain about how I could think about doing this?
Do I just want money, or do I want to work with a16z's operating groups?
Like, what is it that I'm doing here?
And I think the LLM thing is starting to crystallize that question in lots of different ways.
Like, what am I actually trying to do here?
Do I just want a thing that a computer can now answer for me?
Or do I want something else that isn't?
Because LLMs can do a bunch of stuff that computers couldn't do before.
Right.
Is that thing that the computer couldn't do before my business?
Or am I actually doing something else?
We're about to figure out, in a much more granular way,
what the true job to be done is for many, many of these.
Yeah.
And going back to the internet, there was, you know,
the sort of observation about newspapers, that newspapers looked at the internet
and they talked about, you know, expertise and curation and journalism and everything else
and didn't really say, well, we're a light manufacturing company
and a local distribution and trucking company.
And that was the bit that was the problem.
And until the internet arrived, like, that wasn't a conversation you thought about.
And then the internet suddenly makes that clear
and suddenly creates an unbundling that didn't exist before.
And so there will be those kinds of 'you didn't realize you were that before' moments,
until someone comes along with an LLM and says,
I can use this to do this thing that you didn't really realize
was the basis of your defensibility or the basis of your profitability.
I mean, it's like the joke about, you know, US health insurance
that, like, the basis of US health insurance profitability
is making it really, really boring and difficult and time-consuming.
That's where the profits come from.
Maybe it isn't.
I don't know.
I don't know that industry.
But for the sake of argument, say that's your defensibility.
Well, an LLM removes boring, time-consuming, mind-numbing tasks.
So what industries are protected by having that, and didn't realize it?
And these, you know, it's like you could have asked these questions
about the internet in the mid-90s
or about mobile a decade later,
and generally half of the questions
would have turned out to be the wrong questions in hindsight.
I remember, when I was a baby analyst in 2000,
everyone kept saying, what's the killer use case for 3G?
What's a good use case for 3G?
And it turned out that having the internet in your pocket
everywhere was the use case for 3G,
but that wasn't the question that people were asking
and I'm sure that will be the thing now:
there's so much that will happen and get built
where you go and you realize,
oh, that's how you would do this.
You can turn it into that.
Yeah.
And I'm sure you've had this experience, seeing entrepreneurs.
You know, every now and then,
they come in and they pitch the thing,
and you're like, oh, okay.
You can turn it into that.
I didn't realize it was that.
Yeah, no, 100%.
My last question, then we'll get you out of here:
if we're talking two or three years from now,
or you're doing a presentation,
and you say, oh, this is actually bigger
than the internet, or maybe this is
like computing, what would need to be true?
What would need to happen?
What would evolve our thinking?
I mean, I kind of, you know,
sort of come back to my point about, you know,
the Jews and the Christians: the Messiah came,
and nothing happened.
We forget. I mean, there are maybe two ways, two very brief ways, to think about this.
One of them is I think we forget how enormous the iPhone was and how enormous the internet was.
And you can still find people in tech who claim that smartphones aren't a big deal.
And this was the basis of people complaining about me: like, this idiot, he thinks, like, generative AI is as big as those silly phone things.
Come on.
I think another answer would be, like,
I don't want to get into the argument about
what the rate of increase in capability is, and benchmarks,
and all that, you know; you can see lots of five-hour-long podcasts
of people talking about this stuff,
but the stuff we have now
is not a replacement for an actual person
outside of some very narrow
and very tightly constrained guardrails
which is why, you know, there's Demis's point that it's absurd to say
that we have PhD-level capabilities now.
We would have to be seeing something
that would really shift our perception
of the capability of this stuff.
So that it's actually a person,
as opposed to, it can kind of do these people-like things
really well sometimes, but not
other times. And it's a very tough conceptual kind of thing to think about because, you know,
I'm deliberately, consciously not giving you a falsifiable answer. But I'm not sure what a
falsifiable answer would be to that. When would you know whether this was AGI? You know, it's the
Larry Tesler line: AI is whatever doesn't work yet. As soon as it works, people say, well,
that's just not AI, that's just software. And it becomes, like, a kind of slightly drunk
philosophy-grad-student conversation as much as it is a technology conversation: like,
have you ever considered, Erik, that maybe we're not... Anyway. All I can say, to try to give
a tangible answer to this question, is that what we have right now isn't that. Will it grow to
that? We don't know. You may believe it will. I can't tell you that you're wrong. We'll just
have to find out. I think that's a good place to wrap. The presentation is 'AI Eats the World.'
We'll link to it. It's fantastic.
Benedict, thanks so much for coming on the podcast to discuss it. Sure. Thanks a lot.
Thanks for listening to this episode of the a16z Podcast. If you like this episode,
be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends
and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify,
follow us on X @a16z, and subscribe to our Substack at a16z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only;
it should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security,
and is not directed at any investors or potential investors in any a16z fund.
Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
