The Knowledge Project with Shane Parrish - Benedict Evans: The Patterns Everyone Else Misses
Episode Date: September 2, 2025

Benedict Evans has been calling tech shifts for decades. Now he says forget the hype: AI isn't the new electricity. It's the biggest change since the iPhone, and that's plenty big enough. We talk about why everyone gets platform shifts wrong, where Google's actually vulnerable, and what real people do with AI when nobody's watching. Evans sees patterns others don't. This conversation will change how you think about what's actually happening versus what everyone says is happening.

Approximate Timestamps:
(00:00) Introduction
(01:04) What's Your Most Controversial Take on AI?
(05:11) Platform Shifts - The Rise of Automatic Elevators
(10:07) Profit Margins in AI
(26:37) What Are the Questions We Aren't Asking About AI?
(39:41) What Benedict Uses AI For
(44:21) Thinking by Writing
(47:35) Can AI Make Something Original?
(52:31) Advice for Students in the Age of AI
(59:32) Who Will Win the AI Race?
(1:11:09) What Is Success for You?

Thanks to our sponsors for this episode:
SHOPIFY: Sign up for your one-dollar-per-month trial period at www.shopify.com/knowledgeproject
REMARKABLE: Get your paper tablet at reMarkable.com today
NOTION MAIL: Get Notion Mail for free right now at notion.com/knowledgeproject

Upgrade: Get hand-edited transcripts and an ad-free experience, along with my thoughts and reflections at the end of every conversation. Learn more at fs.blog/membership

Newsletter: The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter

Follow Shane Parrish: X @ShaneAParrish, Insta @farnamstreet, LinkedIn

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
It seems to me right now you could do like a double-blind test of the same prompt given to Grok, Claude, Gemini, Mistral, DeepSeek.
I bet most people wouldn't be able to tell which is which.
Benedict Evans is a technology analyst known for his insightful takes on platform shifts in the tech industry.
He sees AI differently than others.
He spent decades spotting patterns others miss and dives into how people really use AI.
Why is it that somebody looks at this and gets it and goes back every week, but only every week?
The very high-level threat to Google is that you have this moment of discontinuity in which
everybody resets their priors and reconsiders their defaults.
And so it's no longer just the default that you go and use Google.
There's this sort of question for Apple around, does this net actually change the experience
of what a smartphone is, what the ecosystem is?
Does it end up kind of getting Microsofted in the sense that...
I want to start with your most controversial take on AI.
It's funny. I suppose my take on AI, controversial take on AI,
rather like my controversial take on Crypto is being a centrist,
in that it seems to me very clear this is like the biggest thing since the iPhone,
but I also think it's only the biggest thing since the iPhone.
And there's a bunch of people who think, no, it's much more than
that. At a minimum, it's more like computing. And then you've got people going around saying,
no, this is more like, you know, electricity or the industrial revolution or, you know,
transhumanism or something. My sort of base case is to say, this is kind of another platform shift,
and all the new stuff will be built around this for the next 10 or 15 years. And then there'll be
something else. And so the impact on employment will be kind of like the impact on employment
from the other platform shifts, and the impact on the economy and products
and intellectual property, and there'll be a whole bunch of different weird, new questions,
just like there were a bunch of different weird, new questions before, and then in 10 years'
time, it'll just be software. Put this in historical context for us with other platform shifts.
Everybody's saying this time is different, which everybody does at each platform shift,
I would imagine. What's the same? Well, there's a famous book about financial bubbles
called This Time Is Different, because people
always say this time is different, and it always is.
Like, the dot-com bubble was different to, like, the late 80s,
and the Japanese financial bubble was different to, you know,
pick any other bubble you want.
They're always different.
But that doesn't mean that it's not a bubble.
And the same thing here. I have a diagram
I use a lot, from 1995.
This research firm made a diagram
of something they called cyberspace.
Because it wasn't clear it was just going to be the internet.
It was clear that everyone was going to have
some kind of computer thing
connected to some kind
of network. But remember the phrase information
superhighway? Yeah. Which sort of
conveys that it would be centralized and
controlled by cable companies and phone companies
and media companies, which is sort of how everything
had previously worked. It wasn't clear,
no, that it was going to be the internet. It wasn't
clear the internet was going to be kind of radically decentralized
and permissionless and anyone could do what they wanted.
It wasn't clear the internet was going to be the web
and only the web because there were all
these other things going on. If you look at Mary Meeker's
first big public internet report from
1995, she has a separate forecast for web users and email users, and she thought email users
would be way bigger. It wasn't clear like that was all one thing. And then it wasn't clear that
it was about the browser. It wasn't clear that the browser wasn't where the value capture
was, because Microsoft crowbarred its way into dominance in browsers, but that turned out
not to matter. And then all the value is in site advertising and social, which were five years later
and 10 years later. And so, like, you can be very, very clear that this is the thing and then
still be completely unclear how it's going to work.
Same thing with mobile internet.
It's funny, mobile internet now, it's kind of like saying black and white television or
colour television.
Desktop internet, mobile internet, black and white TV, colour TV.
No one really says mobile internet anymore.
It's like talking about e-commerce.
You're starting to have people talk about physical retail and retail.
But it wasn't clear, you know. I was a telecoms analyst in 2000.
And I was very clear mobile internet was going to be a thing.
It was not clear that there would be basically small PCs.
Like that was the fundamental shift of the iPhone:
it's a small Mac.
It's not a phone with better UI.
It's a small Mac.
And it wasn't clear that the telcos would get no value.
It wasn't clear Microsoft and Nokia would get no value.
It wasn't clear it would take 10 years before it took off.
And it wasn't clear it would replace the PC as the center of the tech industry.
I mean, everyone was talking about, well, what's a mobile use case?
What would you do?
You'll do some things on your mobile phone, but what?
But obviously your PC will be how you use the internet.
And of course, that's not how it worked.
And so we kind of forget, because now we don't see it, because now it just kind of became part of the air we breathe, how weird and strange and different all these things are.
there's something I love talking about which is the rise of automatic elevators
So until the 50s, elevators were manually operated.
They were basically vertical streetcars; they were trams; they were trains.
And you have a driver who has a lever with an accelerator and a brake.
If you've been into a New York co-op, you may have seen one of these.
They call it an attended elevator.
There's a lever.
You push it that way to go down, middle to stop, that way to go up.
And then in the 50s, Otis creates the Autotronic, I think it's called, the Autotronic elevator,
which had electronic politeness, which basically meant the infrared thing that stops the door closing.
But if you get into an elevator now, you don't say, oh, I'm going to use an automatic elevator with electronic politeness.
It's just a lift.
We kind of forget how weird and different all the other things were.
And yes, this is new and weird and different in a bunch of kind of strange, confusing,
confounding ways we can probably talk about.
But we sort of forget that other things
were weird and strange and different, too.
Is this the first major platform shift
where the incumbents have an advantage
because they have the data?
I'm pretty sure people thought Microsoft
had an advantage on the internet
and Google and Meta had an advantage on mobile.
And everyone thought IBM was going to win PCs.
Once IBM made a PC, that was it.
It's all over now.
And we kind of forget
that there were PCs before, and then IBM made one and that kind of became the standard, but
then IBM lost it. So what happens with the incumbents? Do they grab on to using the technology
instead of adopting it, because adopting it would mean killing the golden goose? Like, what happens
in a platform shift with incumbents? The master of my college at Cambridge said that history
teaches us nothing, except that something will happen, and, you know, there's always the example
and the counterexample. So with any new platform shift...
And the term platform shift itself is, you know, a useful term,
but you have to be careful not to be trapped by your terminology
and get into these sort of arguments about, well, is it a platform shift or is it not a platform shift?
And how do you define a platform?
Shut up.
Like, you know, the thing is, with any of these sort of fundamental technology changes,
the incumbents always try and make it a feature, and they try and absorb it.
And the same thing outside of technology, existing companies try and absorb it
and they use it to automate the stuff they're already doing.
And then over time, you get new stuff,
you unbundle both the incumbents in tech
and you unbundle existing companies
because of something that's possible
because of this new technology.
So you can always kind of jump into the new thing.
And sometimes the new thing kind of really is just a feature.
And sometimes it's a fundamental change
in how everything works.
And sometimes it's sort of contingent.
You know, there's this whole sort of parlor game,
like a drinking game, that historians play
about historical inevitability. You know, what would have happened if that battle had been lost
or if that politician had been assassinated or not assassinated? And it depends. Sometimes the
answer is, well, then everything would have been completely different. And
sometimes the answer is, well, no. You know, what if Napoleon had won at Waterloo?
Well, then he'd have lost another battle six months later. Like, nothing would have changed.
The whole environment had changed. What if, you know, the revolution hadn't happened in spring
1917? Then it would have happened in the summer or the autumn. Sometimes it's like really
clear. I mean, I always think the Kodak example here is kind of interesting. Tell me a bit.
Because you know, like, it's like the cliche that people say, oh, Kodak had digital cameras and
they didn't get it or they ignored it or they didn't want to do it because it would destroy their
business. But then you go and look at it and that's like, well, that was 1975. And the thing they
had was the size of a refrigerator. Like, no, that was not a consumer product. And it took
until the late 90s before the technology was actually viable as a consumer product. So, of
course, like, they didn't do it in the 70s because you couldn't.
What actually happens is once it starts happening, Kodak go all in on digital cameras.
At one point, they were the best-selling digital camera vendor in the USA.
And if you look at their annual reports at the time, they think this is going to be great, because they're going to sell way more photo printers.
So they're selling these inkjet photo printers.
People are going to produce way more photos.
So they're going to take way more.
They're going to print them all.
Two things screwed Kodak.
One of them is smartphones.
And you could argue that what actually screws Kodak is not the camera.
It's social media, and it's not printing more.
That's what killed it.
That's one side.
The other side of it is that film was this high margin product where they had a bunch of
unique intellectual property and digital cameras are a low margin commodity where they
were competing with the entire consumer electronics industry with no differentiation.
And so, the point being, even if you go all in on that market,
it's still a crappy market where you've got no differentiation.
So you can kind of, you know,
put all of these things on the table and shuffle them around and say,
well, in hindsight, obviously Kodak were screwed.
And in hindsight, obviously, Google is going to be able to make the jump.
And in hindsight, and in hindsight, and...
Yeah, maybe.
Is there a parallel between the second point you made about Kodak and Google today, where, you know,
they have a high-margin search business and a low-margin AI business?
So I think it's...
I'd be nervous about knowing what the margins are in AI, because we've had, you know,
depending on who you ask, the price
to get a given result has probably come down by two orders of magnitude.
But then that was the state of the art two years ago.
And now there's a new thing, which is more expensive.
And so there's an awful lot of kind of shifting ground.
And there's a lot of algebra, and all the variables
in the algebra are all changing at the moment.
So it's kind of hard to know quite what that is.
I think the obvious Google threat right now
is that Google shows you a bunch of links and results and ideas.
And those could now be solved in a different way.
Well, let me kind of go back a second.
The very high-level threat to Google is that you have this moment of discontinuity
in which everybody resets their priors and reconsiders their defaults.
And so it's no longer just the default that you go and use Google.
And for this search or that search, like maybe Bing is 10% better on that search.
In fact, as we saw from the Google antitrust trial last year, Google is actually still the best search engine by quite a long margin relative to the other traditional search engines.
But what we have now is like a reset of the playing field.
And Google has a whole bunch of advantages as to why they might win in that playing field.
But there's a reset both of what the product is and how you sell it and your org structure around selling it.
And do you have the right politics and the right org structure to build that and the right incentives?
and internal conflicts.
And then the consumer behavior
kind of gets reset as well.
My understanding of AI
is that so much is data driven.
Like you have proprietary data sources.
You have better data,
so you can train a better model.
That gives you one of the key inputs.
So I think it's actually the opposite,
which is that everyone's kind of using the same data,
which is you need such an enormous amount
of generalized text,
that the amount that Google has
or that Meta has is not actually
enough to be a kind of fundamental difference in what you can train with.
So you don't think, like, YouTube as a repository is an advantage, like a significant one?
So it depends.
Push back.
Yeah, so it depends.
So the models that we're training now, we're training on text, so that's not really being trained on
YouTube. We saw this lawsuit around book copyright with Meta, that they downloaded a torrent of pirated
books, because, guess what, they don't have enough text, and it's not the right kind of text.
They don't have lots of prose; they've got lots of short snippets of text. So I think the generality of
LLMs is, you just need such an enormous amount of data that everyone kind of needs all the text that there
is, and all the text that there is is kind of equally available to anyone.
So the data is a level playing field, effectively?
Yes, because you need so much more, and it's also not
necessarily the kind of data that you have. So obviously Google has, you know, an enormous
repository of scraped data, because they read the web all the time, but anyone else with a
billion dollars can go out and do that. Right. Or you can go and download the Common Crawl from
AWS. How far away do you think we are from autonomous sort of AI making AI better? So
no human intervention, but AI sort of going out in the real world, getting feedback, adapting itself,
and making itself better.
There is this sort of, like, parody of, like, the foom.
Like suddenly, magically this thing grows and becomes amazing and learns everything.
I don't think we're at that stage now.
I don't think anyone really knows when it would happen.
So it's very sort of impressionistic.
I think another answer might be you kind of have to be very careful looking at headlines
and thinking like what exactly is that telling me?
So Anthropic has done a bunch of things where they say like the AI was threatening to blackmail me.
Yeah, I saw that.
And you read the story and you think, okay, you asked what's basically a story generating machine.
Please tell me a story of what you would do in this situation where most people would probably say X.
And the machine says probably X.
And you say, my God, it said it would do X.
It's like, well, yeah, it would blackmail.
It's based on human behavior.
It would blackmail you.
Okay, how would it do that?
Yeah.
What do you mean it would blackmail you?
It's kind of like, the reductio ad absurdum of this (this is a point somebody else made) is, you write murder is good on a piece of paper, and you put it in a photocopier, and you press go.
And you say, my God, the machine says murder's good.
Well, no, you told the machine to say that.
And that's what these Anthropic studies are.
Basically, you tell the machine to say a thing, and then it says it. Like, well, you haven't
proved anything.
the truth is success in selling often comes down to something much simpler the system behind the
sale that's why i use and love shop pay because nobody does selling better than shopify they've
built the number one checkout on the planet and with shop pay businesses see up to 50 percent
higher conversions. That's not a rounding error. That's a game changer. Attention is scarce. Shopify
helps you capture it and convert it. If you're building a serious business, your commerce platform
needs to meet your customers wherever they are on your site, in store, in their feed, or right
inside their inbox. The less they think, the more they buy. Businesses that sell more sell on Shopify.
If you're serious about selling, the tech behind the scenes matters as much as the product. Upgrade your
business and get the same checkout that I use.
Sign up for your $1 per month trial period at Shopify.com slash knowledge project, all
lowercase.
Go to shopify.com slash knowledge project upgrade your selling today.
Shopify.com slash knowledge project.
Where do you stand on regulation of AI?
So I think regulation of AI is sort of the wrong level of abstraction.
Talking about regulating AI as AI is the wrong level of abstraction.
It's like saying we're going to regulate databases or regulate spreadsheets or regulate cars.
Well, we do, but not like that.
When you regulate stuff, there are trade-offs.
You learn about this in your first year in economics class.
Like, regulation has costs and consequences; there's always a trade-off.
And often you're making product decisions or engineering decisions that do actually have trade-offs.
There's like a three-way trade-off of, like, what's good for the product, what's good for the consumer, what's good for competition, what's good for the company.
I think the regulatory stuff is interesting in the framework of multiple
countries sort of competing for superintelligence. How would you advise a country to prepare for
AI if I'm the president of the United States and I call you and I say Benedict, you have five
minutes. I need, what do I need to prepare for? What can I do to put our country in the best position
possible for AI? Well, what's your objective? Is your objective to have a nice press release?
No, it's to dominate AI. A long time ago, I used to get these questions about like how can we
replicate Silicon Valley. And I always felt like the answer to those questions, as much as
possible, is, like, you can't. I mean, occasionally there are things you can do, like you can create
funding structures, you can make it easy, you can, you know, try and jumpstart startup
ecosystems, you can try and jumpstart funding availability. But most of the answer is things like
getting out of the way. I think that the idea of trying to create national champions is very
hard. Now, that almost kind of becomes an economist's question rather than a technology question,
How do you create national champions?
Where does that work?
Where does that not work?
I'm sure there's a bunch of books and papers about where does industrial policy work, where does it not work?
From a technology analyst perspective, I think of this in terms of, A, what are you doing that would make this harder?
And B, think of this as just more startups.
What are the things that we're doing that make it harder to develop that ecosystem?
Without picking a winner, it's not about picking a company and backing them.
If you do like this, this ridiculous law that California had a year or two ago,
if you treat this as like nuclear weapons and you say this is incredibly dangerous
and we need to have it under extremely tight control so that nobody does anything bad with it.
Which is basically the EU approach.
Go back to your economics class.
Policies have tradeoffs.
To govern is to choose.
You're making a choice when you do that, and that choice has costs.
Personally, like most people in tech, I think the idea
that this is all going to kind of produce bioweapons
and take over the world and kill us all
is just idiotic.
I think there's just a bunch of kind of
childish logical fallacies within that.
But you have to be conscious of the choice you're making.
You know, the kind of Biden approach to generative AI
very explicitly was to say
this is sort of social media 2.0.
Like, social media 1.0 was terrible
and destructive and bad.
And I don't agree with that.
I think there's a huge dose of moral panic within that.
But be that as it may,
if you make a decision,
that says we are deliberately and explicitly going to make it really hard to build models
and really hard to start a company that builds models
and really hard to do anything with any of this stuff,
then guess what?
It's kind of like, you know, the mayoral election in New York today.
Like, if you make it really hard and expensive to build houses,
houses will be more expensive.
You've made that choice.
If you do that, you cannot then complain that houses are more expensive.
You can choose that, but you can't complain.
Why do you think as a society we don't understand that?
Part of this is that, like, in most non-emotive fields, we kind of do.
People understand that, you know, more employment regulation tends to produce slower growth, but more protection for employment, and you're choosing a trade-off.
I think everybody on both sides of that equation understands that that's the trade-off, and you're choosing one versus the other.
The point is, you can have a fully functioning free market, and you can regulate some of the negative externalities of free markets, which anybody at any point
on the economic spectrum understands.
You can also have like a government
provided alternative.
You can have the government
do the fire department.
Where you have some of the kind of
the most obvious gaps
between the US and Europe,
it seems to me sometimes,
are in places where you kind of have neither.
So the US neither has
a government-controlled healthcare system
nor a free market healthcare system.
Do you see what I mean?
You have neither government-controlled housing,
which you have in like in weird places
like Singapore, nor a free
market in housing. So you kind of
break the free market, so you stop the
price signaling. This is like the great insight of Hayek,
is that pricing is a signal. Pricing is
an information system. It's telling people what's
wanted. It's not just a signal
of worth. It's a signal of demand. There's
a fascinating book I read a while ago called
Red Plenty, which is about
Soviet Central Planning in the 60s,
70s, 80s. And it's about
sort of what happens
when you have central planning that
just cannot cope with the level of complexity
of a sophisticated economy in the 60s and 70s,
as opposed to, let's make grain and tractors
and locomotives in the 20s and 30s and steel,
which really kind of works.
But once you actually have a sophisticated industrial economy,
central planning can't handle the complexity.
And so you try and create incentives
and structures around that while not having pricing.
And that just doesn't work.
I suppose there's a sort of a generalized point,
which is like a market economy is a system.
And if you pull a lever here,
something will move there.
And you can't just pull a lever here
and say, well, I don't want that to move because it's a democracy.
It will move anyway.
And so you have to understand how the system works
and understand what consequences you want from that
and what your parameters are within this.
One of the things that I admire about you
is that you're sort of known for spotting patterns.
I have a theory on how to learn pattern matching.
And I'd love to hear your pushback on this.
My theory on how we learn is, I call it the learning loop.
We have an experience.
We reflect on that experience,
and we create a compression,
and that compression becomes our takeaway.
So we can watch a movie, read a book,
and come away with a compression of it.
But you can work backwards from that compression to the experience.
But what we consume most of the time is other people's compressions.
So like when people read your newsletter,
they're consuming a compression of the work that you've done,
but not the actual raw work.
So in a way, it's an illusion of knowledge
if you haven't done the work in that area.
And it's funny, I have a draft thinking about, like, what LLMs do to web search and publishing and discovery in e-commerce, and, like, a big sort of hand-wavy, fuzzy, all of that stuff.
And I was sort of thinking about this, and there's a book written by a French academic sort of 20 years ago or something called How to Talk About Books You Haven't Read, which sounds very kind of snide,
but kind of his point is that like
there's the book you read when you were 17 and you really didn't get it
and if you read it now you'd get it
and there's the book that like he's got this kind of list of like
there's the books that everybody else has read so it's to say you've read them
there's the books that like you've read three other books by that writer
so you don't really need to read this one too
You get it. Like, do you need to read another Malcolm Gladwell book
if you've kind of got the Malcolm Gladwell experience?
So there's this sort of generalized sense of pattern
and accumulation of what you've
seen, what you've half seen, what you half remember. There's also, I think, you know, what your
viewers, listeners might notice is I kind of have two modes, two or three modes. I have a mode that's
sort of discursive and slightly rambling and free associating and I'll kind of spiral off in different
directions and hopefully come back to the point. And then there's a mode where I want to try and
pin the thing down and break it apart and say what are the two, three, four things that are happening
here, which is what you see in the slides: no, like, what is it? It's this, and then this, and then
that. And that act of capturing it, that's a way of trying to understand what this is.
I try and work out what I think about this, how I understand it, how you can break it apart
by kind of pinning it down. The thing about data is, and the thing about the slides and
the analysis is, like, I'm always asking, who cares? And I'm always asking, yes, but what
actually matters here? Why are you showing me this slide? Why am I showing you this chart?
And so you kind of have to ask, like, well, what are the actual questions? What are the questions
we're not asking on AI that we should be asking? I mean, we're asking, okay, well, there are some
people who are saying all the value capture is going to be in the models. There's this kind of
funny split between people who are just talking about the models getting better, and everybody else
is saying, well, all the value is going to be in the application layer and, you know, what are
the companies, and let's fund Cursor and let's fund all of that stuff, and why isn't there
a consumer breakout yet. And other people are saying, what do you mean there isn't a consumer
breakout? Everyone's using ChatGPT. To which the answer is, well, not exactly, which is my data point:
some people are using ChatGPT; most people look at it and don't get it still. Just fascinating.
There's this kind of core where's-the-value-capture question. Then there's a bunch of
questions we could have asked two years ago where we don't have an answer. Will the error rate
ever be controllable or manageable? Will you ever get to a model that knows when it's wrong, which,
to me, given it's a statistical system, seems like a contradiction in terms?
But maybe. You could make a list of, like, a dozen questions we could have
asked in early '23. We didn't really have answers to any of those. I mean, there were some people
who were asking, are these things commodities, will China catch up, to which the answer even then was
obviously yes, of course, which is what happened, which DeepSeek kind of demonstrated. But we don't
have that many new questions since then. The thing that I puzzle about right now is, first of all,
there's this whole nexus, as I said, of, like, what do LLMs do to commerce?
You know, we have infinite product, infinite retail, infinite media.
How will you choose what to buy?
What happens if I go to an LLM and say, what mattress should I buy?
What life insurance should I get?
How does that work?
Well, that poses dozens of questions we don't know the answers to yet.
Then there's a question around like the differentiation in the LLMs as product.
Like, it seems to me right now you could do, like, a double-blind test
of the same prompt given to Grok, Claude, Gemini,
Mistral, DeepSeek — do a double-blind test —
and I bet most people wouldn't be able to tell which is which.
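The blind test he describes can be sketched as a tiny scoring harness. The model names and canned answers below are placeholders, not real API calls — just the shape of the experiment:

```python
import random

def blind_trial(answers: dict[str, str], guesses: dict[str, str]) -> float:
    """Shuffle (model, answer) pairs so a rater can't use ordering,
    then score how often their guesses (answer text -> model) are right."""
    items = list(answers.items())
    random.shuffle(items)  # hide which model produced which answer
    correct = sum(1 for model, text in items if guesses.get(text) == model)
    return correct / len(items)

# Hypothetical round: a rater labels two anonymized answers.
answers = {"ModelA": "Answer one.", "ModelB": "Answer two."}
guesses = {"Answer one.": "ModelA", "Answer two.": "ModelB"}
print(blind_trial(answers, guesses))  # 1.0 — a rater who can always tell
```

A rater who really can't tell the models apart would score near chance (0.5 here) over many trials, which is the point Evans is making.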
That question of like, is there product differentiation?
Can there be product differentiation around the LLM as consumer product?
Because right now the models are commodities,
but ChatGPT has way, way, way more usage.
So ChatGPT is at the top of the app store rankings.
Gemini bubbles between, like, 50 and 100.
None of the others are in the top 100.
Same in Google Trends,
same in the usage numbers, same in the revenue.
There's revenue for corporate APIs — corporate is a whole
other story — but as a consumer thing,
it's like ChatGPT is now the brand.
It's the default. It's the Google. You use it
because you've heard of it, and none of the others
have broken through. Is that where we are now?
But then if you look at the products —
never mind the models —
the underlying models are all the same, the products are all the same,
right? It's really hard to tell the difference,
except, like, they've got different color schemes and different
icons, different branding.
Different branding, but the product is all the same.
And this reminded me of looking at browsers,
in that browsers are all the same.
The rendering engine underneath might be different,
just as the LLM might be different.
But you've got an input box, an output box,
and the output box shows whatever the rendering engine gives you.
And the only innovation in browsers in the last 25 years
is basically tabs and merging search into the address bar.
And there are, like, new browser projects —
people trying to do it now,
but it hasn't worked, hasn't got traction.
And is that sort of how LLMs will work,
in that it's about the distribution and the brand,
not actually about the product or the model?
Or is it maybe more like social,
in that, yeah, photo sharing is a commodity,
but there's a big difference
between Instagram and Flickr
and all the other people that tried to do photo sharing.
And so you have to really...
That would almost be an argument
that it's sort of winner-take-all, right?
It's very hard for — like, use Claude as an example —
it'd be very hard for Claude to compete
if they don't have enough
usage to continuously make the investments.
Well, that's a slightly different thing.
So there doesn't appear to be a sort of self-reinforcing cycle in which more people
use it because more people use it —
the product gets better because more people use it,
so more people use it — which is what you have with operating systems, because you have
more apps, therefore more users, therefore more apps.
It's what you have with Google search: Google has all the feedback from how people use it,
and that makes the search engine better.
You have a network effect on social media,
in that you're there because your friends are there, because you're there.
There's no apparent equivalent in LLMs right now.
There's no reason why the LLMs get better because more people use them.
Now, that may come.
You have that — OpenAI and people have been doing memory,
where it remembers what else you've asked.
But that seems more like a switching cost than a network effect.
And also, it might be easy for you to just ask it what it knows about you and then tell
Claude, or vice versa.
So it's not clear.
But we are at that sort of stage
where you're looking at the browser
and saying, is there a way that you can create
stickiness here,
or that you can create a network effect
in the browser,
or is it just that the browser
itself is a commodity?
Now, capital is not a winner-takes-all effect
in the conventional sense.
Or anyway, it's a different kind
of winner-takes-all effect.
I mean, I wouldn't conventionally think
of capital as a network effect —
it's not something
that's inherent
in the product.
It's something else.
It may be that, yes,
ChatGPT, the app, has more
money, so they can make their model better.
There's, like, six rabbit holes I want to go down before we move on to something new.
If OpenAI can — I like your point about, sort of, at the point where AI gets better because people
are using it, then there's a huge advantage to being OpenAI.
But we don't have visibility on what that would be yet.
At that point, though, whoever's in the lead would sort of...
If it did, then you could get kind of a runaway.
But we should kind of go back and think about MySpace, because in this, it would be MySpace.
You know, in the early phases of these things —
and you see the same thing with the early PC industry — you've got a dozen of them.
And there's often an early leader that falls away later.
And so MySpace was the early leader that fell away later.
Then you get a late stage where the S-curve has kind of flattened out,
where all the network effects have kind of solidified and the product quality has solidified.
It was very easy, actually, to get people to switch back and forth between MySpace and Facebook and Bebo
and Friends Reunited and whatever — Orkut — and all these other things
in the early days
then you kind of get this
separation out
but then of course
then you get
then Instagram comes along
and then TikTok comes along
and so as soon as you have something
that's a different proposition
that turned out to be extremely easy
to pull that away
you know
Google
lost to YouTube
they had to buy YouTube
Facebook lost to
Instagram and WhatsApp
and they had to buy them both
so those are quite fragile
and quite narrow
winner-takes-all effects,
or at least they appear to be
we don't know what that would be
or what the modalities would look like
modalities. Sorry, that's a great meaningless word. It's like saying
societal. We don't know what that would look
like and therefore we can't. We don't know how
rigid it would be or how it would work because we don't
have it yet. And, as I'm sure
you know, they're not retraining the models
all the time with the data.
So you don't have that kind of runaway effect
as like continuous flow and more queries
produces, you know, better results.
So it's kind of tricky
to do that yet. I want to come back
to something you said. You said some people
look at ChatGPT and don't get it.
Yeah, I think this is really important.
There's a whole bunch of survey data on how many people are using this stuff.
You've got the numbers from OpenAI, who say, well, we've got this many
weekly active users.
The funny thing about social is, when social happened, people would talk about registered
users.
You remember in the early days of the internet, people would talk about hits.
Yeah.
And then we realized that if your webpage has seven items in the menu bar, that's seven GIFs, so
that's seven hits.
So hits was meaningless, and you had to switch to page impressions.
And then it's registered users.
And then it was monthly active users.
And on social people said, well, hang on, if you're using Instagram once a month, you're not using it.
It's daily active users or nothing.
And weekly active users, we don't like either.
Now OpenAI is doing weekly active users.
And Sam Altman was a social media startup founder.
He knows this. It's a bullshit number.
You look at survey data — and I did this slide in the last presentation I did, of, like, five different surveys from the US from late last year and earlier this year.
And it's all roughly the same.
It's like something around 10% of people, give or take three or four percent depending on the survey,
say they're using this every day.
Another sort of 15 to 20% of people say they're using it every week.
So say you've got, like, 10% of people using it every day, 15 or 20% of
people using it every week, another 20 or 30% of people who say, I use it every month or two,
and another 20 or 30% of people who say, yeah, I had a look.
I didn't get it.
And then you have this survey where people say 70% of people are using AI.
And like, wait, what do you mean?
There's a whole other rabbit hole, which is, you know, people say, well, did you use Snapchat's face filters?
Then you're using AI.
What do we mean by AI?
So — let's be specific.
Let's talk about: are you using a consumer-facing LLM chatbot?
Like, you're going to ChatGPT or Claude and asking questions.
Like, that's the number we want to look at.
And to me, there's a bunch of — you could make matrices.
So some of this is: it's early.
There's a counterpoint here, where people do the chart and say, oh my God, it's so fast.
It's like, it's faster than smartphones.
Yes, because you didn't need to buy a $1,000
smartphone. Right. It's faster than PCs — yes, because you know what PCs cost in the 80s,
adjusted for inflation? It's like five grand. Yeah. It's free for a lot of them. It's free. It's a
website. You just go there. Of course it's got faster adoption. And there's way more people online
as well. So even the absolute numbers are faster than they were for Facebook 20 years ago, 15 years
ago, because there's way more people online now. Yes. So that's again an example of my
unfair-but-relevant comparison. You're sort of standing on the shoulders of giants. So, of course,
you can get to way more people quicker. But you have to keep asking — well, yes,
But why do so many more people look at this and not get it?
Or even worse, they're not getting it, I can kind of see.
Because people look at everything and don't get it.
Why is it that somebody looks at this and gets it and goes back every week?
But only every week.
Right.
Why is it they can only think of something to do with this once a week?
I worry about those people.
I mean, I'm just thinking if these numbers are accurate, the 10%, the 15, you know,
90% of the people that I spend the most time with are within that 10%.
Well, I'm not.
Interesting.
Tell me more about that.
Well, here, actually, I'll preface this conversation with: my kids don't use Google anymore.
They use it to find phone numbers or local businesses or distances to places.
Everything else, they've basically defaulted to ChatGPT now.
Again — all AI conversations seem to be analogies.
100%.
And it's like nuclear weapons. It's like, no, it's not. The comparison I think is interesting here —
it's not perfect, but it's interesting — is to look at early spreadsheets: software
spreadsheets versus paper spreadsheets. Dan Bricklin — and the other guy, I can't remember
the other guy's name — create VisiCalc in the late 70s. And I think to get an Apple II to run it,
with a screen and everything, cost like 15 grand adjusted for inflation. And you show this to an accountant,
and it's like: you can change the interest rate here, and all the other numbers change. And we see
that now and we're like, yes?
1978, that was a week of work.
Almost literally.
That was like amazing.
Yeah, they would do a week of work in half an hour.
Yeah.
Or less.
And he has all these stories about accountants who would, you know, they would be given a
one month project and they'd get it done in a week and then they'd like go and
play golf for three weeks because they partly because they didn't actually want
to tell the client I needed it in a week because the client would think they hadn't
done it properly.
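The "change the interest rate and all the other numbers change" moment can be sketched in a few lines. This is a generic amortized-loan payment formula, a hypothetical illustration rather than anything from VisiCalc itself:

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortized-loan payment, recomputed entirely from its inputs."""
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

# On paper, changing the rate meant redoing the whole sheet by hand;
# in a spreadsheet, every downstream cell just recomputes.
for rate in (0.05, 0.06, 0.07):
    print(rate, round(monthly_payment(100_000, rate, 360), 2))
```

That instant recomputation across every dependent number is the week-of-work-in-half-an-hour trick the accountants were reacting to.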
So I look at ChatGPT and I think, right, I don't write code.
I have zero use for something that will code for me.
I don't really do brainstorming.
Okay.
I don't do summarization of things.
I don't do the things where it's sort of out of the box, easy and obvious.
And then there's a sort of mental load of, okay, I've got to kind of try and think of what things am I doing that it could do for me.
And that, most people don't think like that.
So there's a sort of — as I said a moment ago, there's, like, a matrix.
A matrix of: who has the kinds of tasks that it's obviously good at?
Who has the kinds of tasks that it's good at,
but not obviously?
Who is good at thinking about new tools for the things that they're doing?
Who isn't?
I'm kind of blown away.
You don't reflexively use AI.
If you were using Salesforce and you had a button that said, draft me an email to
reply to this client, then that gets massive
adoption. Well, that's a feature — which we talked about earlier. That's a different thing.
Yeah. Is it, is it the chatbot as product where you get this blank screen and you kind of
look at it and you scratch your head and you have to think, well, what is it that I would do with
this? And then you have to form new habits around it. Or is it that it's wrapped in product and
UI where somebody else has said it would be really useful for this, wouldn't it? And then you look
at it and go, oh yeah, I could do that. Do you think it's better with qualitative or
quantitative analysis?
So I think it is — presently, and I'm going to give a binary statement — I think today
it has zero value for quantitative analysis.
Oh, interesting.
Because if — well, let me qualify that.
Do the numbers need to be right, or roughly right?
Because what all of these things do is give you something that's roughly right.
And roughly is a spectrum, but it's always roughly.
You don't want pi to be 3.1.
Depends how big the thing
you're measuring is.
You know, this is the line about pi — that, you know, however many digits we have is, like, enough to calculate, you know,
the diameter of the universe or something, but people are still adding more numbers.
So, you know, there's a little bit of Zeno's paradox in here.
You know, like, you get infinitely close.
At a certain point, it doesn't matter.
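The pi point checks out with back-of-envelope arithmetic. The figures below for the observable universe and a hydrogen atom are rough order-of-magnitude assumptions, not precise values:

```python
# Truncating pi after n digits gives a relative error of roughly 10**-n,
# so the absolute error in a circumference is about diameter * 10**-n.
DIAMETER_M = 8.8e26   # observable universe diameter, metres (rough figure)
HYDROGEN_M = 1e-10    # hydrogen atom diameter, metres (rough figure)

def circumference_error(digits: int) -> float:
    """Absolute error in the circumference from truncating pi at `digits`."""
    return DIAMETER_M * 10.0 ** -digits

# Smallest digit count whose error is below one hydrogen atom.
digits_needed = next(n for n in range(1, 100)
                     if circumference_error(n) < HYDROGEN_M)
print(digits_needed)  # around 37 digits of pi are already enough
```

So a few dozen digits already pin down the circumference of the universe to sub-atomic precision, which is why adding more is about the mathematics, not the measurement.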
And this is at a high level, this is some of the AGI argument,
that if the thing gets infinitely close to reasoning without ever actually reasoning,
does it matter?
Like, at a certain point — if the thing is always right,
if the thing is only wrong once in a billion years,
does it matter that it's not always right?
The problem today is, it's not wrong once in a billion years.
It's wrong a dozen times a page.
You don't want to spit that out and give it to somebody.
And I don't know.
Yeah, yeah.
So I had a very early example of this.
I was going to speak at an event at the beginning of 2023,
and the conference people had asked me for a long biography of myself,
and I still don't have one.
And so they'd made one — they'd used ChatGPT and not told me —
and they just sent it to me to check,
and I looked at it and I said,
what the fuck is this bullshit?
That was early 2023.
That's, like, generations ago.
That's not relevant to the point I'm going to make.
The point I'm going to make is: A, it was always the right kind of biography.
It was the right kind of degree, the right kind of university,
my kind of experience, the right kind of jobs.
It just wasn't actually the right things.
But B, I could take that and fix it.
So for them, it was useless.
For me, it was very useful.
I just had to spend 30 seconds fixing it instead of spending an hour scratching my head.
Which is why I say right or wrong "depends" —
which is a very kind of French-philosopher kind of answer:
does the output have errors?
It kind of depends on why you wanted it.
Okay, well, that...
I don't have use cases where I want something that's roughly right.
I don't have use cases where I want a list of ten ideas,
or I want it to brainstorm, or I want it to draft me an email,
or I want it to write code, or I want it to generate some images.
You know, I have a friend who works at a consultancy, and they want pencil sketches of concepts,
and now they can just use Midjourney to make those.
That's great.
Does that sketch, like, does that person at the back have three legs?
Not anymore, no.
And if they did, it wouldn't matter.
You could Photoshop that out.
I don't do that.
I don't create images.
So I don't have a good mapping of the stuff this is good at early against the stuff that I do.
And the stuff that it maybe would be useful for
is the stuff where it's actually not yet very good.
And the things where you would mitigate that by saying, well, I would fix it — I don't do those things.
Okay, so this is a good thing because I wanted to come back to something you said.
You said you think by writing and in a world where you're taking something generated by AI and editing it, that's different than writing.
Talk to me about thinking by writing.
Actually, my ChatGPT use case, which is more a mental model than a practical thing, is: I write something, and — the question I would ask in the past, and this is kind of your point about pattern recognition — I look at something and say, am I adding value here? Am I saying something useful? Am I saying something different? Am I asking the key question? Am I pushing further? Am I asking the next question rather than just answering the obvious questions? Now I can just say: is this what ChatGPT would have said? And if the answer is,
this is what ChatGPT would have said, then I don't publish it.
Not because people can get it from ChatGPT,
but because anyone would have said that.
That's a perfect analysis in the sense that it raises the baseline
of what qualifies as insight.
The difference is the slope of the insights.
And so you wouldn't say it if ChatGPT is going to say it —
and push back on this by all means —
but the ChatGPT level of insight, to use an example —
it could be Claude, it could be Grok,
it could be any of them — is increasing at a faster pace than most people's.
And eventually those slopes intersect — or they've intersected already with, you know,
maybe up to intern level.
And next year it might be master's level, or it might even far surpass that.
And it has in some domains, in terms of math.
The year after, it might.
And so maybe it's, like, five years before it passes Benedict,
and maybe it's four years before it passes somebody else,
and maybe it's, like, passed me a long time ago.
So I think there's a,
there's two or three directions we could take that.
One of them is —
and it's an interesting theoretical, philosophical question —
originality.
Which is to say, AlphaGo could do original moves,
because it could do all the moves,
and do moves that no one had done before,
not knowing what people had done before —
but it had an external scoring system.
It knew that that move was good.
Because it had feedback.
Yeah, it had a feedback loop, because every move has a score.
You can evaluate the score of every move.
The classic parable of the monkeys and typewriters,
or the Borges infinite library, is there's no feedback loop.
Yes, the Borges infinite library contains new masterpieces
generated at random —
well, the monkeys with typewriters would generate new masterpieces —
but there's no feedback loop, so there's no way of knowing.
You'll see this with music now.
You can generate new music.
It could generate new stuff, but you wouldn't know.
For an LLM, variance is bad.
Originality is a lower score.
So what's the feedback loop for original-but-good?
Now, it might be that that's the same sort of false question as saying,
is it really reasoning, or is it just right
99-point-nine-with-many-nines percent of the time?
Does it actually understand or is it just always right without understanding?
Does it actually know that's original and different or does it not?
And that's kind of puzzling — I don't think we know the answer to that, and it may be the wrong question, but it's a puzzle as to how far these things can make things that are both different from the training data and good.
And knowing that this is different but good — is it really different, or is it just matching the pattern on a longer frequency?
You see what I mean.
And how much could you actually have predicted that, given enough data — that it's not actually
outside the pattern, it just kind of looks like it is if you're zoomed in more. And if you zoom out
more, then it's still matching the pattern. It's like,
you know, thinking about music: how would you know that people would like punk? You could
very easily imagine a generative AI system that can make you more stuff that
sounds like Yes, or more stuff that sounds like Pink Floyd. It might not sound like good
Pink Floyd, but you could imagine it would make more stuff that sounds like the
Grateful Dead. You know what Grateful Dead fans say when they run out of drugs?
This music's terrible.
You can imagine, I'm being unkind,
but you can imagine: the challenge of knowing that now people are really fed up with 70s prog rock,
and they would really like something else,
and that something else would be punk, and that would work —
it's like knowing that people in the 40s were really fed up with the war
and would want luxury,
and that Christian Dior's New Look would work and would express that.
Could an LLM do that thing?
How much variance do you need?
I don't know.
It's an interesting, like, thought experiment
to ask that question.
There's a completely different place to take this,
which is to say this is an appeal
for boutiques and in-person events
and the unique and the curated and the individual.
There's a shop I always used to talk about —
I'm not sure if it's actually still there —
a shop in Tokyo, in Ginza,
that just sells one book,
and they change what it is once a month.
It may have closed 10 years ago;
I've been talking about it for 20 years.
But the point is, you don't go into the shop
and have to work out which book to buy —
but you have to know the shop exists.
Or you can be Amazon,
and you've got 500 million SKUs,
and you know they've got everything —
well, actually there's some stuff they don't have,
because brands want to be individual;
they don't have LVMH.
But for the sake of argument,
Amazon has everything.
But you can't go to Amazon and say,
what's a good book?
Or, you know, what's a good lamp?
They have all the lamps.
You can't just go to it and say,
what lamp should I buy?
Right.
All of retailing and merchandising
and advertising is about where you are on that spectrum,
and where you spend the money —
on rent or advertising or shipping or what — and how does that work. And the more there's a sort of
polarization between: well, if I know I want the thing, I can get it within 12 hours, but how do I
know I want the thing? And, as I alluded to earlier, somewhat paradoxically, an LLM could
suggest you the unique individual thing. Would the LLM also create the unique individual thing?
You know, that's a tall order.
And second, that's a different question.
It's a question further down the pipe.
But the more that the LLM can do what everybody would probably do,
or say what everyone would probably say,
the more you push to other places.
That makes a lot of sense.
I mean, there's always going to be a market for insight,
whether it comes from LLMs or people.
You have to be providing insight.
You know, our world as quote-unquote content creators
is a very wide spectrum of people who do very different stuff.
And there's, you know, there's people doing AI slop, and there's people doing, you know — what is it called? —
the passive-income thing. But there's people who do very different kinds of content, coming from different places, for different reasons.
But there's people who do very different kinds of content coming from different places for different reasons.
You know, Scott Galloway does very different kind of stuff to me.
Mary Meeker does very different kind of stuff to me.
You do very different kind of stuff to me.
It's just part of that is about who you are and your story and the authenticity of it.
And some of it is about no one cares who you are, but you're saying interesting stuff.
And some of it's a recommendation algorithm or something else.
There's a book by Zola about the creation of department stores called Au Bonheur des Dames, which means the happiness of women.
And it's basically about a 19th-century Jeff Bezos conjuring a department store into existence out of thin air through force of will.
And, like, he invents fixed prices, so that you can have discounts and loss leaders, and mail order and advertising.
And, you know, he puts the slow-moving expensive stuff at the top of the store, and he puts food and makeup at the bottom of the store —
there's nothing new under the sun —
and meanwhile the shopkeepers on the other side
of the street are saying, like, have you seen what that
maniac's doing now? He's selling hats
and gloves in the same shop. He's got no morals.
He'll be selling fish next. And of course he's
got this counter — like, the whole
plot point is about loss leaders.
So again, you have to step back
and think: well, people have freaked out about
industrialised, mass-produced product before.
People have freaked out about there being too much content —
there's a line that Erasmus was the last
person to have read every book.
There's too much AI content
slop on the internet now?
Like, yeah, how many books do you think were being published in 1980?
Do you think everyone was reading all the books then?
Yeah, same thing, just different scales, I guess.
What advice would you give students today?
Well, when I was a student, we were all supposed to be learning Japanese.
I think that was just the tail end of that.
You know, I was sort of lucky to have a sort of very expensive and old-fashioned
and handcrafted education that was all about learning how to learn and learning how to think.
I think there are skills that people used to sneer at
that probably shouldn't have been sneered at,
and certainly shouldn't be now.
I mean, I'm old enough to remember
when people would just sort of smugly say,
well, I'm not computer literate —
as though that was like being a car mechanic or something:
I don't know how to do that.
That's not my problem.
That's somebody else's problem.
And I don't think anyone now,
partly this is because of mobile,
I don't think anyone thinks like that anymore.
Should you learn to code?
No — I think you should find out if you want to learn how to code.
I think this is like saying, should you learn an instrument,
or should you, you know, go take theatre classes?
That may or may not be what you should be doing.
Of course, what does "learn to code" mean in 10 years' time?
That's a different question.
But I don't think you should presume you will or won't be a software engineer.
I think you should presume that you will need to be curious
and that you'll have many careers and different kinds of jobs.
I think you should be focusing on learning how to think.
But I think you should be presuming that everything will change.
Everybody says something like "learning how to think."
I feel like you would have a really good answer here.
What does that mean?
Like, break that down for me, because it probably means different things to different people.
So every now and then I'm slightly perplexed to get an email asking for career advice,
because I think if you looked at my LinkedIn, it's, like, company shut down,
company shut down, lasted a year there.
That didn't work.
Coming from the UK and seeing the US system,
I never really liked the US idea that, like,
if you want a good job,
you should be doing math and business and engineering.
Now, that may be how people hire students here,
but I never liked the idea that studying philosophy
or studying history or studying literature is useless
because you're "just learning about history."
That's not what I learned.
Yes, you know, I throw off lots of analogies about history,
none of which are actually things I studied at university.
What I learned studying history at Cambridge was how to ask what the next question is, how to break this apart, how to read 100 books or 50 books in a week and find the bits that you need, how to synthesize lots of information, how to ask, well, what does that actually mean as opposed to what it looks like it means, do you believe this? Is this credible or should you just jettison that idea? How do you put this together and think about how you would explain something? And that's what my friends who studied English did, or my friends who studied,
studied philosophy did or my friends who studied engineering did that was what you were being taught
how to do you wouldn't be weren't being taught to be an english to be a historian or to and i'd hesitate
to think that you know you can only build a company if you've had it or only work for goldman's
or mackenzie or a big law firm if you had a particular kind of education a particular kind of
degree, I think he should be looking for what's going to challenge you and push you and give you
the ability to learn and think in different ways. But again, this is me. You know, what are the
skills that you have? How does your brain work? How do you think about things? And it took me,
what, 20 years to work out what I was good at. So I'm not sure that you can know that as a student.
So you have to try and find what you're good at as well as, you know, learning to think. Maybe
learning the thing is what I do. Maybe that's not what you should be doing. You should be learning.
What is it that you should be learning to do? What are the things that you're good at? Try all the
different things. I don't know. It sounds like in a university commencement speech now. I don't
fucking know. But you don't know what you're going to be good at. So you kind of want to try and
like create options for yourself. What did you learn about investing working at a16z?
So there's a bunch of, like, maxims or sayings — I wouldn't want to dignify them
as, like, theses or anything else.
But there's a whole bunch of maxims and sayings in venture,
which, you know, we could have a podcast talking about,
but there were better people to give you a podcast
talking about the mechanics of venture.
But like you're understanding what startups are
and how they work and how the machine works.
And startups are an industry.
And Silicon Valley is like a machine for creating startups.
And still too many people kind of look and say,
well, that was a dumb idea.
And it's like, well, that's the wrong question.
The question is if you look at a startup and you think,
could it work?
And if it did work, what would it be?
And could those people make it work?
And then you understand more of, like, the mechanics of, well,
how do social media work and how do people build companies
and what is it like to create a startup, which is a whole other conversation.
I think something else that I learned was calibration.
This is sort of, again, another metaphor I always think of,
which is that if you go to a really great art gallery, like you go to the MoMA or, you know, the Met or the Louvre or something, everything there is a masterpiece. If you go to a smaller, weirder art gallery, like there's a gallery in London called the Wallace Collection, or I was in Rome a couple of weeks ago and went to one of the old aristocratic palaces, and these palaces are like 10 or 15 rooms of pictures, and they've got a quite good Tintoretto and maybe a Titian and a Raphael. You see it glowing across the room, and you're like, oh, that's why he's famous.
And it's the same when you see lots and lots of startups.
Like, oh, that's why this one's special. Oh, no, that's why this is bollocks. Like, you get 10 minutes in and you're like, God, I've got another 45 minutes where I've got to pretend to be interested and polite so the founder has a good experience.
It's seeing that contrast and texture and seeing what good looks like,
seeing what works, what doesn't work, what people tend to say, how things tend to work.
It's pattern recognition as much as anything else.
You also get, there's a whole, all sorts of other kind of cultural contexts you get around.
You know, Silicon Valley can be very high school. It's an industry town, and I always used to say it was like being in a college town where there's one subject.
So everybody you meet is doing the same thing.
And so in some ways, that's very powerful.
You know, everybody around you is doing a PhD.
You want to do a PhD?
Everyone around is doing a PhD.
Here's the world expert on the subject. Of course you are. It's like being a middle-class kid: of course you're going to university. What do you mean you're not going? Everyone's going to university. Of course you're going to do great work. Of course you're going to start a company. And you're surrounded by the people who've done it. You want to get a CTO who's done it five times? You want a head of sales who's done it five times? They're all there. The other side of that is you'll never
meet anybody who isn't working on exactly the same stuff and isn't interested in what you're
working on. So you have no external context. You have no external perspective. The nearest theater is in L.A., or Chicago, I think. You want to go to an art gallery, you've got to go to L.A. Who's the best positioned right now? So from the outside looking in, you know, Zuck is going all in on AI.
Elon seems to be going all in on AI.
Which of the leading companies, do you think? Like, A, why are they all of a sudden shifting? They were dabbling in it before, but now they're really committing, you know, tens and hundreds of billions of dollars.
And then who's the best positioned in this sort of space?
If you had to pick one and invest your entire net worth in it, who would it be?
Well, that's several different questions.
Let's pick them apart.
This was a technology that had been kind of floating around before ChatGPT, and everyone kind of saw that it didn't work very well. And then ChatGPT, GPT-3.5, works. Now, actually, it works well enough.
And then there's this explosion of interest since then.
And so I think last year Google, Microsoft, AWS, and Meta spent about $220 billion of capex, and they'll probably spend something over $300 billion this year. And so that's basically more than doubled, almost tripled, I think,
from a couple of years ago.
So there's enormous surge in CAPEX in investment in this.
And we've got these stories about, like, well, Meta bought 49% of Scale AI for $15 billion. Apparently they looked at both of the recent OpenAI spin-outs, Safe Superintelligence and, what's the other one, Thinking Machines, which are both basically pre-product, pre-revenue labs with somebody from OpenAI, at valuations of multiple tens of billions. And apparently, like, Sam Altman complained that Mark is offering people $100 million to join. So Mark is in beast mode.
Microsoft has this kind of weird situation in that its own models aren't actually very good, but it's got this very kind of weird relationship with OpenAI.
OpenAI: Sam Altman is, I was going to say, a polarising figure, but actually opinions about him tend to be fairly unanimous and tend to be fairly negative. Like, everybody he's ever worked with has quit.
OpenAI itself still
kind of sets the agenda
but much less so
than it was two years ago
and I wouldn't want to do like a detailed
like calling the scores on whose models are good and whose labs are good.
But, you know, objectively, Google is clearly like firing on all cylinders now
and is making great models.
Llama seems to have faltered; Llama 4 seems to have been an embarrassment. And so Meta is kind of scrambling to catch up.
Apple is a slightly different position in that they have always sort of taken the position
that they don't want to be first, they want to do it right,
and that they don't need to be doing whatever the latest consumer internet thing.
It's like they don't have a YouTube.
I think Craig Federighi said in an interview after WWDC, like, we don't have a YouTube, we don't have a car-sharing service, we don't do grocery delivery, and we also don't have a chatbot. OK, that wasn't quite the question.
The question for Apple is how much would integrating an LLM into the operating system change the experience of what it is, potentially shifting the competitive balance with Pixel, which at the moment basically only gets bought by people who work for Google and people who write about tech. Like, literally no one else buys Pixels, partly because Google doesn't want to compete with Samsung. That's a whole smartphone-industry rabbit hole. There's this sort of question for Apple around: does this actually change the experience of what a smartphone is, what the ecosystem is? Does it end up kind of getting Microsofted,
in the sense that you're going to still
for the time being you're still going to buy a smartphone
it's not at all apparent there's going to be another device
and if there is it's a long way away and it might be an Apple device as well
but you're still going to buy a smartphone
we're still going to buy the nice one
with a good battery and the fast chip
to run the AI models and the good screen
and the best camera, which will still be an iPhone
because Apple still has the best chip team
and a whole bunch of other hardware advantages.
But everything you do on it will be from someone else,
and it won't be someone else in the sense
that it's an app from the app store,
it will be someone else in the sense
that it's a model running in the cloud,
which is what happened to Microsoft in the 2000s,
which was everyone had to get on the internet.
To get on the internet, you needed a computer.
You weren't going to buy a Linux computer.
You probably wouldn't buy a Mac either.
So you bought a Windows PC, but you were using it to do web stuff, not Microsoft stuff.
So Microsoft kind of lost that.
And so the concern for Apple would be: you'll still buy your new iPhone, and you'll buy the iPhone Air this autumn because it will be thinner and lighter and it will be a lovely phone, and you'll use it to do ChatGPT.
But the counterargument would be to say, yeah, you'll use it to do ChatGPT and DoorDash and Uber and Instagram, and that cool new game, and the other cool new game that's enabled by LLMs, and to do TikTok, and so on and so on,
and it will be kind of the same. So, you see what I mean, there's this sort of slight fuzziness around what the bear case for Apple actually is. Does it sort of end up like Microsoft did? How bad is that, exactly? What does that mean?
If we all end up wearing something like this that's lit up by AI, then that's a bigger shift, but it's very unclear how close that really is, not least because of the optics.
There's another axis here, which is what happens to Google search?
Where does that money go?
How do you map the search activity that goes to an LLM that, and how do you map that against
where the revenue is?
And also from the other side, how do you map that against where the publishers are?
How do you think about whether you just shift your habit and you're actually using ChatGPT as Google? It's basically doing what Google does, but you've shifted the brand, and you're going to that search box instead of the other search box.
I don't think you can count them out from absorbing that.
So who would you pick, out of the public companies, if you had to put your money in one of the top, like, seven or eight?
Well, then there was a valuation question.
And a long time ago, I was a public markets analyst.
And I was bad at being a public markets analyst,
equity analyst for a bunch of reasons,
one of which was I was never interested in share prices.
Well, we've given the standard disclaimer. You're forced to choose. What would you do?
It's hard to see iPhone sales slipping from what we see now.
Even half the cool, sexy Google stuff is in the Google app on the iPhone.
I think meta and Google, there is this sort of big question around where the ad revenue goes
and how much the ad revenue gets pulled away to different places.
I think Instagram is probably in a very good place there in terms of changing what advertising looks like and how that works.
I mean, I had a slide in my presentation, which was that what Meta and Amazon want to do is to make LLMs commodity infrastructure that's sold at cost.
Yep.
This is why Meta open-sourced its models: they want to make them commodity infrastructure sold at cost, and they differentiate on top with Meta stuff, with Facebook, social, Instagram-y stuff, and they want the model itself to be just infrastructure. Amazon would also like it to be commodity infrastructure sold at cost, because that's what Amazon does: they sell commodity infrastructure at cost, and they do it better than anybody else, and they make a lot of money from doing that.
Go to Amazon's financials, and basically all the money comes from AWS and the ads. People who complain about AWS haven't realized what the ads make. Amazon did $50 billion to $60 billion of ad revenue last year.
So Amazon seems to be fine, but there's a bunch of stuff to navigate around how this changes how people buy stuff on Amazon. Who does that leave?
Microsoft. There's this line from Bismarck, that the great man is somebody who hears God's footsteps through history and grabs onto his coattails as he walks past. And, like, Satya has tried to. First he tried to grab onto VR and AR with the whole HoloLens thing, which we don't talk about anymore. Now it's AI, and their own models are not really ranking. I mean, they hired Mustafa Suleyman, but, like, they're still struggling.
They've got this weird contentious relationship with Sam Altman and Open AI,
and it's basically not their models.
On the other hand, like, they're going to sell an awful lot of Azure to run all this stuff.
Which, again, is this tension. Is it that everybody just uses ChatGPT to do the thing? Or is it that someone is going to come to you with a great accounting product to run Farnam Street, and it runs on Azure, and it uses some LLM, and who cares which one it is. It's just better. You know, it connects to your bank and it does the cool stuff.
You know, my use case for an LLM is do my fucking invoicing for me.
It's not even that.
It's: work out why exactly it is that that client's ERP doesn't like my bank account, and not have me spend the next three months bouncing emails back and forth with somebody in India about getting this one thing done. That would be a great use case. LLMs can't do that yet. If they could, that would be great. But we're not there yet.
So Microsoft and Google
are in this sort of position
of being the incumbent.
Both, you know, how can I put this? Let me give you something more systematic. Again, I'm sort of thinking my way through to the answer to your question.
For Google and Microsoft,
they have an incumbent business
that is potentially disrupted
pretty profoundly by this,
but they also have a cloud business
that sells all the new stuff
for this.
Amazon has an incumbent business
that doesn't get disrupted by this,
at least much less obviously,
and a cloud business
that will be very happy
selling all of this stuff.
Meta doesn't have a cloud business
selling this stuff
and has a bunch of new ways
to make money from all of this new stuff
except they've got to have some better models.
Apple, is this a competitive threat
to the iOS ecosystem?
A lot of stuff would have to happen first.
And they'd have to drop a lot more balls
before that was to happen
and meanwhile they're still going to sell you
the nicest glowing rectangle
to do all of this stuff
And then we're talking about glasses and VR, which is a whole other two-hour conversation about when that happens.
there's still people poking around
in crypto like Web 3
I remember somebody said that people still working on crypto are like those Japanese soldiers on islands in the Pacific who don't know the war's over. But there's still people working on crypto.
so that's like another disruptive thing
coming down the pipe
Who are the other incumbents? Netflix is a TV company. Tesla? It was a car company. Elon Musk, there's a whole Tesla conversation that fascinates me, because Tesla bulls think it's a software company and Tesla bears think it's a car company.
And at the moment
it's a car company.
I mean, yes, they launched autonomous driving, but what did they launch? They launched half a dozen existing-model cars with test drivers doing a geo-fenced drive that everyone else was doing 10 years ago.
Is that going to scale? Are they finally going to get the flywheel of having all the camera data, meaning it'll work with just cameras? There's a conversation you could have had 10 years ago, in fact I wrote stuff 10 years ago: are there winner-takes-all effects in autonomous cars? Will Tesla get it working with cameras before everybody else gets it working with LIDAR? Well, Waymo's got it working with 50 grand of LIDAR, or whatever that stack costs; it's tens of thousands of dollars of extra stuff on the car.
so they've got it working with all of that stuff
Tesla does not have it working with cameras.
Will Tesla get it working with cameras
before Waymo can get rid of the LIDAR?
We could have had that conversation.
I literally was on podcasts
seven, eight years ago
having those conversations.
We don't know the answer.
Maybe.
We don't know.
The interesting Tesla point is people always looked at it and said
it's the iPhone of cars.
No, it's not.
What's happening is that cars are becoming Android
with no iPhone.
And Tesla is just selling, in that metaphor, Tesla is just another Android phone maker.
And they're competing with the whole of Chinese industrial policy. In the U.S. they're protected by tariffs. Everywhere else, it's very clear what's happening: it's just a flood of EVs that are just as good as Tesla's.
We always end on the same question, which is, what is success for you?
We live in the luckiest time.
You know, we are not worried about rockets landing on our heads.
We're not worried about our children dying from diseases.
we're not worried that the bank might be closed tomorrow
and all of your money's gone.
We're doing something interesting that we enjoy
and that pays the rent that we want to be able to pay.
I get paid to fly around the world and give slides for money.
So I think I'm doing okay.
I could always be doing more.
But I'm always looking for the next question.
Like, I'm always trying to be curious.
This was a great conversation.
Thanks for taking the time today.
Thank you.
Thanks for listening and learning with us. Be sure to sign up for my free weekly newsletter at fs.blog/newsletter. The Farnam Street website is also where you can get more info on our membership program, which includes access to episode transcripts, my repository, ad-free episodes, and more. Follow myself and Farnam Street on X, Instagram, and LinkedIn to stay in the loop.
If you like what we're doing here, leaving a rating and review would mean the world.
And if you really like us, sharing with a friend is the best way to grow this community.
Until next time.