a16z Podcast - Marc Andreessen's 2026 Outlook: AI Timelines, US vs. China, and The Price of AI
Episode Date: January 7, 2026

a16z co-founder and General Partner Marc Andreessen joins an AMA-style conversation to explain why AI is the largest technology shift he has experienced, how the cost of intelligence is collapsing, and why the market still feels early despite rapid adoption. The discussion covers how falling model costs and fast capability gains are reshaping pricing, distribution, and competition across the AI stack, why usage-based and value-based pricing are becoming standard, and how startups and incumbents are navigating big versus small models and open versus closed systems. Marc also addresses China's progress, regulatory fragmentation, lessons from Europe, and why venture portfolios are designed to back multiple, conflicting outcomes at once.

Resources:
Follow Marc Andreessen on X: https://twitter.com/pmarca
Follow Jen Kha on X: https://twitter.com/jkhamehl

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
This new wave of AI companies is growing revenue, like just actual customer revenue, actual demand, translated through to dollars showing up in bank accounts.
At like an absolutely unprecedented takeoff rate.
We're seeing companies grow much faster.
I'm very skeptical that the form and shape of the products that people are using today is what they're going to be using in five or ten years.
I think things are going to get much more sophisticated from here.
And so I think we probably have a long way to go.
These are trillion-dollar questions, not answers.
But once somebody proves that it's capable, it seems to not be that hard for other people to be able to catch up, even people with far less resources.
When a company is confronted with fundamentally open strategic or economic questions, it's often a big
problem. Companies need to answer these questions, and if they get the answers wrong, they're really in
trouble. In venture, we can bet on multiple strategies at the same time. We are aggressively investing
behind every strategy that we've identified that we think has a plausible chance of working.
If you want to understand people, there's basically two ways to understand what people are doing
and thinking. One is to ask them, and then the other is to watch them. And what you often see
in many areas of human activity, including politics and many different aspects of society,
the answers that you get when you ask people are very different than the answers that you get when you watch them.
If you run a survey or a poll of what, for example, American voters think about AI,
it's just like they're all in a total panic.
It's like, oh, my God, this is terrible, this is awful.
It's going to kill all the jobs, it's going to ruin everything.
If you watch the revealed preferences, they're all using AI.
AI is moving faster than any technology wave before it,
and the rules are being written in real time.
For decades, new platforms followed a familiar arc.
Build infrastructure, attract developers, capture the value.
AI is breaking that pattern.
Models are improving weekly, costs are collapsing,
and entire markets are being rebuilt before incumbents can react.
What looks stable today may not exist a year from now.
No one has seen more technology cycles up close than Marc Andreessen.
From the early internet to mobile, cloud, and now AI,
he's watched multiple eras reset the economy,
and he believes this one is larger than all the rest.
In this broad AMA, Marc joins the conversation to unpack
why AI still feels early despite the hype,
how model economics are reshaping software,
and why usage-based pricing and open competition
are accelerating adoption at unprecedented speed.
He also dives into the hard questions:
big versus small models, open versus closed ecosystems,
and the role of startups versus incumbents,
and how China and geopolitics factor into the future of AI.
Marc explains why this moment feels different from past cycles,
why venture portfolios are uniquely positioned
to bet across conflicting futures,
and why new opportunities may emerge
where technology becomes cheap, abundant, and embedded everywhere.
We hope you enjoy.
A lot of folks sent questions ahead of time,
and what I've done is kind of curated them into a few different sections
for an AMA this morning with Marc.
So what we thought we'd do is cover four big topics.
So AI and what's happening in the markets, policy and regulation,
all things a16z, and then we've got a fun catch-all,
which we're calling a sandbox of things, if we get to it.
So starting first, maybe with the biggest question.
We're sitting in the middle of the AI revolution, Marc.
What inning do you think we're in?
And what are you most excited about?
First of all, I would say this is the biggest technological revolution of my life.
And hopefully I'll see more like this in the next whatever 30 years.
But this is the big one.
And just in terms of order of magnitude, like this is clearly bigger than the internet.
The comps on this are things like the microprocessor and the steam engine and electricity, the wheel.
So this is a really big one.
The reason this is so big, I mean, maybe obvious to folks at this point, but I'll just go through it quickly.
So if you kind of trace all the way back to the 1930s,
there's a great book called Rise of the Machines
that kind of goes through this.
If you trace all the way back to the 1930s,
there was actually a debate among the people
who actually invented the computer.
They kind of understood the theory of computation
before they actually built the things.
And they had this big debate over whether the computer
should be basically built in the image
of what at the time were called adding machines
or calculating machines,
which we'd think of as essentially cash registers.
IBM is actually the successor company
to the National Cash Register Company of America.
And that was, of course, the path that the industry took,
which was building these kind of hyper-literal mathematical machines
that could execute mathematical operations billions of times per second,
but of course had no ability to kind of deal with human beings
the way humans like to be dealt with.
And so, you know, couldn't understand human speech,
human language, and so forth.
And that's the computer industry that got built over the last 80 years.
And that's the computer industry that built all the wealth
and financial returns in the computer industry over the last 80 years,
you know, across all the generations of computers
from mainframes through to smartphones.
But they knew at the time, they knew in the 30s,
actually, they understood the basic structure of the human brain,
and they understood, they had a theory of human cognition.
And actually, they had the theory of neural networks.
So they had this theory. And actually the first neural network paper,
the academic paper, was published in 1943,
which was over 80 years ago, which is extremely amazing.
You can watch the interview on YouTube with these two authors,
McCulloch and Pitts.
And you can watch an interview, I think, with McCulloch on YouTube
from, I don't know, 1946 or something, he was like on TV in the ancient past.
And it's an amazing interview because it's like him at his beach house,
and for some reason he's not wearing a shirt.
And he's talking about this future in which computers are going to be built
on the model of the human brain,
through neural networks. And that was the path not taken. And basically what happened was the
computer industry got built in the image of the adding machine. And the neural network basically
didn't happen. But the neural network as an idea continued to be explored in academia and
sort of advanced research by sort of a rump movement that was originally called cybernetics and
then became known as artificial intelligence, basically for the last 80 years. And essentially
it didn't work. Like essentially, it was basically decade after decade after decade of excessive
optimism followed by disappointment. When I was in college in the 80s, there had been a
famous kind of AI boom bust cycle in the 80s in Venture in Silicon Valley.
I mean, it was tiny by modern standards, but at the time was a big deal.
And by the time I got to college in 89 and computer science departments, AI was kind of a
backwater field and everybody kind of assumed that it was never going to happen.
But the scientists kept working on it to their credit.
I mean, they built up this kind of enormous reservoir of concepts and ideas.
And then basically, we all saw what happened with the ChatGPT moment.
All of a sudden, it sort of crystallizes.
It's like, oh, my God, right?
It turns out it works.
And so that's the moment we're in now.
And then really significantly, that was less than three years ago, right?
That was the Christmas of 22.
And so we're sort of three years in to basically what is,
effectively an 80-year revolution of actually being able to deliver on all the promise
that the people on the alternate path,
the sort of human cognition model path,
kind of saw from the very beginning.
And then the great news with this technology is it's already kind of ultra-democratized.
You know, the best AI in the world is available on ChatGPT or Grok or Gemini
or these other products that you can just use.
And you can just kind of see how they work.
And same thing for video, you can see with Sora and Veo kind of the state of the art.
Same thing with music, you
can see Suno and Udio and so forth. And so we're basically seeing that happen. And now Silicon Valley
is responding with this just like incredible rush of enthusiasm. And really critically, this gets to
the magic of Silicon Valley, which is Silicon Valley long since has ceased to be a place where people
make silicon, which long ago moved out of California and then ultimately out of the U.S.,
although we're trying to bring it back now. But the great kind of virtue of Silicon Valley
over the last 80 years of its existence is its ability to kind of recycle talent from previous waves
of technology into new waves of technology, and then inspire an entire new generation of talent
to basically come join the project. And so Silicon Valley has this recurring pattern of being
able to reallocate capital and talent and build enthusiasm and build critical mass and build
funding support and build human capital and build everything, enthusiasm for each new wave
of technology. And so that's what's happening with AI. I think probably the biggest thing
I could just say is I'm surprised, I think essentially on a daily basis, by what I'm seeing,
and we're in the fortunate position to kind of get to see it from two angles. One is we track
the underlying science and kind of research work very carefully. And so I would say like every day
I see a new research paper that just like completely floors me of some new capability or some new
discovery or some new development that I would have never anticipated that I'm just like, wow,
I can't believe this is happening. And then on the other side, of course, we see the flow of all of the
new products and all the new startups. And I would say we're routinely kind of seeing things that
again kind of have my jaw on the floor. And so, you know, it feels like this giant vista.
I do think it's going to kind of come in fits and starts. These things are messy
processes. This is an industry that kind of routinely gets out over its skis and overpromises.
And so there will certainly be points where it's like, wow, this isn't working as well
as people thought or wow, this turns out to be too expensive and the economics don't work
or whatever. But against that, I would just say the capabilities are truly magical.
And by the way, I think that's the experience that consumers are having when they use it.
And I think that's the experience that businesses are having for the most part when they're
working on their pilots and looking at adoption. And then it translates to the end of like
numbers. I mean, we're just seeing this new wave of AI companies is growing revenue,
just like actual customer revenue, actual demand translated through to dollars
showing up in bank accounts, you know,
at like an absolutely unprecedented takeoff rate,
we're seeing companies grow much faster.
The key leading AI companies
and the companies that have real breakthroughs
and have very compelling products
are growing revenues, you know,
kind of faster than any wave
I've certainly ever seen before.
And so, like, just from all that,
it kind of feels like it has to be early.
Like, it's kind of hard to imagine
that we've, like, we've topped out in any way.
It feels like everything is still developing.
I mean, quite frankly, it feels like the products,
to me, it feels like the products are still super early.
Like, I'm very skeptical that the form and shape of the products that people are using today
is what they're going to be using in five or ten years.
I think things are going to get much more sophisticated from here.
And so I think we probably have a long way to go.
Maybe on that topic.
So one of the big knocks is, yes, the revenue is immense,
but the expenses seem to also be keeping pace.
So, like, what are people missing as a part of that discussion and topic?
Yeah, so I'll start with just like core business models, right?
And so you're right, there's basically, this industry basically has two core business models,
consumer business model and the quote-unquote enterprise
or infrastructure business model.
You know, look, on the consumer side,
we just live in a very interesting world now
where the internet exists and is fully deployed, right?
And so I give you an example.
Sometimes people ask, it's like,
is AI like the internet revolution?
It's like, well, a little bit,
but like the thing with the internet was we had to build the internet.
Like we had to actually build the network
and we actually had to do that, you know,
and ultimately it involved an enormous amount of fiber in the ground
and it involved enormous numbers of like mobile cell towers
and an enormous number of, you know,
shipments of smartphones and tablets and laptops in order to get people on the internet.
Like there was this like just like incredible physical lift, you know, to do that.
And by the way, people forget how long that took, right?
The, you know, the internet itself is an invention of the 1960s, 1970s.
The consumer internet, you know, was a new phenomenon in the early 90s.
But, you know, we didn't really get broadband to the home until the 2000s.
You know, that really didn't start rolling out actually until after the dot-com crash,
which is fairly amazing.
And then we didn't get mobile broadband until like 2010.
And people actually forget: the original iPhone dropped in 2007, and it didn't have broadband. It was on a
narrowband 2G network. It did not have high speed, like it did not have anything resembling
high-speed data. And so it wasn't really until, you know, really about 15 years ago that we even
had mobile broadband. So the internet was this massive lift, but the internet got built, right,
and smartphones proliferated. And so the point is, now you have five billion people on planet
Earth that are on some version of, you know, mobile broadband internet, right?
And, you know, smartphones all over the world are selling for, you know, as little as like 10 bucks.
And you have these, you know, amazing projects like Jio in India that are bringing, you know,
the sort of, you know, kind of the remaining population of planet Earth that hasn't been online until now, online.
So, you know, so we're talking five billion, six billion, you know, people.
And then the consumer, the reason I go through that is the consumer AI products could basically deploy to all of those people
basically as quickly as they want to adopt, right?
And so the internet's the carrier wave for AI to be able to proliferate at kind of light speed into the broad base of the global population.
And that's a, let's just say, that's a potential rate of proliferation of a new technology that's just far faster than has ever been possible before.
Like, you know, like you couldn't download electricity, right?
You couldn't download, you know, you couldn't download indoor plumbing.
You know, you couldn't download television, but you can download AI.
And this is what we're seeing, which is the AI consumer, you know, the AI consumer killer applications
are growing at an incredible rate.
And then they're monetizing really well.
And again, I mentioned this already,
but generally speaking, the monetization is very good.
By the way, including at higher price points.
One of the things I like about watching the AI wave is the AI companies,
I think, are more creative on pricing than the SaaS companies
or the consumer internet companies were.
So it's, for example, now becoming routine to have $200 or $300 per month tiers
for consumer AI, which I think is very positive because I think,
I think a lot of companies cap their kind of opportunity by capping their pricing kind of too
low. And I think the AI companies are more willing to push that, which I think is good.
So anyway, so that, you know, I think that's reason for like, I would say, you know, considerable
rational optimism for the scope of consumer revenues that we're going to be talking about here.
And then on the enterprise side, you know, there the question is basically just, you know,
what is intelligence worth, right? And, you know, if you have the ability to, like, inject more
intelligence in your business and you have the ability to do, you know, even the most prosaic
things like raise your customer service scores, you know, increase upsells, you know, or reduce
churn or if you have the ability to, you know, run marketing campaigns more effectively, you know,
all of which AI is directly relevant to, like, you know, these are like direct business payoffs,
you know, that people are seeing already. And then if you have the opportunity to infuse AI into
new products and all of a sudden, you know, all of a sudden your car talks to you and everything
in the world kind of lights up and starts to get really smart. You know, what's that
worth? And again, there, you just, you kind of observe it and you're like, wow, the leading
AI infrastructure companies are growing revenues incredibly quickly. You know, the pull is really
tremendous. And so, you know, again, it feels like this, just like incredible, you know,
product market fit. And the core business model, right, is actually quite, it's quite interesting.
The core business model is basically tokens by the drink, right? And so it's sort of tokens of
intelligence, you know, per dollar. And oh, and then, by the way, this is the other fun thing is
if you look at what's happening with the price of AI,
the price of AI is falling much faster than Moore's Law.
And I could go through that in great detail,
but basically, like, all of the inputs into AI,
on a per-unit basis, the costs are collapsing.
And then, as a consequence,
there's kind of this hyper deflation of per unit cost,
and then that is driving, you know, just like,
you know, a more than corresponding level of demand growth,
you know, with elasticity.
And so, you know, even there, we're like,
it feels like we're just at the very beginning
of kind of, you know, figuring out exactly how, you know,
expensive or cheap this stuff is getting.
I mean, look, there's just no question tokens by the drink
are going to get a lot cheaper from here.
That's just going to drive, I think, enormous demand.
And then everything in the cost structure is going to get optimized, right?
And so, you know, when people talk about, like, you know, the chips
or, you know, whatever, you know, kind of the unit input costs
for building AI, you know,
the laws of supply and demand are going to kick in, right?
Because, you know, in any market that has sort of commodity-like
characteristics, you know, the number one cause of a glut is a shortage and the number one
cause of a shortage is a glut, right? And so you have, you know, to the extent you have like
shortage of GPUs or shortage of whatever infrastructure chips or shortage of, you know, whatever
data center space, you know, if you look at just the history of humanity building things
in response to demand, you know, if there's a shortage of something that can be physically
replicated, it does get replicated. And so there's going to be like just enormous build out
of all. I mean, there is. There's just hundreds of billions or, at this point, trillions of
dollars maybe going into the ground in all these things.
And so the per unit cost of the AI companies
are going to drop like a rock over the course of the next decade.
And so, yeah, I mean, the economic questions, of course,
are very real.
And of course, there's microeconomic questions around all these businesses.
But the sort of macro forces here, at least, I think, are very strong.
And, yeah, just given the underlying value of this technology
to both the consumers and the enterprise users,
and given the just like incredibly aggressive discovery that's happening
of all the ways that people can use this in their lives and in their businesses,
like it's just really hard for me to see how it both doesn't grow a lot
and generate just enormous revenue.
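[Editor's note: a back-of-the-envelope sketch, in Python, of the "tokens by the drink" economics described above. Every number, the function name, and the elasticity rule are illustrative assumptions, not figures from the episode; the only point is that when per-unit price falls and demand is elastic enough, total spend can still grow.]

    # Illustrative sketch only: assumed prices, decline rate, and elasticity.
    def project_token_spend(price_per_m_tokens, annual_price_decline,
                            demand_elasticity, tokens_m, years):
        """Print a simple year-by-year projection of price, volume, and spend."""
        for year in range(years + 1):
            spend = price_per_m_tokens * tokens_m
            print(f"year {year}: ${price_per_m_tokens:.2f}/M tokens, "
                  f"{tokens_m:,.0f}M tokens, total spend ${spend:,.0f}")
            # Per-unit price falls by the assumed annual decline...
            price_per_m_tokens *= (1 - annual_price_decline)
            # ...and demand grows by elasticity times the relative price drop
            # (a crude linear approximation, just for illustration).
            tokens_m *= (1 + demand_elasticity * annual_price_decline)

    # Hypothetical inputs: $10 per million tokens, 40% yearly price decline,
    # elasticity of 2, starting from 1,000M tokens per year, over 5 years.
    project_token_spend(10.0, 0.40, 2.0, 1_000.0, 5)

With these made-up inputs, the per-token price drops sharply each year while total spend keeps rising, which is the dynamic being described: hyper-deflation in unit cost alongside growth in overall revenue.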
Yeah, and actually I think it was, what, two or three weeks ago
where AWS was saying, like, the GPUs that they've been using,
they've been able to extend to even like seven-plus years.
So, like, the shelf life also of the GPUs that they're using
is now extending in ways in which they can optimize better
than maybe perhaps the last couple of cycles as well.
Is that the right way to think about it as well?
Yeah, that's right.
And then that's one really important question and observation.
And then, by the way, that also gets to this other kind of question
where there's different theories on it,
which is basically big models versus small models.
And so a lot of the data, a lot of the data center build is oriented around hosting,
training, and serving the big models, you know, for all the obvious reasons.
But there's also the small model revolution is happening at the same time.
And if you just kind of track, you know, you can get these charts from the various research firms,
but if you just kind of track the capability of the leading-edge
models over time, what you find is after six or 12 months, there's a small model that's just
as capable. And so there's this kind of chase function that's happening, which is the capabilities
of the big models are basically being shrunk down and provided at smaller size and then therefore
smaller cost quite quickly. So I'll just give you the most recent example that just hit
over the last two weeks. And again, this is the thing that's just kind of shocking, is there's this
Chinese company that has a, well, I forget the name of the company, but it's the company that
produces the model called Kimi, just spelled KIMI, which is one of the leading open source models
out of China. And the new version of Kimi is a reasoning model that, at least according to the
benchmarks so far, is basically a replication of the reasoning capabilities of GPT-5, right? And the
reasoning models in GPT-5 were a big advance over GPT-4, and of course GPT-5 cost a tremendous amount
of money to develop and to serve. And all of a sudden, you know, here we are, whatever,
six months later, and you have an open source model called Kimi, and I think it's either
shrunk down to be able to run on, it's like, one MacBook or two MacBooks,
right? And so all of a sudden, if you have like an application, you know, if you're a business
and you want to have a reasoning model that's GPT-5 capable, but, you know, you're not
going to pay the whatever GPT-5 cost, or you're not going to want to have it be hosted and you want to
run it locally, you know, you can do that. And again, that's just like another,
you know, it's another breakthrough.
Like, it's just, it's another, another Tuesday, another huge advance.
It's like, oh, my God.
And then, of course, it's like, all right, well, what is OpenAI going to do?
Well, obviously, they're going to go to GPT6, right?
And, you know, right.
And so there's this kind of laddering that's happening where the entire industry is moving
forward.
The big models are getting more capable.
The small models are kind of chasing them.
And then, and then the small models provide, you know, a completely different way to deploy,
you know, at very low price points.
And so, yeah, and, you know, we'll see what happens.
there are some very smart people in the industry who think that ultimately everything only runs
to the big models because obviously the big models are always going to be the smartest.
And so therefore, you're always going to want the most intelligent thing because why would
you ever want something that's not the most intelligent thing for any application?
You know, the counter argument is just there's a huge number of tasks that take place in the economy
and in the world that don't require Einstein, you know, where, you know, a 120-IQ person is great.
You don't need a, you know, 160-IQ, you know, Ph.D.
in, you know, string theory. You just, like, have somebody who's competent and capable and it's great.
And so, you know, I, you know, we've talked about this before.
I tend to think the AI industry is going to be structured a lot like how the computer industry
ended up getting structured, which is you're going to have a small handful of basically
the equivalent of supercomputers, which are these, like, giant, you know, kind of what we call God models
that are, you know, running in these giant data centers.
And then, you know, I'm not, like, convinced on this, but my kind of working assumption
is what happens is then you have this cascade down of smaller models, ultimately all the way down to
the very small models that run in embedded systems, right, that run on individual
chips inside every, you know, physical item in the world, and that, you know, the smartest models
will always be at the top, but the volume of models will actually be the smaller models that
proliferate out. And right, that's what happened with microchips. It's what happened to
computers, which became microchips, and then it's what happened with operating systems and
with a lot of everything else that we built in software. So, you know, I tend to think that's what
will happen. Just quickly on the chip side, again, like chips, you know, if you look at the entire
history of the chip industry, the shortages become gluts. And
you get just, you know, like anytime there's a giant profit pool in a new chip category,
you know, somebody has a lead for a while and kind of gets, you know, let's say the profits
appropriate to what we call robust market share. But in time, what happens, right, is that
draws competition. And of course, you know, that's happening right now. So Nvidia's, you know,
Nvidia's absolutely fantastic company, fully deserves the position that they're in, fully deserves
the profits that they're generating. But they're now so valuable generating so many profits that
it's the bat signal of all time
to the rest of the chip industry
to figure out how to advance
the state of the art in AI chips.
And that's already happening, right?
And so you've got other major companies
like AMD coming at them
and then you've got, really significantly,
you've got the hyperscalers building their own chips.
And so, you know, a bunch of the big,
a bunch of those kind of big tech companies
are building their own chips.
And of course, then the Chinese
are building their own chips as well.
And so it's just, it's like pretty likely
in five years that, you know,
AI chips will be, you know,
cheap and plentiful, at least in comparison
to the situation today,
which again, I think we'll, you know,
will tend to be extremely positive for the economics
of the kinds of companies that we invest in.
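[Editor's note: a minimal sketch, in Python, of the big-model/small-model cascade described above, written as a toy request router. The model names, costs, and the keyword-based difficulty heuristic are all hypothetical; a production router would use a learned classifier and real pricing.]

    # Toy router: send routine requests to a small, cheap local model and
    # escalate only the hard ones to a large hosted "God model."
    from dataclasses import dataclass

    @dataclass
    class Model:
        name: str
        cost_per_m_tokens: float  # assumed serving cost, in dollars

    SMALL_LOCAL = Model("small-local-model", 0.10)       # hypothetical
    BIG_HOSTED = Model("frontier-hosted-model", 15.00)   # hypothetical

    def difficulty(prompt: str) -> float:
        """Toy difficulty score based on keywords; purely illustrative."""
        hard_markers = ("prove", "multi-step", "diagnose", "legal analysis")
        hits = sum(marker in prompt.lower() for marker in hard_markers)
        return hits / len(hard_markers)

    def route(prompt: str, threshold: float = 0.2) -> Model:
        """Easy prompts go to the small model, hard ones to the big model."""
        return BIG_HOSTED if difficulty(prompt) >= threshold else SMALL_LOCAL

    for prompt in ("Summarize this customer support ticket.",
                   "Prove this multi-step scheduling bound."):
        chosen = route(prompt)
        print(f"{prompt!r} -> {chosen.name} (~${chosen.cost_per_m_tokens}/M tokens)")

The design point mirrors the argument in the conversation: most tasks don't need the smartest model, so volume flows to the small, cheap models while the frontier models sit at the top of the cascade.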
Yep. And then startups are also starting to go after
new chip design as well. Which is, that's the other thing is,
yeah, you have these disruptive startups. And actually,
just for a moment on the chips, we're not really big investors in chips
because it's kind of a big, it's kind of a big company thing.
But it's a little bit of historical happenstance that AI
is running on quote-unquote GPUs, you know,
where GPU stands for graphics processing unit.
So, and basically just for people who haven't tracked this,
there were basically two kinds of chips that
made the personal computer happen, the so-called CPU, central processing unit,
which classically was the Intel x86 chip.
It's kind of the brain of the computer.
And then there was this other kind of chip called the GPU or graphics processing unit
that was the sort of second chip in every PC that does all the graphics.
And this is graphics, like, you know, 3D graphics for gaming or for CAD/CAM or for anything
else, you know, Photoshop or for anything that involves, you know, lots of visuals.
And so the kind of canonical architecture for a personal computer was a CPU and a GPU.
By the way, same thing for smartphones.
By the way, and over time, you know, these have kind of merged.
And so, like, a lot of CPUs now have GPU capability built in.
Actually, a lot of GPUs now have CPU capability built in.
So, you know, this has gotten fuzzy over time, but like that was like the classic breakdown.
But the fact that that was the classic breakdown, you know, kind of meant that while Intel had a, you know,
monopoly for a long time on CPUs, there was this other market of GPUs, which Nvidia, you know,
basically fought the GPU wars for 30 years and came out the winner, like, it was the best
company in the space.
But it was like a hyper-competitive market for graphics processors.
It was actually not that high margin.
It was actually not that big.
And then basically, it turned out that there were two other forms of computation
that were incredibly valuable that happened to be massively parallel in how they operate,
which happened to be very good fits for the GPU architecture.
And those two basically highly lucrative additional applications were cryptocurrency,
starting about 15 years ago and then AI, starting about, you know, whatever, four years ago.
And so, and Nvidia, like, I would say very cleverly set itself up with an architecture that works very well for this.
But it's also just a little bit of a twist of fate that it just turns out that if AI is the killer app,
it just turns out that the GPU architecture is the best legacy architecture suited to it.
And I go through that to say, like, if you were designing AI chips from scratch today, you wouldn't build a full GPU.
You would build dedicated AI chips that were much more specifically adapted to AI and would have, I think it would just be much more economically efficient.
you know, Jen, to your point, there are startups that are actually building entirely new kinds of chips oriented specifically for AI. And, you know, we'll have to see what happens there. You know, it's hard to build a new chip company from scratch. You know, it's possible that one or more of those startups makes it on their own. And some of them are, you know, doing very well. It's also possible, of course, that they get bought, you know, by big companies that have the ability to scale them. And so, you know, we'll see exactly how that unfolds. And of course, we'll also, by the way, see, you know, the Koreans are going to play here.
For sure, the Japanese are going to play, and then, you know, the Chinese in a major way as well.
And, you know, they have their own, you know, native chip ecosystem that they're building up.
And so there are going to be many choices of AI chips in the future.
And it's going to be a, you know, that'll be a giant battle that we observe very carefully
and that we make sure that our companies basically are able to take full advantage of.
Well, on the topic of international, you mentioned Kimi earlier.
So it seems like some of the best open source models today are from China.
Should this be worrisome to folks?
How are you thinking and talking about this topic with folks in D.C.?
I know you were just there last week.
How much of this is a concern for U.S. companies,
particularly just having seen the rise of China do unnatural things in solar markets,
car markets?
Are they kind of flooding the ecosystem so that they can eventually kind of take share
and increasingly own the ecosystem?
Yeah, so, you know, a couple things.
So one is, you know, you want to start these discussions
by just kind of saying, like, you know,
there's vigorous debate in the U.S. and around the world
of, like, you know, how much are we in a new Cold War
with China, you know, and exactly like how hostile,
you know, should we view them in it?
You know, it's very tempting, by the way.
It's very tempting, and I think there's a very good case to be made
that we're in, like, a new Cold War that's, you know,
that in a lot of ways is like the U.S. versus USSR in the 20th century.
You know, it is kind of going to be, it is more complicated than that
because the U.S. and the USSR were never really intertwined from a trade standpoint.
And a big part of that, quite frankly, was the USSR never really made anything that anybody else needed,
I guess, other than weapons.
But, like, you know, the USSR's primary exports were literally like, you know, literally like wheat and oil.
Whereas, of course, China exports just a tremendous number of physical things, right,
including like a huge part of like the entire supply chain of parts that basically go
into everything that American manufacturers, you know, kind of make, right?
And so by the time a U.S., you know, whatever, by the time an American company brings a toy
to market, right, or a, you know, or a car or anything or a computer or a smartphone
or whatever, like it's got a lot of componentry in it that was made in China.
So there is a much tighter interlinkage between the American and Chinese economies than there
was between the American and Soviet economies.
And, you know, maybe, you know, Adam Smith or whatever might say, you know, that's good news
for peace and that, you know, both countries need each other. By the way, the other part of that
argument is the Chinese, basically, the Chinese, you know, the Chinese governance model is based on
high employment, you know, because, you know, if, you know, at least all the geopolitical people
say if China ended up with like 25 or 50 percent unemployment, that would cause civil unrest,
which is the one thing that the CCP doesn't want. And so the corresponding part of the trade
pressure is China needs the American export market. You know, the American consumer is like a third of
the global economy, a third of global consumer demand. And so, you know, China needs the U.S.,
or, all of a sudden, a lot of its factories would go kind of instantly bankrupt
and, you know, would cause mass unemployment and unrest in China. So, so anyway, like, you know,
there is this complicated, it's a complicated, intertwined relationship. Having said that, you know,
the mood in D.C., basically, for the last 10 years, on a bipartisan basis, has been that we need to take,
we, the U.S. need to take China more seriously as a geopolitical foe. And, you know, under that school of
thought, there's sort of, you know, there's the military dimension, which is, you know, this
is the risk of some kind of war in the South China Sea,
the risk of some kind of war around Taiwan.
And so that has everybody in Washington on high alert.
You know, there's also this economic question
around the kind of deindustrialization of the U.S.
and the potential reindustrialization
and what that means about, you know, dependence on China.
And then there's this AI question.
And the AI question is an economic question,
but it's also like a geopolitical question,
which is, okay, you know, basically AI is essentially
only being built in the U.S. and in China.
You know, the rest of the world either, you know, can't build it or doesn't want to,
which we could talk about.
So it's basically U.S. versus China.
And then AI is going to proliferate all over the world,
and is it going to be American AI that proliferates all over the world
or is it going to be Chinese AI that proliferates all over the world?
And so, and I would say just generally across party lines in D.C.,
the things I just went through are kind of how they look at it.
And the Chinese are in the game.
And so, you know, the Chinese are in the game for sure, you know, with software, you know,
DeepSeek, you know, was kind of the big one, you know, kind of fired the starting gun
in the software race, and now you've got, I think it's, I think you've got four. It's like
DeepSeek, which is, so DeepSeek is an AI model from actually a hedge fund in China.
It's a little bit, it kind of took a lot of people by surprise.
Then Qwen is the model from Alibaba.
Kimi is from another startup, oh, called Moonshot.
The company's called Moonshot.
And then there's, you know, and then, you know, there's also Tencent and Baidu and
ByteDance, you know, that are all primary, you know, companies
doing a lot of work in AI.
And so, you know, there's somewhere between three to six, you know,
kind of primary AI companies,
and then there's, you know, tremendous numbers of startups.
And so, you know, they're in the race on, you know,
they're in the race on software.
They are, you know, working to catch up on chips.
They're not there yet, but they're working incredibly hard to catch up.
And just as an example of that, you know,
at least the common understanding, you know, in the U.S.
is that the reason you haven't seen the new version of Deep Seek yet
is that basically the Chinese government
has instructed them to build it only on Chinese chips
as a motivator to get the Chinese chip ecosystem up and running.
And then the main chip company there is Huawei, although there could be more in the future.
And then there's, you know, so there's that.
And then there's everything to follow, which is basically AI in kind of robotic form, right?
And so there's this basically global technological economic robotics competition that's kicking off.
And, you know, China kind of starts out ahead on robotics because they're just ahead on so many of the components that go into robots.
because the sort of, like I said, the kind of entire supply chain of like
electromechanical things, you know, basically moved from the U.S. to China 30 years ago
and has never come back.
So that's kind of how D.C. lands on it.
And I would say, you know, D.C. is watching it, you know, quite carefully.
The big kind of supernova moment this year was the DeepSeek release.
The DeepSeek release was surprising on a number of fronts.
One was just how good it was.
And again, along this line of, it took the capability set that was running in large
models in the cloud and kind of shrunk it onto a, you know, into a sort of a reduced size,
you know, a smaller version of sort of equivalent capabilities that you could run on small amounts
of local hardware. And so there was that. And then it was also a surprise that it was released
as open source, and particularly open source from China, because China does not have a long history
of open source. And then it was also a surprise that it actually came from a hedge fund. So it didn't
come from a big, you know, sort of university research lab. It didn't come from a, you know,
from a big tech company. It came from a hedge fund. And it, like, as far as we can tell,
it basically is this somewhat idiosyncratic situation where you just have this incredibly
successful quant hedge fund with all these, you know, super geniuses. And the founder of that hedge fund,
you know, basically decided to build AI. And, you know, at least external indications are this
was a surprise even to the Chinese government. It's impossible to prove, you know, what the Chinese
government was surprised by or not. But, you know, at least the atmospherics are that this was
not exactly planned. This was not a national champion tech company at the time that Deep
Seek was released. It sort of came out of left field, which, by the way, is very encouraging for
the field that it was possible for somebody to do that, kind of who was unknown, right? Because
it kind of means that maybe you don't need all these, you know, super genius, superstar
researchers. Maybe actually smart kids can just build this stuff, which I think is the direction
things are headed. And so that kicked off, I would say, like this kind of, I don't know,
copycat is the wrong word, but that was sort of, it feels like the success of DeepSeek and the
success of DeepSeek from China as open source kind of kicked off
a sort of trend in China of releasing these open source models.
You know, look, the cynics, you know, in D.C. would say, you know, yeah, like they're dumping,
right? They're obviously dumping. They're trying to, you know, they see that the West has
this opportunity to build this giant industry. You know, they're trying to commoditize it right
out of the gate. You know, there's probably something to that. You know, the Chinese industrial
economy does have a history of, you know, sort of, let's say, subsidized production that leads to
selling, you know, selling things below cost in some cases. But I think also it's like, I think,
that's almost too cynical a view also, because it's just like,
all right, wow, like they're really in the race, like
open source, closed source, whatever, like
they're actually really in the race.
You know, we've talked in the past, I think, on
LP calls about, you know, these policy
fights that, you know, we've been having in D.C. for the last
two years. And, you know, there was a big, pretty
big push within the U.S. government, you know,
two years ago to basically, you know, restrict
or outright ban, you know, a lot of AI.
And, you know, it's very easy for a country that is
the only game in town to have those conversations.
It's quite another thing if you're actually in a foot race
with China. And so I think actually the policy landscape in D.C. has, I would say, improved
dramatically as a consequence of sort of an awareness now that this is actually a two-horse race,
not a one-horse race. For sure. Yeah, actually, on that point, I'll jump ahead here to policy
and regulation, just because it seems like the current stance of 50 different sets of AI laws
by state seems like a catastrophic way to put us effectively with one of our
hands tied behind our back here in terms of the AI race. What's the state of play on that? Are folks
recognizing that that would be catastrophic for progress and development? Where do most people
at least stand on that topic today? Yeah, so it's a little bit complicated. So I'll rewind to say, like, two
years ago, I was very worried about, like, really ruinous federal legislation on AI.
And there was, you know, we engaged kind of very heavily at that point, which we talked about
in the past. And I think the good news on that is I think the risk of that sitting here today is very
low. There's very little mood in D.C. on either side of the aisle to really, you know,
essentially there's very little interest in doing anything that would prevent us from beating
China. So, you know, on the federal side, things are much better now. There will be issues and
their attentions in the system, but I think things are looking pretty good. That has translated,
Jen, to your point, that's translated a lot of the attention to the states. And basically,
what's happened is, you know, under our system of federalism, you know, the states get to pass their own
laws on a lot of things. And so, yeah, basically, you know, a lot of, you know,
with these things, it's always a combination. A lot of well-meaning people are trying to figure out
what to do at the state level. And then, of course, there's a lot of opportunism where AI is just
the hot topic. And so if you're a, you know, aggressive up-and-coming state legislator or whatever
in some state, then you want to run for governor and then president, you know, you want to kind of
attach yourself to the heat. And so there's like a political motivation to do state-level
stuff. Yeah, and, you know, sitting here today, like, we're tracking on the order of 1,200 bills
across the 50 states.
And by the way, not just the blue states, also the red states.
And so, you know, I've, you know, for the last, like, five years or whatever,
I spent a lot of time complaining about, you know,
kind of what Democratic politicians are threatening to do to tech.
There's also a lot of Republicans that, like, Republicans are not a block on this,
and there are quite a few, like, local Republican officials in different states
that also, I think, have, you know, let's say, you know,
misinformed or ill-advised views and are trying to put together,
put out bad bills.
You know, it's a little bit weird that this
is happening, in that, you know, the federal government does have regulation of interstate
commerce, and, you know, technology, AI, kind of by definition, is interstate. Like, you know,
there's no AI company that just operates in California or just operates in, you know, Colorado or Texas.
You know, AI, of all technologies, AI is obviously something of sort of national scope. You know,
it's sort of obvious that the federal government should be the regulator, not the states.
But the federal government needs to assert itself, needs to step in. There was actually an attempt to do that.
There was an attempt to add a moratorium on state-level AI regulation
that basically would reserve the right of the federal government
to regulate AI and sort of prevent the states
from moving forward with these bills.
That was, I think, part of the negotiation
for the, quote, one big, beautiful bill.
And then there was a deal behind that,
and that deal kind of blew up at the last minute.
And that moratorium didn't happen.
And, you know, in fairness to the critics of that moratorium,
it probably was too much of a stretch.
It was definitely too much of a stretch
to get enough support to pass,
but it was also probably too much of a stretch
in terms of restricting the states
from certain kinds of regulation
that they really should be able to do.
So it just didn't quite come together.
There's a very active,
we're having very active discussions in D.C. right now
about kind of the next, you know,
the kind of the next turn on that.
You know, the administration is,
I would say the administration is very supportive
of the idea of the federal government
being in charge of this,
as part of it being an actual, you know,
50 state issue and an issue of national importance.
And then, you know, I'd say most Congress people
on both sides of the aisle,
tend to get this. So we just, we kind of have to figure out a way to, you know, to land this,
but I think that'll happen. Some of the state-level bills are wild. Colorado passed
a very draconian regulation bill last year, against, like, furious objections from the local
startup ecosystem in and around Denver and Boulder. And actually, they're now
trying to reverse their way out of that bill, you know, a year later.
Maybe some of the nuance of it, like the algorithmic discrimination piece.
What were some of the extreme versions of what they had proposed?
Yeah, so the really draconian one was,
the one that we really fought hard was the one in California,
which was called SB 1047.
And it was basically modeled after what was called the EU AI Act,
so the European Union's AI Act.
And this is the backdrop to all the U.S. stuff,
which is the EU passed this bill called the AI Act.
I don't know, whatever, two years ago.
And it basically has killed AI development.
But it's actually killed AI development in Europe to a large extent.
And then it even is so draconian that
Even big American companies like Apple and Meta are not launching leading-edge AI capabilities in their products in Europe.
Like, that's how, that's how, like, draconian that bill was.
And it's sort of a classic, it's a classic kind of European thing where they, like, you know, like, they just thought that, you know, they have this kind of view that it's just like, well, you know, if we can't be the leader.
They literally say this, by the way: if we can't be the leaders in innovation, at least we can be the leaders in regulation.
And then they passed this, like, incredibly, you know, kind of ruinous self-harm, you know, kind of thing.
and then, you know, a few years passed
and they're like, oh, my God, what have we done?
And so they're, you know, they're kind of going through
their own version of that.
By the way, you know, I, you know, when I talk about Europe,
I tend to be very dark about the whole thing.
I will tell you, the darkest people I know about Europe
are the European entrepreneurs who move to the U.S.,
who are just, like, absolutely furious about what's happening in Europe on this stuff.
But even there, like, it's so bad in Europe,
like they shot themselves in the foot so badly
that there's actually a process now at the EU
to try to unwind that.
They're trying to unwind the GDPR.
So anyway, for people tracking Europe, Mario Draghi is the former, I guess, Prime Minister of Italy,
did this thing about a year ago called the Draghi Report, which is the report on European competitiveness.
And he kind of outlined kind of in great detail all the ways that Europe was holding itself back,
and part of it was over-regulation areas like AI.
So they're trying to reverse out of that, or making gestures, you know, we'll see what happens.
In the middle of all that, California sort of inexplicably decided to basically copycat the EU AI Act
and try to apply it to California,
which might strike you as completely insane,
to which I would say, yes, welcome to California.
And, you know, it was basically this, like,
Sacramento political dynamic that kind of got crazy.
It would have, you know, completely killed, you know,
AI development of California.
Fortunately, our governor vetoed it at the last minute.
It did pass both houses of the legislature,
and then he vetoed it at the last minute.
Jen, to your point, it would have done,
it would have done a whole bunch of things
that were ruinously bad,
but one of the things it would have done
is it would have assigned downstream
liability to open source developers.
And so, you know, we talked about, you know, the Chinese open source thing.
Okay, so you've got Chinese out there with open source.
Now you're going to have American companies that have open source AI.
By the way, you're also going to have American academics and just like independent people
on their nights and weekends developing open source, you know, which is a key way that all this technology
proliferates.
And so this law would have assigned downstream liability to any misuse of open source to
the original developer of the open source.
And so, you know, you're an independent developer or you're an academic or you're a startup,
you develop and release an AI model.
The AI model works fine.
The day you release it, it's great.
But like five years later, it gets built into a nuclear power plant,
and then there's a meltdown of the nuclear power plant.
And then somebody says, oh, it's the fault of the AI.
The legal liability for nuclear meltdown or for anything,
any other practical real world thing that would follow in the out years
would then be assigned back to that open source developer.
Of course, this is completely insane.
It would completely kill open source.
It would completely kill startups doing open source.
It would completely kill academic research, like, in its entirety,
you know, anything in the field.
And so, you know, like, that's the level of playing with fire, you know,
kind of that these state level politicians have become enamored with.
Like I said, I think the good news is the feds understand this.
I suspect that this is going to get resolved.
But it does need to get resolved because, you know, just as a country,
it just doesn't make any sense to let the states kind of operate suicidally like this.
And so that's what we're doing.
You know, we talk about this.
We call this our Little Tech agenda.
We're extremely focused on the freedom of startups to innovate.
We are not trying to argue, you know, many, many other issues.
We operate in a completely bipartisan fashion.
We have extensive support, you know, on both sides of the aisle and for both sides of the aisle.
So it's a truly bipartisan effort, very policy-based.
And, you know, I think very much aligned with the interests of the country broadly.
And so that is what we're doing.
And then the other question we get, we get actually, you know, in some cases from LPs,
but in a lot of cases, actually from employees, is like, okay, why us?
Right? Like, you know, with any sort of, you know, policy question like this, there's always this collective action question, which is this like, you know, tragedy of the commons, which is, in theory, like everybody, every venture firm, every tech company, whatever should be weighing in on these things. And so at some point, it falls on somebody's shoulders to fight these things. And we, we, Ben and I just basically concluded that the stakes here were just way too high. You know, if we're going to be the industry leader, we just have to take responsibility for our own destiny, you know, for better or for worse,
I think that's the cost of doing business for being the leader in the field right now.
Before we get off the topic of AI, I want to go back to one question that was submitted in.
So do you think usage-based or utility is the right way to price in AI compared to seats?
Ah, that is a fantastic question.
So this is one of these giant, this is in my list of what I call the trillion-dollar questions,
where, you know, depending on what the answer is, it'll drive, you know, trillions of dollars in market value.
So, yeah, so usage-based pricing is, it's actually fairly amazing.
If you think about this from a startup standpoint, from a venture standpoint,
it's actually fairly amazing what's happened.
And I'm not really talking about this in public because I don't really,
I guess I don't want it to stop.
I think it's actually quite amazing, which is you have these technology companies,
you know, these big tech companies with these incredible R&D capabilities
that are building these big models, these big AI models with this incredible,
you know, new kind of, new kind of intelligence.
And then it turns out that they were already in a war, they were already in the cloud war, right?
And so they were already in the war for kind of cloud services.
And this is like AWS versus Azure versus Google Cloud,
you know, and then all these other cloud efforts.
And so what actually happened was they sort of, like,
there's an alternate universe in which they basically just kept all of their magic AI secret
and captive and just used it in their own business or used it to just compete with more companies,
you know, in more categories.
But instead, what they've done is they've basically, you know,
commoditized it, and commoditized is maybe too strong a word, but they have proliferated their magic new technology
through their cloud business, which is this business that just has these incredible scale
kind of components to it, you know, and sort of this hypercompetition between the providers
and these, you know, these prices that come down very fast. And so you've got like the most
magic new technology in the world. And then it's basically being served up by those companies
as a cloud business and made basically available to everybody on the planet to just click
and use for relatively small amounts of money, and on a usage basis, which means usage is great for startups because you can start easily, right? There's basically no fixed cost. For a startup building an AI app, they don't have giant fixed costs because they can just tap into the OpenAI or Anthropic or Google or Microsoft or whatever cloud offering, you know, intelligence tokens by the drink, and just get going. And so from the startup standpoint, it's like this marvelous thing where the most magical thing in the world is available by the drink. You know, it's absolutely amazing. And, you know, by the way, that model's working: those companies are happy and they're growing really fast, and they're, you know, happily reporting massive cloud revenue growth, and, you know, they're happy with the margins and so forth.
And so, you know, I think generally it's working.
And those businesses are, I think, likely to get much larger.
And so I think, you know, generally that's going to work.
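To make the tokens-by-the-drink economics concrete, here is a minimal sketch of what usage-based costs look like for a startup building on a hosted model API. Every rate, token count, and volume below is a hypothetical placeholder for illustration, not any provider's actual pricing.

```python
# Minimal sketch of "intelligence by the drink" economics for an AI startup.
# All rates and volumes below are hypothetical placeholders, not real provider pricing.

PRICE_PER_1M_INPUT_TOKENS = 3.00    # hypothetical dollars per 1M input tokens
PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # hypothetical dollars per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Marginal cost of a single model call under usage-based pricing."""
    return (input_tokens / 1_000_000) * PRICE_PER_1M_INPUT_TOKENS \
         + (output_tokens / 1_000_000) * PRICE_PER_1M_OUTPUT_TOKENS

# The key property: cost scales with usage, with essentially no fixed cost to get started.
daily_requests = 10_000
avg_input_tokens, avg_output_tokens = 2_000, 500   # hypothetical averages per request
daily_cost = daily_requests * request_cost(avg_input_tokens, avg_output_tokens)
print(f"Inference cost at this volume: ${daily_cost:,.2f} per day")
```

At zero usage the bill is zero, which is the "no giant fixed costs" point in the passage above.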
But to the question, like, that doesn't mean that the optimal pricing model for, for example, all of the applications should be tokens by the drink.
And in fact, very much, I think, not the case.
You know, we spend a lot of time working on this. We actually have, you know, dedicated experts on pricing in our firm. We spend a lot of time with our companies working on pricing because it's, you know, it's really this magical art and science that a lot of companies don't take seriously enough. So we spend a lot of time with our companies on this.
And of course, a core principle of pricing
is you don't want to price by cost.
If you can avoid it, you want to price by value, right?
Like, you want to have a price where you're getting a percentage
of the business value.
You know, especially when you're selling to businesses,
you want to price as a percentage of the business value
that you're getting.
And so you do have some AI startups
that are pricing by the drink for certain things
that they're doing, but you have many others
that are exploring other pricing models,
you know, some that are just like replications
of SaaS pricing models,
but you also have other companies
who are exploring pricing models,
for example, of, well,
if the AI can actually do the job of a coder
or the AI could do the job of a doctor or a nurse
or a radiologist or a lawyer or a paralegal, right,
or whatever, or a teacher,
you know, basically can you price by value
and can you get a percentage of the value
of what otherwise would have been, you know, literally a person?
You know, or by the way,
equivalently, can you price by marginal productivity?
So if you can take a human doctor and make them much more productive because you give
them AI, you know, can you price as a percentage of kind of the productivity uplift, you know,
from the augmentation, the symbiotic relationship between the human being
and the AI.
And so I think what we see in startup land is a lot of experimentation happening on these pricing models.
And I think, again, I think that's like super healthy.
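As a rough illustration of the experiments described above, here is a toy comparison of seat-based, usage-based, and value-based pricing for a hypothetical AI application. Every number is made up and the structure is deliberately simplified; treat it as a sketch of the trade-off, not a recommendation or any real company's model.

```python
# Toy comparison of three pricing models for a hypothetical AI application.
# All numbers are invented for illustration; nothing reflects any real company's pricing.

def seat_pricing(seats: int, price_per_seat: float) -> float:
    # Classic SaaS: revenue tracks headcount, regardless of value delivered.
    return seats * price_per_seat

def usage_pricing(tokens_used: int, price_per_1m_tokens: float) -> float:
    # "By the drink": revenue tracks consumption, which tends to track cost.
    return tokens_used / 1_000_000 * price_per_1m_tokens

def value_pricing(measured_uplift: float, take_rate: float) -> float:
    # Price as a percentage of the business value or productivity uplift delivered.
    return measured_uplift * take_rate

# Hypothetical customer: 50 seats, 200M tokens/month, $100k/month of measured uplift.
print(seat_pricing(50, 40.0))            # $2,000/month
print(usage_pricing(200_000_000, 5.0))   # $1,000/month
print(value_pricing(100_000.0, 0.15))    # $15,000/month: captures a share of the uplift
```

The point of the comparison is that only the third model ties revenue to the value the AI actually replaces or augments, which is why it is the one the passage describes startups experimenting toward.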
I, you know, I gave this little speech about this: high prices are really
underappreciated.
High prices are often a favor to the customers.
It's actually really funny.
The naive view on pricing is the lower the price, the better it is for the customer.
The more sophisticated way of looking at it is higher prices are often good for the customer
because a higher price means that the vendor can make the product better faster.
Right.
Like you can actually – companies with higher prices, higher margins can actually invest more in R&D
and they can actually make the product better.
And, you know, most people who buy things aren't just looking for the cheapest price.
They want something that's going to work really well.
And so often with high prices, the customer doesn't ever say this,
it will never show up in a survey,
but the high price can actually be a gift to the customer
because it can make the vendor better,
it can make the product better and ultimately make the customer better off.
And so I'm very encouraged by the degree
to which the AI entrepreneurs are willing to run these experiments.
And we'll have to see where it pans out.
But at least so far, I feel good about the, you know,
at least the attitude in the industry about it.
Awesome.
I actually, as you were going through it, had probably 10 more follow-up questions,
but I'm going to go back to a topic you touched on briefly, the trillion-dollar questions.
Will open source or closed source win?
Does it feel like we've come out on this debate, or where do you put that?
No, I think this is still open.
I think this is still very open.
You know, like the closed-source models
keep getting better.
By the way, generally, if you just like take the temperature
of the people working at the big labs
who work on the big proprietary models,
like generally what they'll tell you is,
progress is continuing at a very rapid pace.
You know, there's this periodic concern that kind of shows up online, or in the market, which is like, you know, maybe the capabilities are topping out. But the people working at the big labs are like, oh, no, we have like 800 new ideas. Like, we have tons of new ideas. We have tons of new ways of doing things. We might need to find new ways to scale, but we have a lot of ideas on how to do that.
We know a lot of ways to make these things better. And, you know, we're basically making
new discoveries all the time. So, like I would say, you know, generally the people working
like across all the big labs are pretty optimistic. And so, like, I think the big models are going to
continue to get better, you know, very quickly here.
And then, overall, the open source models continue to get better.
And like I said, you know, every, I don't know, every month or something,
there's like another big release of something like this Kimi thing.
Where it's just like, wow, like, you know, that's amazing.
And, you know, wow, they really like shrunk that down and got that capability on a very small form factor.
And so, yeah, that's the case.
And then, you know, maybe just the third kind of thing to bring up is the other really nice
benefit of open source is that open source is the thing that's easy to learn from.
Right. And so if you're a computer science professor who wants to teach a class on AI, or if you're a computer science student that's trying to learn about it, or if you're just a normal engineer in a normal company trying to learn this new thing, or, by the way, somebody in your basement at night with a startup idea, the existence of these state-of-the-art open source models is amazing, because that's the education that you need. These open source models actually show you how to do everything, right?
And what that's leading to, right, is that the knowledge of how to build AI is, like, proliferating very fast.
Again, as compared to a counterfactual world in which it was all basically bottled up in two or three big companies.
And so, you know, the open source thing is also just proliferating knowledge, and then that knowledge is generating a lot of new people.
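As a concrete illustration of that education point: with an open-weights model you can download the weights, inspect every layer, and run it on your own machine. A minimal sketch follows, assuming the Hugging Face transformers and PyTorch packages are installed; GPT-2 is used here only because it is small and openly available, and you would swap in whichever newer open-weights model you actually want to study.

```python
# Minimal sketch: load a small open-weights model locally and inspect it.
# Assumes `pip install transformers torch`; GPT-2 is just a small, openly available example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # substitute any open-weights model you want to learn from

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The "education" part: the whole architecture is there to read.
print(model.config)   # layer count, hidden size, attention heads, vocabulary size
print(model)          # the full module tree, layer by layer

# And it runs on your own hardware, no metered API required.
inputs = tokenizer("Attention, in one sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```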
And so I would say, as you guys have all seen sitting here today, AI researchers are at an enormous premium.
You know, AI researchers today are getting paid more than professional athletes, right?
Like, you know, and that's, right, that's the supply demand imbalance.
There aren't enough of them to go around, but, you know, again, shortages create gluts.
The number of smart people in the world who are coming up to speed very quickly on how to build these things is growing fast.
I mean, some of the best AI people in the world are like 22, 23, 24, like they, you know, kind of by definition, they haven't been in the field that long.
You know, they can't have been experts their whole lives, right?
So, you know, they kind of have to have come up to speed over the course of the last four or five years.
And if they've been able to do that,
then there's going to be a lot more in the future
that are going to do that.
And so just the sort of spread of the level of expertise
on this technology is happening now very quickly.
So, yeah, I mean, I think it's still, like I said,
I think it's still a race.
And by the way, you know, look,
the long-term answer may well just be both.
You know, like I said, if you believe my pyramid industry structure,
then there will certainly be a large business
of whatever is the smartest thing,
almost regardless of how much it costs,
but there will also be this just giant volume market
of smaller models everywhere,
which is what we're also seeing.
Yep, yep.
Another question you had posed at that point in time was whether incumbents or startups will win.
And at that point in time, I think there was a mixed bag in how the incumbents were approaching AI.
I think that's radically changed in the last two years.
And then, on the other side, there's the blossoming of startups, some of which are now maybe migrating into the incumbent category, given how far they've come since that time.
You want to take that question and give your assessment of where the state of the world is?
Yeah.
So, I mean, look, you know, the big companies are definitely, you know, playing hard.
Google's playing hard, Meta's playing hard, Amazon, Microsoft.
You know, there's a bunch of these companies that are, you know,
kind of in there, you know, very aggressively.
And then you've got these, you know, what we call the new incumbents like Anthropic and OpenAI.
But you also have like, you know, even in the last two years,
you've had this birth of all of a sudden, like, brand new companies that are almost
instant incumbents.
And you could say xAI is one of those, as is Mistral.
By the way, Mistral is the great outlier to my Europe thing from earlier.
Like, Mistral is actually doing very well as sort of the European, you know, French national, continental European, kind of AI champion.
Sort of the, you know, the exception that proves the rule.
But, you know, there's a bunch of these now that are like, you know, doing quite well and are kind of becoming new incumbents.
And then, of course, there's tons of startups.
By the way, there's, and then there's actual foundation model startups, right?
And so, you know, we funded Ilya out of OpenAI to do a new foundation model company.
We funded Mira Murati, also out of OpenAI.
We funded Fei-Fei Li out of Stanford to do a world model foundation model company.
And so, you know, there are new swings all, you know, all early, but very promising to kind of build, you know, new incumbents quickly.
And so, you know, that's all happening.
And then, you know, and then on top of that, there's just this giant explosion of AI application companies, right?
And so there's basically companies, usually startups, that take the technology and then, you know,
apply it in a specific domain, whether that's law or medicine or education or,
you know, creativity or whatever.
But again, here it's just like, it's amazing kind of how sophisticated things are getting
very quickly.
So I was going to talk about the application companies for a moment.
So like an application company, a classic example is like Cursor, is like an application company. So they take the core AI capability, which they purchase by the drink from, you know, Anthropic or OpenAI or Google, you know, tokens by the drink. And then they build basically a code editor, what we used to call an IDE, an integrated development environment, basically a software creation system. So they build like an AI coding system on top of the Anthropic or OpenAI or whatever, you know, kind of big models. So they built that, and the critique of those companies in the industry has been,
oh, those are what are called GPT wrappers.
It's kind of the pejorative.
And the idea basically being, well, they're not actually doing anything that's going to preserve value, because the whole point of what they're doing is they're surfacing AI,
but it's not their AI.
The AI that's being surfaced is from somebody else.
And so these are kind of these pass-through shell things
that ultimately won't have value.
It actually turns out what's happening
is kind of the opposite of that,
which is the leading AI application companies like Cursor.
I mean, first of all, what they're discovering is
they're not just using a single AI model.
They actually, as these products get more sophisticated,
they actually end up using many different kinds of models
that are kind of custom tailored to the specific aspects
of how these products work.
And so they may start out using one model,
but they end up using a dozen models.
And then in the fullness of time,
it might be 50 or 100 different models
for different aspects of the product, that's A. And then B, they end up building a lot of their own models.
And so a lot of these, the leading edge application companies are actually backward
integrating and actually building their own AI models because they have the deepest
understanding of their domain, they're able to build the model that's best suited to that.
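To illustrate the "dozens of models for different aspects of the product" point, here is a toy sketch of the kind of routing layer an application company might build. The task categories and model names are hypothetical, and this is not Cursor's or anyone else's actual architecture.

```python
# Toy sketch of multi-model routing inside an AI application.
# Task names and model names are hypothetical; this is not any real product's design.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    model: str    # hosted frontier API, in-house fine-tune, or open-weights on own GPUs
    reason: str   # why it suits this slice of the product

ROUTES = {
    "autocomplete":   ModelChoice("in-house-small-code-model", "latency-critical, cheap, runs close to the editor"),
    "chat":           ModelChoice("frontier-api-model",        "hardest reasoning, rented by the token"),
    "codebase_index": ModelChoice("open-weights-embedder",     "bulk embedding on the company's own hardware"),
    "commit_summary": ModelChoice("fine-tuned-mid-model",      "narrow task, tuned on domain data"),
}

def route(task: str) -> ModelChoice:
    """Pick the model best suited to this aspect of the product."""
    return ROUTES.get(task, ROUTES["chat"])  # default to the most capable model

if __name__ == "__main__":
    for task in ("autocomplete", "chat", "codebase_index"):
        choice = route(task)
        print(f"{task:>15} -> {choice.model} ({choice.reason})")
```

The design choice the sketch illustrates is exactly the one described above: start with one rented model, then peel off tasks to custom or open-weights models as the product deepens.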
And then, by the way, open source: they're also able to pick up and run open source models. And so if they don't like the economics of buying intelligence, you know, by the drink from a cloud service provider, you know, they can pick up one of these open source models and implement it instead, which these companies
are also doing. And so the best
of the AI application companies
are actually full-fledged
deep technology companies actually building their own
AI. And so that, you know, that's, I think... Small models, though, right?
Marc, when you think about God models versus small models, as you were describing that, would those be small? Would you categorize those as small... Well, some
of them, I mean, I will let them announce, you know, whatever they're doing,
whenever it's appropriate, but some of them are now also doing big model
development. And again, this
is also part of what... This is also part of learning
just in the last two years.
Well, so, like, here's a big learning just in the last two years,
which is very interesting, which is two years ago or three years ago,
for sure you would have said, wow, OpenAI is like way out ahead.
And, like, it's probably going to be impossible for anybody to catch up.
And then it's like, okay, well, Anthropic caught up.
And so, you know, they came out of OpenAI.
And so they had all the secrets, you know, whatever.
And so, okay, they caught up.
But surely nobody can catch up after them.
And then very quickly after that, there were a raft of other companies that caught up very fast.
And xAI is maybe the best example of that, which is like, you know,
xAI, you know, Elon's company. xAI is the company name, Grok is the consumer product version of it.
xAI basically caught up to, you know, state-of-the-art, OpenAI and Anthropic level,
in like less than 12 months from a standing start, right?
And so, and again, that kind of argues against any kind of permanent lead, right,
by any one incumbent that's just going to basically be able to lock the entire market down,
like if you can catch up like that.
And then as we've discussed, you know, the China part is all new in the last year, right?
The DeepSeek moment, I think, was in January or February of this year, right?
So less than 12 months ago.
And so now you've got like four Chinese companies that have effectively caught up.
And so, you know, so it's like, all right.
I mean, again, these are trillion-dollar questions, not answers.
But it's just like, wow, okay.
Like, it's one of these things where once somebody proves that it's capable,
it seems to not be that hard for other people to be able to catch up,
even people with far less resources.
And so, you know, I don't know what that does.
Maybe it makes you slightly more skeptical on the long-run economics of the big players.
On the other hand, maybe it makes you more bullish about the startup ecosystem.
It certainly should make you more bullish about startup application companies,
being able to do interesting things, which is why we're so excited about that.
You know, it should make you probably, you know, a bit more excited about,
certainly about China. On the other hand, the Chinese competition, putting pressure on the
American system to not screw itself up is very positive, so it should probably make you a little
bit more bullish on the U.S. And so, yeah, I think, you know, these are, yeah, these are live
dynamics, and I think we still need more time to pass before we know the exact answer.
I should say this, because sometimes, I don't know, it freaks people out when I say these are open questions.
When a company is confronted with fundamentally open strategic or economic questions, it's often a big problem because a company needs to have a strategy.
And the strategy needs to be very specific.
And a company has to make, like, very specific concrete choices about where it, like, deploys investment dollars and personnel.
And, like, the strategy has to be, like, logical and coherent or the company kind of collapses into chaos.
And so, like, companies need to answer these questions,
and if they get the answers wrong, they're really in trouble.
In venture, we have our issues,
but a huge advantage that we have is we can bet on multiple strategies
at the same time, right?
And we are doing this.
So we are betting on big models and small models, and proprietary models
and open source models, right, and, you know,
foundation models and applications, right,
and consumer and enterprise.
And so the portfolio approach, the nature of it is, like,
we are aggressively, basically, we are aggressively investing behind
every strategy that we've identified
that we think has a plausible chance of working,
even when that's contradictory
to another strategy that we're investing in.
And one reason is just that the world's messy
and probably a bunch of things are going to work.
And so like there's not going to be clean yes or no answers
to a bunch of this.
Like a lot of the answers to this,
I think, are just going to be "and" answers.
But the other is like if one of these strategies doesn't work,
like, you know, we're not trying to hedge per se,
but you know, we're going to have representation
in the portfolio of the alternate strategy.
And so we're going to have multiple ways to win.
So anyway, that's the goal.
That's the theory.
of why we are, you know, kind of taking the approach
in the space that we're taking.
And that's why I have a big smile on my face
when I say that there are these big open questions
because I think that actually works to our advantage.
It's a good segue to a16z questions,
because we've gotten a few so far,
and we have a few that were sent in ahead as well.
So I'll start one with the broad topic.
What is something you and Ben disagree and commit on?
Disagree and commit.
You know, we agree.
I mean, it's been, I was going to say, you know, we're an old married couple,
so we argue, you know, constantly, but we've been...
The romance is dead.
Long dead, yes, yes, yes, yes.
The fire has long since gone out.
But, yes, yes, we're in the park squabbling all the time.
So, yeah, I mean, so, look, we debate everything.
We argue about everything.
That said, like, you know, one of the things that's made our partnership work is, like, we do tend to come to the same conclusion. Like, each of us is open to being persuaded by the other one. And so we end up coming to the same conclusion most of the time. So I would say, specifically sitting here, there are like zero issues where I'm sitting here and I'm like,
I can't believe, you know, I'm putting up with this crazy thing on his part that he's doing, that I really disagree with but feel like I have to commit to, and I don't think vice versa either. And so we don't have any of those.
You know, quite honestly, the biggest thing that he and I discuss, and this is, by the way, not the most important thing we're doing, but it is a topic since somebody asked the question, the biggest thing he and I discuss, where I, I don't know, maybe I'm always second-guessing myself, or I never quite know where I should come out on it, is just, like, basically the public footprint of the company.
So, like, our presence
in the world in terms of, like,
public statements,
controversy, you know,
how we vocalize and express
our views on things.
And I would just say there's a real tension there, you know, maybe obvious, but a very important tension.
Like, generally speaking, the more out there we are and the more outspoken we are,
and the more controversial we are, the better for the business, in the sense that the entrepreneurs love it.
This was very clear at this point:
the founders want to work with people who basically are brave and controversial and take
controversial stands and articulate things clearly.
And they want that for a bunch of reasons.
One is because it's a demonstration of courage, which they appreciate.
But the other is because it teaches them who we are before they even meet us.
And that has just proven to be just like this incredible competitive advantage.
You know, long-term LPs will know, like, this is why we started with a very active marketing strategy from the very beginning.
And, like, it completely worked.
Like, the whole thing was if we're able to broadcast our message and we're able to basically be very clear in what we believe,
even at the point where it's controversial, like the best founders in the world are going to understand us before they even walk in the door.
Right, and they're going to know us even before they'd met us as opposed to everybody else in venture, at least at the time, that was basically just like keeping everything quiet, where they, you know, the founder just has no idea who these people are and what they believe.
And so that, that, like, worked incredibly well. It continues to work incredibly well.
It's, by the way, it's, you know, it's generally true across the industry. It's, it's like generally the case.
On the other hand, there are externalities to being, you know, publicly visible and to being controversial on many fronts.
We are, I would say this, we are trying very hard to thread this needle.
So like we're not backing off of generally being a company
that does a lot of outbound.
We, you know, Erik Torenberg and the team that he's built,
which, you know, we've talked to you guys about in the past,
is already off to the races.
You know, we're going to, you know,
we're tripling down on the idea of basically being the leaders
in articulating the tech and business issues that matter.
You know, the issues for sure that people need to be able to understand.
And that's proven to be very effective.
By the way, a fair amount of our comms are actually aimed at
Washington. Because, again, it's like if you're a policymaker in Washington and you're sitting
there 3,000 miles away and your entire information source is like East Coast newspapers that
hate Silicon Valley, like, that's bad. And so, you know, our ability to, like, broadcast, you know,
informed points of view on technology, we just, we meet people in D.C. all the time who say, yeah,
most of what I know about this topic, I learned from you guys, because I listen to the podcast,
I read the articles, I watched the YouTube channel. And so, you know, we're going to continue to do that.
And so, you know, overall we're kind of on our front foot on that stuff.
But, yeah, he and I do go back and forth a bit
on exactly how many third-rail topics we should touch
and how frequently.
And I would say we are trying to moderate that.
As Elizabeth Taylor said, as long as they spell your name right,
it oftentimes can be good in most scenarios,
particularly when it comes to little tech.
And also, I think embedded in that question is probably some degree
of the relationship that you and Ben have,
which is now going on 30-plus years at this point,
so much so that Marc has become one person representing both.
Some people refer to Marc as Andreessen Horowitz.
Ben and Marc have combined just into one person.
Yes.
That's the result of 30-plus years working together.
Okay.
So it's been two years since you've reorganized around AI, launched AD.
What do you think you got most right?
And in hindsight, is there anything that you underestimated or missed in that decision process?
No, I mean, look, we made plenty of mistakes.
I think those were the right calls.
I mean, AI was, like I said... to back up, the whole theory of venture that we've had from the beginning, which, you know, many people before us have had as well, and which I think is very correct, is that the money in venture is made when there's a fundamental architecture shift, like when there's a fundamental change in the technology landscape. And that's been true for, you know, venture basically
forever. And the reason is because if you have a fundamental change in technology, then you have
this period of creativity in which you can have basically aggressive, you know, very aggressive
kind of people, you know, kind of start these new companies. And they have this kind of shot to kind of
come in and you kind of win categories before big companies can respond. If there's no fundamental
change in technology, it's very hard to make startups work because the big companies just end up
doing everything. And so venture kind of, you know, sort of lives or dies on the basis of these waves, these transitions. And so there's always this question.
I mean, I would just say, the best venture capital firms in history, I think, are the ones
that were the most aggressive of being able to navigate from wave to wave, right?
And look, I was a beneficiary of this when I came to Silicon Valley in 1994, you know,
there was no venture firm in 1994 that was like the internet venture capital firm,
like it just didn't exist.
But there were a set of venture capital firms at the time, you know, in our case Kleiner Perkins, that said, oh, this is a new architecture, this is a new technology
change. It seems totally crazy. Everybody says you can't make money on it, whatever,
whatever. These kids are nuts, but like, we're going to make those bets. And so they were willing
to invest. And by the way, KP in the 90s invested not only in us, but also in Amazon and then Google
and like, you know, company after company after company. They invested in @Home, which basically
made home broadband work. You know, they invested in a fleet of companies. And they were a venture
capital firm that had started in the 1970s around, really around what was at the time called mini-computers,
which was like a, you know, three generations of technology back,
and they had navigated from wave to wave.
And, you know, the same thing is true for Sequoia.
The same thing's true for basically any successful venture firm
that has been in business for, you know, 30 or 40 or 50 years.
And so I think in this business, like of all businesses,
like you just, you need to get on to the new thing.
You know, it was, I mean, quite honestly, I think, pretty amazing that most of the venture ecosystem just decided to sit crypto out. And the number of VCs that we talked to between, call it, you know, the release of the Bitcoin white paper in 2009 to the beginning of the crypto war in 2021, who just basically said, oh, we're not going to do crypto. It was fairly... I never quite know what to do with the VC who says, oh, there's a new wave of technology and I'm very deliberately not going to participate in it. And I'm always like, is that not the job, right? So, like, I was fairly amazed by the VCs that didn't make the jump to crypto.
You know, they looked briefly smart during the crypto wars, I would say, of the last,
you know, three or four years, and I think they probably look maybe a little bit less smart now.
You know, AI is another one of these where there are certain firms that are jumping all over it,
and there are certain firms that are just kind of sitting back and letting it happen.
And by the way, there were certain firms that never made it to the Internet.
I mean, there were firms that were very well known in the 80s and very successful
that just, like, did not make the jump to the Internet and basically just petered out.
And so anyway, long-winded way of saying, I think in this business of all businesses,
you have to jump on the new wave.
And I think we got the magnitude of it right, that this is like a fundamental,
fundamental transformation inside the firm.
You know, AD is, you know, AD is doing great.
AD itself, I believe, is also a beneficiary of AI, right?
Because in two ways, one is a lot of the kinds of products
that AD companies build themselves benefit from AI.
And then also AI is a driver of demand in other sectors of AD,
like energy and materials.
And so, you know, I think that generally is very consistent and, you know,
is working well.
By the way, you know, crypto's back to being a, you know,
I would say an exciting industry as a consequence of all the policy changes.
And then there's even going to be, I think, intersections.
I think there's actually going to be quite a few intersections between AI and crypto.
And then biotech also, bio and health care, I think, are obviously going to be transformed by AI,
both on the health care side and on the actual drug discovery side.
And that's underway.
And so anyway, so like the individual efforts in the firm feel good and suitable for the time.
The interactions between the teams, and the hybrid ideas, you know, the companies
that are coming at these things from multiple angles, you know, feel really good.
You know, maybe the corollary question is like, you know, what do we feel like we're missing
right now? And I think the answer is, I don't think right now we're missing a vertical. Like, as of right now, there's not a specific vertical where we're just like, oh, we need the equivalent of a new unit or the equivalent of a new fund or whatever.
I don't see that at the moment. I think it's more executing extremely well in the
verticals that we have in front of us, and then, you know, being the best possible partner
to the portfolio companies.
Yeah, actually, on the point of AD: there's a lot of talk around AI taking jobs, et cetera, but ironically enough, jobs in the AD sectors have never been more in demand in the physical world, related to energy, related obviously to data center build-out, etc. So it seems like the pendulum is also swinging, from an accelerant standpoint,
from a society point of view.
You talked about the importance of society
also needing to be ready for tech adoption.
Have you seen that accelerating of recently?
What's your sentiment of how to actually increase that
just to also make sure the convergence of adoption
also falls in line with how quickly tech is actually being implemented?
Yeah, so, you know, look, we've talked about this before,
but, you know, for a very long time,
tech was just not a very relevant...
Look, if you go back over,
like whatever, 300 years, like, there's just like recurring waves of like total panic and freak out
caused by new technology. Or even you go back 500 years, you can go back to the printing press,
you know, which basically was hand in hand with the sort of creation of Protestantism,
which really changed things. And then, you know, you go back to, you know, there were just
always kind of, you know, continuous panics. There have been multiple waves of automation
panics for the last 200 years. You know, a lot of the foundational panic under Marxism was basically
a fear of the elimination of jobs through the application of automation.
A lot of the same arguments you hear today
about how AI is going to centralize all the wealth in the hands of a few people and everybody else is going to be poorer and immiserated, like, that basically is what Marx used to say, which I think was, by the way, wrong then and is wrong now, which we can talk about. But, you know, then even in the 1960s, there was this whole panic around AI replacing all the jobs. There's this great, long-forgotten episode, but it was a big deal at the time, during the Johnson administration. You read these AI pause letters today, you know, this one that just came out a few weeks ago that Prince Harry headlined, of all people, and, you know, the take is that AI is going to ruin everything. And it's like, in that same vein, in 1964, there was basically a group of the leading lights in academia, science and, you know, kind of public affairs.
There was this thing called the triple committee,
or the committee for the triple revolution.
If you do a Google search for it, like, committee for the triple revolution, Johnson White House or whatever, this thing will pop up.
And, you know, it was a very similar kind of
manifesto of like we need to stop the march of technology today or we're going to ruin
everything.
And then, you know, even in the course of the last 20 years, there was like a big panic that
outsourcing in the 2000s was going to take all the jobs.
And then it was actually robots, weirdly enough in the 2010s, which is amazing because
robots didn't even work in the 2010s and they kind of, you know, still don't.
But, you know, there's a panic around that.
And now there's kind of whatever level of AI panic.
And so, like, you know, like, I would just say, like, look, the way I would describe it is,
you know, we in Silicon Valley have always wanted the work that we do to matter.
You know, we spend most of our time, quite honestly,
with people telling us that everything that we're doing is stupid and won't work.
Like, that's the default position.
You know, and then basically that flips at some point into panic about how it's going to ruin everything.
You know, it's easy sitting out here to be cynical about that,
especially when you kind of see the patterns over time.
You know, my view is we need to be actually very respectful of that,
and we need to be very aware of that.
And basically, you know, I've used the metaphor of the dog that caught the bus.
Like, we always wanted to work on things that matter.
We are working on things that matter.
People in the rest of society actually really do care about these things.
And, you know, it's our responsibility to think that all through very carefully
and to do a good job, you know, both not just building the technology,
but also explaining it.
You know, look, I think we have a real obligation to, you know,
to really explain ourselves and engage on these issues.
In terms of how to measure how it's going, you know,
it's sort of the classic social science question,
which is like, okay, if you want to understand basically, you know, patterns of people,
there's basically two ways to understand what people are doing and thinking.
One is to ask them, and then the other is to watch them.
And like every social scientist, like every sociologist will tell you this,
which basically is you can ask people, right?
And the way you do that, right, is like, you know, surveys, focus groups, polls, you know, what they think.
But then you can watch them, you can do what's called revealed preferences, or just observe behavior,
because you can actually watch their behavior.
And what you often see in many areas of human activity,
including politics and many different aspects of society
and culture over time is the answers that you get
when you ask people are very different
than the answers that you get when you watch them.
And the reason is because, like, I mean,
you can have a bunch of theories as to why this is,
the Marxists claim that people have false consciousness.
The explanation I believe is just that people have opinions
on all kinds of things,
particularly when they're in a context where they get to express themselves.
And they'll have a tendency to kind of express themselves in very heated ways.
And then if you just watch their behavior, they're often a lot calmer and a lot more measured and a lot more rational in what they do.
And so that's playing out on AI right now, which is if you run a survey or a poll of what, for example, American voters think about AI, it's just like they're all in a total panic.
It's like, oh, my God, this is terrible.
This is awful.
It's going to kill all the jobs.
It's going to ruin everything, the whole thing.
If you watch the revealed preferences, they're all using AI.
They're downloading the apps, they're using ChatGPT in their job. They're, you know, having an argument. You see this online all the time now: I'm having an argument with my boyfriend or girlfriend, I don't understand what's happening, I take the text exchange, I cut and paste it into ChatGPT, and I have ChatGPT explain to me what my partner is thinking and tell me how I should answer so that he or she is not mad at me anymore, right? Or, like, you know, I have a skin condition, and doctors, you know, da da da, and I take a photo and I'm finally, like, learning about my own health. Or I use it in my job. Like, you know, I had to get this report ready for Monday morning and I ran out of time, and, you know, ChatGPT really saved my bacon. And so with people in their daily lives, you know, you just look at the data.
It's just like they are not only using this technology, they love this technology.
And they love it and they're adopting it as fast as they possibly can.
And so I tend to think we're going to be stuck with a public discussion that's going to ping-pong back and forth for a while, because there is this divergence between what people are saying and what people are doing. But I do think the what-people-are-doing part is obviously the part that ultimately wins.
And I think this, by the way,
I think this technology is going to be exactly the same
as every other one, which is the thing that's going to happen here
is this is just going to proliferate really broadly.
It's going to freak everybody out.
And then, you know, 20 years from now,
everybody's going to be like, oh, thank God we've got it.
Like, wouldn't life be miserable if we didn't have this?
Or, you know, five years from now or one year from now,
you know, people are going to reach that conclusion.
So I'm very optimistic about where this lands.
It's just that, you know, there will be turbulence along the way.
I'm smiling because I also witnessed that in the wild. Literally late last
week, I was on the plane. The guy next to me was talking to his ChatGPT. I could see him.
And he was like, help me draft an escalation letter to United for the delay on this flight.
I was like, sir, you are on the flight right now. Like, at least wait until it's over.
It was very good, though. I'm sure he had a great email crafted as a part of that.
So, okay, I'm going to switch gears to a few fun questions that were sent in. This is intended to be a lightning round.
So what is something you've changed your mind on recently,
bonus points, if it was someone younger than you?
I mean, it's like every day.
It's just like, it's just a constant, you know,
it's almost all like what's in the realm of the possible.
I'm terrible with specific examples,
so I don't have one like ready at hand.
But like I said, it's often somebody showing up, either something somebody writes or something somebody says. And, yeah,
it's very frequently somebody who's very young.
And, yeah, it's just, like, I would say it's a routine experience.
Good way to stay young.
Do you plan, speaking of you, do you plan to be cryogenically frozen?
Not with current, not with current cryogenic technology.
The track record of that is not great.
And the stories are somewhat horrifying, but, you know, we'll see.
We'll see.
We've still got some time.
How do you stay grounded when your influence itself may distort reality around you?
Yeah, so I would say the good news is on several fronts. So one is, look, the concern is real, and it's hard for me to talk about with my sort of Midwestern, you know... Midwesterners, we either are very humble or we're really good at faking it. But, you know, it's hard to talk about, but it requires some introspection.
But, yeah, I mean, look, the reality warping effect
is definitely real.
By the way, there is a very big advantage
to the reality warping effect,
which is being able to get people
to do what you want them to do.
So, you know, there is another side to it. But, you know, it is a concern in terms of having an accurate understanding of what's happening. I guess I'd say two things. One is, you know, my partners, I think, including Ben, are quite forthright in telling me when I'm wrong. But, you know, more generally, we are very exposed to reality. And again, you know, you mentioned, I don't know, the way to stay younger, make sure that our hair never goes back or whatever, it's just like, you know, we run these experiments, because we make these decisions about whether to invest or not invest, and we work with these companies and all their things. And like, you know, reality
kicks in quickly. You know, the delusions don't last very long in this business because, like,
you know, these things either work or they don't. And, you know, you know, you have these like
long, elaborate, you know, discussions about, you know, theories on this and that and the other
thing. And then reality just like completely smacks you square in the face. You know, like, you idiot, right?
You know, like, you know, this is like the ultimate frustration of business, which is also
very motivating, which is the number of times that you think that you've applied superior
analysis. And then you've either invested or not invested based on that analysis. And it turns
out it was just, your analysis is just completely wrong. Right. And, you know, you just like
completely overrated your ability to epistemically, you know, kind of analyze these things. You just,
you know, basically inflicted harm. Like I always, you know, the question is always, you know,
any activity that we do, is it value add or is it actually value subtract? Right. And I think in this business, and all businesses, it's kind of like that. And that applies to all of my own
contributions as well. So there is that. And then I would say, you know, maybe this final
thing is just like, I do have the entire internet ready to tell me that I'm an idiot. So that also doesn't hurt. And it does, on a regular basis.
On the point you were alluding to earlier about decisions on investing in companies, my favorite line, I think it was from the Cheeky Pint interview that you did, was, you know, when you pass on a company and it doesn't go well, at least it goes bankrupt.
Right? If it does well, and it does fantastically well, you hear about it every single fucking day. For the rest of your life, yeah, for the next 30 years, reality smacking you in the face saying, you fool. You had it, it's literally, you had it in your office. All you had to do was say yes. And by the way, this is the thing, these are the stories that, you know, the VCs tell each other.
Every great VC basically has this history of like, my God.
I had it. It was in my office. The thing was in my office and I said no, when I should have just said yes.
And so, yeah, there are the constant reminders in the Wall Street Journal
and on CNBC every day that you made a giant mistake.
Yes, very good, very good for the old humility factor.
Yeah, very humbling. Helps you stay grounded all the time.
Last question. Do you plan to go to Mars if and when that opportunity presents itself?
Probably not.
My subliminal Zoom background wasn't sending you the positive vibes.
Well, I'm not even willing to leave California.
I'm barely willing to leave my house.
So, yeah, maybe by VR.
And then we'll see what happens.
I mean, look, having said that, I think Elon's going to pull it off.
And so I think, you know, I don't know.
I don't know, I don't want to predict.
This is not a prediction.
But, you know, I would not be surprised if within a decade there are routine trips back and forth.
So, yeah, this may actually become a practical question.
And by the way, I do know a lot of people who are probably
going to go.
Myself included, put me on that list.
Oh, fantastic.
The flights around the world have prepared me for the
six-month journey to Mars, so I
will be just fine.
Thanks for listening to this episode
of the A16Z podcast.
If you liked this episode, be sure to like, comment, subscribe, leave us a rating and review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any
A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies
discussed in this podcast.
For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
