a16z Podcast - Ben Horowitz: Why Open Source AI Will Determine America's Future
Episode Date: November 27, 2025

Ben Horowitz reveals why the US already lost the AI culture war to China, and it wasn't the technology that failed. While Biden's team played Manhattan Project with closed models, Chinese developers quietly captured the open-source heartbeat of global AI through DeepSeek, now running inside every major US company and university lab. The kicker: Google and OpenAI employ so many Chinese nationals that keeping secrets was always a delusion, but the policy locked American innovation behind walls while handing cultural dominance to Beijing's weights: the encoded values that will shape how billions of devices interpret everything from Tiananmen Square to free speech.

Resources:
Follow Ben Horowitz on X: https://x.com/bhorowitz
Follow Costis Maglaras on X: https://x.com/Columbia_Biz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Transcript
The biggest mistake people make on culture is they think of it as this very abstract thing.
And my favorite quote on this is from the samurai, from Bushido, where they say,
look, culture is not a set of beliefs, it's a set of actions.
So the way for the U.S. to compete is the way the U.S. always competes.
We're an open society, which means everybody can contribute.
Everybody can work on things.
We're not top-down.
And the way to get everybody to work on things is to have the technology be open and give everybody a shot at it.
and then that's how we're competitive.
I think when you have new technology,
it's easy for policymakers to make really obvious,
ridiculous mistakes that end up being super harmful.
Today on the podcast, we're sharing a conversation
from Columbia Business School with a16z co-founder Ben Horowitz.
Ben is a Columbia College alum from the class of '88
and joined Dean Costis Maglaras
for a discussion on AI, culture, and leadership in times of disruption.
They cover how open source
AI and blockchain could shape the global race for technological leadership and why culture,
not just strategy, determines which companies thrive through disruption. Let's get into it.
What a wonderful, wonderful way to start the semester by inviting an incredible leader and an alum
of the college, Ben Horowitz, to join us to talk about sort of a variety of things. So I'm not going to
spend time introducing Ben, but I'm going to share a small anecdote, because Ben ran a company
in the Bay Area from the late '90s until the mid-2000s that I, without knowing Ben back then,
visited with a bunch of MBA students, I think in '99 or 2000, as part of our Silicon Valley
trip back in January. The name of the company was LoudCloud, which was the second company
you worked at after Netscape. Yeah. So he has seen the entire trajectory
of both the internet era and Silicon Valley, and I guess around the late 2000s, you started
Andreessen Horowitz with your partner, and it has been sort of one of the leading venture capital
firms.
So I want to start by talking about AI.
We're going to talk about venture capital.
We're going to talk about leadership and the types of teams and people that you look for.
But I was reading this morning about Anthropic closing its latest round at a $183 billion
valuation, which speaks a little bit about AI, and speaks also a little bit about how venture
capital has changed, because that's a private company that is approaching a $200 billion valuation.
Incredible growth, incredible changing capabilities.
Where do you think we are now in that AI cycle?
And you're a war veteran from the 2000s, so in some sense, maybe you can give us your insight
about that and then launch from there.
Well, I think we're early in the cycle in the sense that we just got the technology working like four years ago.
So if you think about technology cycles, they tend to run 25-year sort of arcs.
So we're really, really early on.
I think there is a question now of how big is the next set of breakthroughs compared to the last set.
So if you look at, you could call, like, gradient descent a 10-out-of-10 breakthrough, and then the transformer and reinforcement learning maybe 8-out-of-10 breakthroughs.
Is there another 10-out-of-10 breakthrough, or even an 8-out-of-10 breakthrough, on the horizon? We haven't seen it yet, so we'll see. There are certainly companies kind of working at that.
And so the big thing is: is there another kind of big discontinuous change in, I'll just call it probabilistic computing, since AI tends to freak people out, or are we just going to keep kind of building on the breakthroughs that we've had to date? That's, I would say, an open question right now.
When you think about adoption and disruption in the economy, how far out do you think that is going to be, and what sectors do you think may start getting affected? Large sectors, big corporates?
I think it's kind of like both overrated and underrated in terms of the dislocation. So if you look at the long arc of
automation, going back to the 1750s when everybody worked in agriculture, like nobody from 1750
would think any of the jobs that we have now make any sense. They're all ridiculous, like
completely frivolous ideas. And so it's not clear like what the jobs will be kind of in the next
30, 40 years. But like I said, the jobs that we have now were unimaginable. Nobody would think
somebody doing graphic design or even certainly being like a marketing executive where that was
like an actual job that makes any sense at all. So, you know, we'll see on that. And then the other
thing is, you're speaking to an MBA crowd. If you think about computers, so like deterministic computers,
what we've had since the kind of 40s and 50s, obviously a lot of things have changed. And like many,
many, many jobs are gone because of it. But it was much more gradual than I think
people would have thought it would be when it happened. And like some of the changes, like the
whole private equity industry was created because of the spreadsheet because it turned out that
like a huge portion of every business was just, like, people manually calculating what you'd
calculate in a spreadsheet and a model. So basically private equity companies were like, oh, we use a
spreadsheet, take that company over and then get all the money out and so forth. And so that created
that whole industry. But nobody would have put that together in advance.
It's just like weird side effects of the tech.
And I think what we're seeing in AI is it's kind of starting to automate the mundane
and then move to kind of over time, you know, maybe it will eliminate that job.
But the job is kind of morphing as it goes.
So I give you an example.
So my business partner, Mark and I had dinner with a kind of fairly famous person in Hollywood
who's making a movie now and basically half the movie is AI.
But the way that's working is they're taking an open-source model.
By the way, the open-source video models are getting very, very good.
And normally in Hollywood, when you shoot dailies,
there are many dailies; you might shoot a scene like 10 or 20 times.
Now they'll shoot it like three times and have the AI generate the other like 17 takes.
And it's indistinguishable.
So it kind of really improves the economics of the current movie-making industry,
which have gotten extremely difficult with the way distribution has changed,
and it's going to make it much easier for many more people to make movies.
But I think that the way Hollywood would view AI right now is that it's just taking all the jobs, right?
Like it's just going to write all the movies, make all the movies.
I think that's not going to happen.
It's just not going to be that way.
It will change.
I think there will be a new medium that's different than movies,
the way movies were different than plays using the technology.
So things are going to change.
I think it's going to affect
every single sector, but not in ways that you would easily anticipate.
By the way, every writer in Hollywood is already using AI to help them write dialogue that
they don't feel like writing and all that kind of thing.
So that's already going on.
But that hasn't eliminated those positions.
It's just kind of enabling them to kind of work faster and better.
You mentioned open source.
Where do you fall into that?
I mean, I think I know where you guys fall into the spectrum, or maybe you can tell us a little
bit about your thinking about open source and perhaps also talk about US China and competition
in AI in that context. Yeah, so, well, with open source, so in AI, there's kind of open-sourcing the
algorithm, which is like not that big a deal, but then open weights is kind of the bigger thing, because
then you've trained the model and it's encoded in the weights. And in that encoding, there's
kind of the quality of the model, but also the subtle things like the values of the model,
like the model's interpretation of history,
the model's interpretation of culture, human rights,
all those kinds of things are in the weights.
So think about the impact of open source.
The control layer of every single kind of thing, every device in the world, is going to be AI,
right?
Like, you're going to be able to talk to it.
What those weights are matters in terms of the kind of global culture of the world
and, you know, how people
think about everything from race issues to political issues, to free speech, to Tiananmen Square,
what actually happened, that kind of thing, is all encoded in the weights.
And so whoever has a dominant open source model has a big impact on the way global society
ends up evolving.
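To make the open-weights point concrete, here's a minimal sketch of what it means in practice: anyone can pull a published checkpoint and run it locally, so the model's behavior, values and all, ships with the weights rather than sitting behind an API. This is an illustrative sketch using the Hugging Face transformers library; the exact checkpoint id is an assumed example, not something named in the episode.

```python
# Illustrative sketch: running an open-weights model locally with Hugging Face
# transformers. The checkpoint id below is an assumed example; substitute any
# published open-weights model. Whatever is encoded in those weights (facts,
# values, interpretations) comes along with the download.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "deepseek-ai/deepseek-llm-7b-chat"  # hypothetical/example checkpoint id

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Briefly explain why open-source software matters for research."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```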
Right now, so kind of a combination of things happened at the beginning of AI. One, just
the way the U.S. companies evolved in conjunction
with U.S. policy. So the U.S. policy under the Biden administration was very anti-open-source,
and so the U.S. companies ended up being all closed source. And the dominant open source models
are now from China, DeepSeek being the one that I would say most, not only U.S. companies use,
but also basically everyone in academia uses DeepSeek and Chinese open-source models,
not U.S. open-source models. So we've certainly, I think, lost the lead on open source to China.
And, you know, OpenAI open-sourced their last model.
The problem with going from proprietary to open source is it doesn't have, what do you call it,
the vibes?
So open source is very vibe-oriented and the community and the way developers think and so forth.
So if something evolves in open source, it ends up being a little different than if it doesn't.
So I think it's really important.
I think that the reason the Biden administration didn't want the products to be open source was,
well, the rationale, let me describe the rationale, and then I'll say why it was delusional.
The rationale was, okay, we have a lead in AI over China.
I don't know, we had all these pseudo-smart people running around saying we have a two-year lead and a three-year lead.
Like, I don't know how you would know that.
But they were wrong, it turns out.
And that this was like the Manhattan Project and we had to keep the AI a super secret.
Now, it's delusional on several fronts.
One, obviously, Chinese AI is really good.
And their open source models are actually ahead of ours.
So we don't have a lead.
But the kind of dumber thing about it was,
like, if you go into Google or OpenAI or any of these places,
do you know how many Chinese nationals work for Google and OpenAI?
Like a lot.
And you think the Chinese government doesn't have access to any of them?
Come on.
And you think there's security?
There are no SCIFs there.
All that stuff's getting stolen anyway.
Let's be serious.
There is no information that companies in the U.S. are really locking down.
So the way for the U.S. to compete is the way the U.S. always competes.
We're an open society, which means everybody can contribute.
Everybody can work on things.
We're not top down.
And the way to get everybody to work on things is to have the technology be open and give
everybody a shot at it.
And then that's how we're competitive, not by keeping everything a secret.
We're actually the opposite of that.
We're terrible at keeping secrets.
And so we have to go to our strengths.
And so that's just a dumb mistake.
But I think when you have new technology, it's easy.
for policymakers to make really obvious, ridiculous mistakes that end up being super harmful.
And so we have to be careful here.
So when thinking about AI and national security, are you concerned about that?
Well, I think there's a real concern on AI and national security,
but it's not in terms of keeping the AI a secret because we can't.
Look, if that's a viable strategy, then great.
But it's not a viable strategy.
like we'd have to reshape the entire way society works.
And by the way, even on the Manhattan Project,
like, the Russians got all the nuclear secrets.
They got everything, including, like, the most secret part,
which was the trigger mechanism on how to set off the bomb.
They got all of that.
And so even then, with no Internet,
with the whole thing locked down,
with it in a secret space, and all that kind of thing,
we couldn't keep it a secret.
So, like, in the age of the Internet,
and, like, by the way, China is really good
at spying. This is one of the reasons why there's so much tension between the two countries.
It's almost like a national pride thing to be good at spying in China. So they're really good at it
and like we're really bad at defending against it. So like that just is what it is. Now, having said
that, all of defense like war is going to be very, very AI-based. We've already seen this
in the Ukraine with the drones and so forth. But like robot soldiers, autonomous submarines,
autonomous drones, all that stuff is basically here. And so the whole nature
of warfare, I think, is changing. And we have to take that very, very seriously. But I think
that means competing in AI. And the best thing for the world is that not one country has the
AI to rule them all. That's the worst scenario where, like, anybody is too powerful. I think
a balance of power in AI is good, which is why open source is good, which is why us developing
the technology as fast as we can is important. It's why the private sector integrating
with the government in the U.S. is important.
China is much better at that than we are, so we have to get better.
But keeping things a secret, I don't think is going to work.
I mean, I actually don't even think keeping the chips to ourselves is going to work.
Like so far, we thought, okay, if we stop the export of NVIDIA chips to China,
that will stop them from building powerful models.
It really hasn't.
So, you know, like a lot of these ideas just end up retarding the growth of the U.S.
technology and the industry, as opposed to doing anything for national security.
You mentioned the previous administration, and we talked about their attitude.
I want to ask you a question about regulation.
I've had so many conversations with European leaders about that.
Maybe you do as well.
Sorry.
I shouldn't laugh.
Yeah.
And why don't you share your thinking a little bit about the American situation
and sort of the global situation?
Yeah, so it's funny.
Every panel I've been on or like kind of time
I've been at a conference with like European leaders,
they always say that whether they're in the press or industry
or the regulatory bodies, they say the same thing.
Well, Europe may not be the leaders in innovation,
but we're the leaders in regulation.
And I'm like, you realize you're saying the same thing.
So Europe kind of went down this path,
which is known as the
kind of precautionary principle in terms of regulation, which means you don't just regulate
things that are known to be harmful. You try and anticipate with the technology anything that
might go wrong. And this is, I think, a very dangerous principle because if you think about
it, we would never have released the automobile. We'd never have released any technology. I think,
you know, it started in the nuclear era, and, you know, one could argue that we had the
answer to the kind of climate issues in 1973. And if we would have just built out nuclear power
instead of burning oil and coal, we would have been in much better shape. And if you look at the safety
record of nuclear, it's much better than oil, where people blow up on oil rigs all the time.
And I think more people are killed every year in the oil business than have been killed in
the history of nuclear power.
So, you know, these regulatory things have impact.
In the case of AI, there is kind of several categories that people are talking about regulating.
So there's kind of the speech things, like can you, and Europe is very big on this, can
it say hateful things, can we, you know, can the AI say political views that we disagree with,
this kind of thing. So very similar to social media and kind of that category of things. And
do we need to stop the AI from doing that? And then there's kind of another section which is,
okay, can it tell you instructions to make a bomb or a bio weapon or that kind of thing. And then
there's, you know, another kind of regulatory category, which is I think the one that, you know,
most people, like, use this argument to kind of get their way on the other things is,
well, what if the AI becomes sentient, you know, and, like, turns into the Terminator?
We got to stop that now.
Or, like, kind of the related one, which is kind of a little more technologically believable,
but not exactly, is takeoff.
Have you heard of this thing, takeoff?
So takeoff is the idea that, okay, the AI learns how to improve
itself, and then it improves itself so fast that it just goes crazy and becomes a super brain
and decides to kill all the people to get itself more electricity and stuff, kind of like
The Matrix. Okay, so let me see if I can deal with these. And then there's another one which is around
copyright, which is important, but probably not on everybody's mind as much. So if you look at
the technology, the way to think about it is there's the foundation,
the foundation models themselves.
And it's important, by the way, that, you know,
everybody who works on this stuff
calls it models and not like, you know,
AI intelligence and so forth.
And there's a reason for that,
because what it is is a mathematical model
that can predict things.
So it's a giant version of kind of the mathematical models
that you all kind of study to do basic things.
So if you want to calculate, you know, when Galileo dropped the cannonball off the Tower of Pisa,
you know, you drop it off the first floor and the second floor,
but then you could write like a math equation to figure out what happens
when you drop it off like the 12th floor, you know, how fast does it fall.
So that's a model with, you know, maybe like a couple of variables.
So think then what if you had a model with 200 billion variables?
That's an AI model.
And then you can predict things like, okay, what word should I write next if I'm writing an essay on this?
Like, you can predict that.
And that's what's going on.
So it's math.
And inside, the model is just doing a lot of matrix multiplication, you know, linear algebra, that kind of thing.
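To illustrate the "it's just math" point, here's a toy, purely illustrative sketch: a "model" reduced to a matrix of numbers that, given a word, produces a probability for each possible next word via one matrix multiplication and a softmax. The vocabulary and weights below are made up; a real model does the same kind of arithmetic at the scale of hundreds of billions of learned parameters.

```python
# Toy illustration: next-word prediction as matrix multiplication.
# Vocabulary and weights are invented for illustration only.
import numpy as np

vocab = ["the", "cannonball", "falls", "fast", "slowly"]
rng = np.random.default_rng(0)

embed = rng.normal(size=(len(vocab), 8))  # each word -> an 8-number vector
W = rng.normal(size=(8, len(vocab)))      # the "model": a matrix of weights

def next_word_probs(word: str) -> dict:
    """One matrix multiply, then a softmax, gives a probability per next word."""
    logits = embed[vocab.index(word)] @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(vocab, probs.round(3)))

print(next_word_probs("cannonball"))
```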
So you can regulate the model or you can regulate the applications on the model.
So I think when we're talking about publishing a bio, how to make a bioweapon or how to make a bomb or that kind of thing, that's already illegal.
And the AI shouldn't get a pass on that because it's AI.
So if you build an application like ChatGPT that publishes the rules of making a bomb, like, you ought to go to jail.
Like that should not be allowed.
And that's not allowed.
And I think that falls under regular law.
Then the question is, okay, do you need to regulate the model itself?
And the challenge with regulating the model is, basically,
the regulations are all of the form:
you can do math, but not too much math.
Like, if you do too much math, we're going to throw you in jail.
But if you do just this much math, it's okay.
And like how much math is too much math?
And look, the problem in that thinking is when you talk about sentient AI
or takeoff,
you're talking about sort of thought experiments
that nobody knows how to build.
And I think there are very good arguments,
you know, and we do know how to reason
about these systems, that, like, takeoff is not going to happen.
And that, like, we have no idea how to make takeoff happen.
And so it's kind of one of these things like,
well, the laws of physics,
I can do a thought experiment that says,
you know, if you travel faster than the speed of light,
you can go backwards in time.
So do we now need to regulate
time travel and outlaw whole branches of physics in order to stop people from traveling back
in time and changing the future or changing the present and screwing everything up for us.
That's probably too aggressive.
And like we're really getting into that territory when we talk about sentient AI.
Like we don't even know what makes people sentient.
Like we literally don't.
You know who knows the most about consciousness?
Anesthesiologists because they know how to turn it off.
But that's like the extent of what we know about consciousness.
So like we definitely don't know how to build it.
And we definitely haven't built it today.
Like there's no AI that's conscious or has free will or any of these things.
And so when you get into regulating those kinds of ideas,
and I'm not saying that AI can't be used to improve AI.
It absolutely can.
But computers have been improving computers for like since we started them.
But that's different than takeoff,
because takeoff requires a verification step that nobody knows how to do.
And so, like, you get into, you know,
you get into very, very theoretical cases,
and then you write a law that prevents you from competing with China at all,
and that gets very dangerous.
And so I just say, like, we have to be really, really smart
about how we think about regulation and how that goes.
Copyright is another one.
So copyright: should you be allowed to have an AI, like,
listen to all the music and then, like,
reproduce Michael Jackson?
Definitely that's got to be illegal,
because that's a clear violation of copyright.
But then can you
let it, like, read a bunch of
stuff that's copyrighted and create a statistical
model to make the AI better,
but not be able to reproduce it?
Well,
that gets
very tricky if you don't allow that,
because, by the way, that's what people do, right?
Like, you read a lot of stuff and then you write something,
and it's affected by all the stuff you read.
And by the way, like, competitively with China, they're absolutely able to do that.
And you, you know, the amount of data you train on dramatically improves the quality of the model.
And so you're going to have worse models if you don't allow that.
So there's, you know, that's a trickier one.
But this is where you have to be very careful with regulation to not kill the competitiveness while not actually gaining
any safety. And so that's, you know, that's a big debate right now and it's something we're
working on a lot. Let me ask you one question, and then I want to move on to crypto and venture and
leadership. But you mentioned machines building machines, and I think of a colleague of
mine who is a roboticist. What are you thinking about physical or embodied AI, and are you guys
invested in that? Do you think that that's something that's going to be big over the next
10, 20, 30 years? How do you feel about that? Yeah, no, no. I definitely think it's going to be
big and it's going to be very important. The biggest industry is
probably going to be robotics. It's going to be super important. I don't think there's any
question. I think it's further away than anybody is saying.
So if you think about like the full humanoid robot, well, just to give you kind of a time scale idea.
So in 2006, I think, Sebastian Thrun won the DARPA challenge and had an autonomous car drive across the country.
And now in 2025, we're just getting like the Waymo cars and things that you can put on the road.
So 19 years to kind of solve that problem.
And why did it take so long?
And by the way, the self-driving robot problem
is a much easier problem than the humanoid robot problem
because the data is primarily two-dimensional,
and then we had all the map data already and so forth.
So, you know, it was like a lot easier to get there.
If you think about the robot data, it's many more dimensions.
Like, you know, the difference between picking up a glass
and picking up a shirt is very different,
or an egg. So there's all these subtleties to it. And then with self-driving, like the thing that,
you know, if you look between 2012 and 2025, say, what took so long, it was the, it turns out
that the universe is very fat-tailed and human behavior is very fat-tailed. And so, like in working
with the Waymo team, you know, the things that were extremely hard to deal with were like somebody
driving 75 in a 25 zone or, like, somebody just running out in the middle of the street for no reason
or that kind of thing. It was very, very difficult to make the car safe around those kinds of
use cases because they just weren't in the data set. And then if you think about like robots,
you know, we don't have any data on that. And you don't get the data from video because you have to
pick stuff up, you have to do things and so forth. And then these humanoid robots are like,
they're extremely heavy.
You know, the battery problem is hard.
And the models, the models that we have,
so to just feed an LLM enough data
until it can, like, drive a robot,
you can tell that hasn't been working yet.
And so then there's a question,
do you need another kind of model?
There are a lot of people working on
so-called real-world models.
Fei-Fei Li's got a new company doing that
called World Labs, and so forth.
But it's going to take a while to get there.
And you can tell in the video models
that they're not suited for robots
because you can't do things like move the camera angle
because it doesn't understand what's in the picture.
And that's okay for a video.
It's not okay for a robot.
So there's going to be a lot of things
that we have to do before we get to robots.
But, you know, those things are,
you know, there's certainly a lot of effort going on to it.
And in terms of a U.S. competitive space,
like probably the most worrisome thing right now
is the entire robot supply chain currently is in China.
So, like, every robot supply chain company is either China-based,
or, I think there's one in Germany, but it's all founded by Chinese nationals.
I think it was both.
You know, just from like a strategic, okay, do you get your supply chain cut off kind of thing?
That's something that, you know, we probably have to work on.
And, you know, it's not the most complicated thing to build the supply chain,
but it's, you know, something that if we don't do,
we're going to be in the same situation that we're in with rare earth minerals
and, you know, chips and these kinds of things.
A quick question about crypto before we talk about people.
All right.
We're going.
Yeah, crypto is changing quite a bit,
and, you know, a lot of momentum in the last year or so.
How do you feel about crypto and blockchain applications?
And do you envision that over the next five years,
we may start to see technology being applied in other areas,
apart from where it is right now?
Yeah, so, you know, crypto's a super interesting kind of technology.
And probably if Satoshi Nakamoto,
wasn't a pseudonymous person who nobody knows who he is.
He probably won the Nobel Prize for mathematics and economics on the Bitcoin paper.
So it's a very interesting and powerful technology.
I think that the way to think about it in the context of AI is,
if you look at the evolution of computing,
it's always been in kind of like two pillars.
One is computers and the other is networks.
So starting with like microwaves and integrated circuits
and going to mainframes and SNA, to PCs and LANs,
to the internet and the smartphone,
they always kind of, they're very different technology bases,
but one without the other is never nearly as powerful.
And if you think about AI, what is a network that AI needs?
So first of all, in order for AI to be really valuable,
it has to be an economic actor.
So AI agents have to be able to buy things,
have to be able to get money, that kind of thing.
And if you're an AI, you're not allowed to have a credit card.
You have to be a person.
You have to have a bank account.
You have to have a Social Security number, all these kinds of things.
So credit cards don't work as money for AI.
So the logical thing, the Internet native money is crypto.
It's a bearer instrument.
You can use it and so forth.
And we've already actually seen like new AI banks that are crypto-based
where AIs can kind of get KYC'd and that kind of stuff.
That's called, sorry, know-your-customer,
anti-money laundering laws, these kinds of things.
So crypto is kind of like the economic network for AI.
It's also, you know, if you think about things like bots,
how do you know something's a human?
Crypto's the answer for like proving that you're a human being.
Crypto turns out to be the answer
for provenance.
So, like, is this a deep fake?
Like, is this really me?
Or is this, like, a fake video of me?
How do I verify that it's actually me?
And then if I verify that it's actually me,
where should that registry of truth live?
Should we trust the U.S. government on what's true?
Should we trust Google on what's true?
Or should we trust the game-theoretic, mathematical properties
of a blockchain
on what's true? So it's like a very kind of valuable piece of infrastructure. And then, you know, finally,
if you think about, you know, one of the things that AI is best at, and probably the biggest
security risk that nobody talks about is just breaking into stuff. It's like really, really good.
And not just like, you know, breaking into things technologically, but also like social engineering
and that kind of stuff, it's amazing. And so the current architectures of, you know, where your data is
and where your information is and where your money is,
is kind of not well suited for an AI world.
They're just giant honeypots of stuff to steal
for somebody who uses the AI.
And the right architectural answer to that
is a public key infrastructure
where you keep your data yourself
and then you deliver a zero-knowledge proof,
yes, I'm credit-worthy,
but you don't have to see my bank account information
to know that I'm credit-worthy.
I'm not going to give you that.
I'm just going to prove to you that I am.
And that's a crypto solution. So it ends up being like a very, very interesting technology in an
AI world, and I think that's where a lot of the new developments are going to be.
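To make the provenance point above concrete, here's a minimal sketch, assuming the Python `cryptography` package: sign a hash of a file with a private key, and anyone holding the matching public key can check that the file really came from that key holder and wasn't altered. Where the public key and signature get registered (a blockchain, per the discussion) is a separate choice; the "video" contents here are a placeholder.

```python
# Minimal provenance sketch: sign a file's hash with a private key; verify with
# the matching public key. Uses the `cryptography` package; the "video" bytes
# are a placeholder for illustration.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()               # this is what you'd publish

video_bytes = b"...contents of some video file..."  # placeholder content
digest = hashlib.sha256(video_bytes).digest()
signature = private_key.sign(digest)                # attach this to the video

# Later: a viewer re-hashes the file and checks it against the published
# public key and signature.
try:
    public_key.verify(signature, hashlib.sha256(video_bytes).digest())
    print("Signature valid: this file really came from the key holder.")
except InvalidSignature:
    print("Signature invalid: altered file or wrong author.")
```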
Good. Two, three more questions, and then we'll open it up a little bit to the audience.
We started by talking about Anthropic at $183 billion; OpenAI may be closing at half a trillion.
What's happening with the venture capital industry, and are the traditional models changing for, you know, seed, et cetera? Why have we seen that sort of change? I would say we started seeing it with the Ubers and Airbnbs, and now it has gone even further.
Yeah, so I think what's happened is, and this is another regulatory thing, you know,
no good deed goes unpunished, I would say.
So if you go back to the 90s,
it shows you how old I am,
if you go back to the 90s,
in those days, like companies went public,
Amazon went public, you know,
I think with a $300 million valuation.
When we went public at Netscape,
you know, the quarter prior to when we went public,
we had $10 million in revenue and so forth.
And then what happened was kind of a series
of regulatory steps, and, you know, some of them are so obscure that you'd never know
about them: things like order handling rules, decimalization, Reg FD, just like a series of
regulatory things, Sarbanes-Oxley, many of which came after, like, the great dot-com crash and telecom
crash. And the result of those things is that going public became very, very
onerous and very difficult. So you couldn't, you definitely couldn't do it, you know, at a $300 million
valuation because one, like the cost of being public just in terms of like lawyers, accounting,
D&O insurance and so forth was so high, it would be a massive percentage of your revenue. So that's
thing one. Then secondly, because of the way
that particularly things like Reg FD changed,
there's this kind of asymmetric situation
between the company and the short sellers.
So the short sellers became much, much more powerful
because they were able to do things
to manipulate the stock where a company
could no longer defend itself
in the way it used to be able to defend itself.
And so that made it more dangerous.
And then, of course, you get sued like crazy.
So all that happened and made kind of
companies stay private longer. And then the result of companies staying private longer was that the
private market, capital markets massively developed. So all of these huge money pools started
putting money into the private markets. And so what does that mean? Well, it means that,
okay, look, if Open AI can raise $30 billion in the private markets, what is the value of being
public? That you can get sued more, that you have to do an earnings call every quarter, right?
Like these things, you know, the tradeoff becomes a bad trade to go public.
And that's kind of where we are today.
I think, look, for the good of the country, the best answer is we fix the public markets.
But in the meanwhile, what's happened is the, you know, as a venture capital firm,
you kind of have to expand your capabilities all the way up into the very, very high end of the markets
and really kind of take over a lot of the role that investment banks previously
had. And, you know, that's just, you know, kind of been what's happened. We'll see where it
goes. I think, you know, right now it's on the train to continue. I think the other underlying thing
in your question is how in the hell is Anthropic worth so much money? And look, I think that the
answer to that is, like, these products, the biggest takeaway from the AI products is how
well they work. So, you know, OpenAI went to $10 billion in revenue
like in four years, which, like, we've never seen anything like that.
And when you look at that, you say, well, why is that?
And it's like, well, how well does ChatGPT work?
Like, it works awesome.
Like, way better than other technology products you bought in the past.
Like, the stuff really works.
Cursor works, unbelievable.
And so I think that, because the products work so much better than anything
that we've had in the past, they grow much faster,
and as a result of them growing much faster,
the valuations grow much faster.
But the numbers are there to justify the valuations
in a way that in the dot-com era, they weren't.
So it's a different phenomenon.
Now, like, in AI land,
if there's another big breakthrough in AI,
then somebody could get dramatically better products
and then the valuations aren't sustainable and so forth.
But that's very theoretical compared to,
I could go on for days about what exactly happened during the dot-com era,
but this isn't the same.
You know, it may have issues, but they're not the same issues.
So there were at least two students that brought books of yours for you to sign when we stepped in.
So you wrote the book, The Hard Thing About Hard Things.
Yep, ain't nothing easy.
Yeah.
And for the many MBA students in the audience,
you know, what's one of the sort of counterintuitive hard things that you think about
and people need to know about?
And there are so many things.
Actually, somebody asked, you know, my friend Ali Ghodsi, who runs Databricks, brought up like a couple
days ago, he was like, you know, one of the best things you told me was, you know,
I can't develop my people,
which I thought was like, oh, wow, I said that.
But I actually had written a post on it, and it's a kind of a CEO thing that's not true for managers.
And let me explain to you what I mean by that.
So, you know, when I was, if you're a manager and like you're a product manager, like an engineering manager, or this kind of thing, you know exactly how to do the job that you hire people into.
And so you can develop them.
can train them. You can teach them to be a better engineer, a better engineering manager,
or a better, you know, accountant or whatever it is. But as CEO, you know, you're hiring like a
CFO, a head of HR, a head of market. You probably don't know how to do any of those jobs.
So like if they're not doing a good job, like you're spending your time developing, you don't
know how to do that job. Like, what are you doing? And the bigger problem is you're,
now distract. One, you're not going to improve them because you don't know what you're doing.
And then secondly, you're taking time away from what you need to be doing. If you think about
what the CEO needs to do, you have to set the direction for the company. They've got to articulate that.
They've got to make sure the company is organized properly. They've got to make sure the best people
are in place. They've got to, they have to make decisions that only they can make. And if they
don't make them, then the entire company slows down. So if you're not doing that and trying to
develop someone who you have no idea how to develop, that's just a huge mistake.
And it was a very sad lesson for me.
In fact, I wrote a post on it called The Sad Truth about Developing Executives.
And I think the rap quote that I used was Weezy.
And it was, the truth is hard to swallow and hard to say, too, now I graduated from that
bullshit and I hate school.
And that's how I feel about that lesson.
I just hate the fact that I learned that, but it's very true.
In another book that you wrote, What You Do Is Who You Are, you focus on culture.
This is something that we speak a lot about here as well.
But what's in some sense some of the things that people need to be thinking about,
how they set culture, how do they influence culture within their organizations, the importance of
that, and how you have actually put it to work in your own organization.
Yes, I think that the biggest mistake people make on culture is they think of it as this
very abstract thing. And my favorite quote on this is from the samurai, from Bushido,
where they say, like, culture is not a set of beliefs, it's a set of actions. And when you think
about it in the organizational context, that's where you have to think about it. So, like, you know, people go,
oh, well, our culture is integrity, or we have each other's backs, or this, and it's like, right, like,
everybody can interpret that however they want. And, you know, so your culture is probably hypocrisy
if that's how you define it, because nobody's doing that, you know. And, by the way, like,
the whole thing on these kinds of, you know,
virtues, I would just call them, is, actually, you only break them under stress.
So it's like, how many of you, like, think you're honest, like you're an honest person?
Okay, now think about how many people do you know who you would consider to be totally honest?
Like, I bet it's a way lower percentage than the people who raise their hand.
And why is that?
It's because, honesty, everybody's honest until it's going to cost you something,
right? Oh, are you going to be honest if it's going to cost you your job? Are you going to be honest if
it costs you your marriage? Are you going to be honest in that situation? That's a whole other thing,
right? And so, like, honesty, all the virtues are like that; they're only kind of tested under
stress. And so you can't define, like, the ideal of something you want. You have to define the exact
behavior, like, how do you want people to show up every day? Because culture is a daily thing.
It's not a quarterly thing.
You don't put it in the annual review.
Like, do you follow the culture?
It's like, well, yeah, sure.
I mean, like, who even knows how to evaluate it at that time?
So it's what do you do like every day?
And so what are the, it's, you want to think of what are the behaviors that
indicate the thing that you want?
And so I'll give you one example that we have at the firm. You know,
one of the difficult things to do, that we really wanted to do as a venture capital firm,
is like, let's be very, very respectful
of the people building the companies, the entrepreneurs,
and never kind of make them feel small in any way.
And every venture capital firm would say you want to do that.
But the problem with venture capital is,
I have the money, you have an idea,
you come to me to get the money,
I decide whether you get the money or not.
So, like, if that's my daily thing,
then I might feel like the big person,
and you might, you know, I might want to make you feel
like the small person, you know, like, no, I don't think that's a good idea.
And so, like, how do you stop that?
So, you know, we put a thing, like, I can tell people not to do that,
but, like, there's all this kind of other incentive that's making them do that.
So what I said is, like, if you're ever late to a meeting with an entrepreneur,
one, it's a $10-a-minute fine.
I don't care if you had to go to the bathroom.
I don't care if you're on an important phone call.
Like, you're five minutes late.
You owe me $50 right now, and you pay on the spot.
Why did I do that?
Well, because I want you to think that nothing is more important in your job or in your day than being on time for that meeting with that entrepreneur, because what they're doing is extremely hard, and you have to respect that, and you show respect by showing up on time.
I don't care what your excuse is.
If you were getting married, you wouldn't have to go to the bathroom and be late to the altar.
So, like, I know you can do it.
So like, don't give me that.
And that programs people, right, because every day you're meeting with entrepreneurs, like you know, okay, this is what we're about.
We've got to do that. Similarly, you know, on that, I'm like, look, if somebody wants to do something
larger than themselves and make the world a better place, we're for that. We're dream builders, we're
not dream killers. So if you get on X and say, oh, that's a dumb idea, that, you know, they're
selling dollars for 85 cents, you're fired. Like, that's it, gone. I don't care, because we don't
do that. And so, like, you put in rules that seem maybe absurd, but they set the, call it a cultural
marker, for like, okay, this is who we are. And if you want to come work here, you've got to be like
that. And so that's, you know, that's a little kind of way you think about culture. I wrote a whole
book on it. So you're interested in this. There's many other aspects. But I think that the,
the worst thing you can do is just go have an offsite and like yabidabadoo about like the values
that you all have and write up a bunch of, you know, flowery language about how, you know,
you're like this. Okay. I promise we'll end on time.
So I think we're going to end here. Ben, thank you so much for coming in. Thank you.
Thanks for listening to the a16z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash a16z.
We've got more great conversations coming your way. See you next time.
This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product.
This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z.
Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates.
Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
