a16z Podcast - The State of American AI Policy: From ‘Pause AI’ to ‘Build’
Episode Date: August 15, 2025

a16z General Partners Martin Casado and Anjney Midha join Erik Torenberg to unpack one of the most dramatic shifts in tech policy in recent memory: the move from "pause AI" to "win the AI race." They trace the evolution of U.S. AI policy—from executive orders that chilled innovation, to the recent AI Action Plan that puts scientific progress and open source at the center. The discussion covers how technologists were caught off guard, why open source was wrongly equated to nuclear risk, and what changed the narrative—including China's rapid progress.

The conversation also explores:
How and why the AI discourse got captured by doomerism
What "marginal risk" really means—and why it matters
Why open source AI is not just ideology, but business strategy
How government, academia, and industry are realigning after a fractured few years
The effect of bad legislation—and what comes next

Whether you're a founder, policymaker, or just trying to make sense of AI's regulatory future, this episode breaks it all down.

Timecodes:
0:00 Introduction & Setting the Stage
0:39 The Shift in AI Regulation Discourse
2:10 Historical Context: Tech Waves & Policy
6:39 The Open Source Debate
13:39 The Chilling Effect & Global Competition
15:00 Changing Sentiments on Open Source
21:06 Open Source as Business Strategy
28:50 The AI Action Plan: Reflections & Critique
32:45 Alignment, Marginal Risk, and Policy
41:30 The Future of AI Regulation & Closing Thoughts

Resources:
Find Martin on X: https://x.com/martin_casado
Find Anjney on X: https://x.com/anjneymidha

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
So we've been through all of these tech waves and we've learned how to have this discussion in a way that for the United States interest balances these two things.
And if we're going to make a departure from a posture that was developed over 40 years, you'd better have a pretty damn good reason.
Today, a new frontier of scientific discovery lies before us.
You can sometimes judge a book by its cover.
And I think this is a strong start.
The conversation around AI regulation in the U.S. has changed dramatically.
Just a year ago, the loudest voices were calling to pause or shut down open source
AI. Today, the U.S. is pushing to lead the global race. So what changed? And what does it mean
for innovation, competition, and the future of open source? I'm joined by A16Z general partners
Martin Casado and Anjney Midha to unpack the new AI action plan, the politics behind it,
and the implications for builders and policymakers alike. Let's get into it.
As a reminder, the content here is for informational purposes only, should not be taken as legal,
business, tax, or investment advice or be used to evaluate any investment or security, and is not directed
at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates
may also maintain investments in the companies discussed in this podcast. For more details,
including a link to our investments, please see A16Z.com forward slash disclosures.
We're talking a week or two after the action plan has been announced. Looks like we've come a
long way. Yeah. You guys have been on the front lines for years now in this discourse fighting to
make this possible. Why don't we trace where we've been so that we could then understand
how we got here and where we're going? I mean, under the Biden administration, we had the
executive order, which was basically the opposite of what we're seeing today. I mean, it was trying
to limit innovation. It was doing a bunch of fear mongering. But to me, what was even more striking
was not regulators being regulators. You'd expect that. But if you remember, Ange, and this
is why we got involved, is you'd have these politicians, you know, making recommendations,
which is fine, you'd expect that, but nobody was saying anything. You know, it was like academia
was silent. Right. The startups were silent. And if anything, like, the technologists were kind
of supporting it. So we were in the super backwards world where it was like innovation is bad
or dangerous and we should regulate it. We should pause it. You know, there was this discourse and
it was like somewhat fueled by tech as opposed, you know, and then nobody was going against it.
So I think today we should definitely talk about the action plan.
It's great.
But we should also talk about how the entire industry has kind of come around to say like, listen,
we need to keep these things in check.
We need to be sensible and thinking about it.
I mean, pause AI, that was two years ago.
Remember that the big sort of, you know, all the CEOs signed this petition.
Oh, yeah, I think that was the last AI Action Summit, right?
The one before Paris.
Guys, there's been so many of these.
Yeah, I must try.
I remember, like, what was it, Dan Hendrycks's C-A-I-S?
It was California AI.
The Center for AI Safety.
Center for AI Safety, that's true.
The nonprofit, yeah, yeah, yeah.
And then they got, like, all of these, like, people to sign this list, you know, like,
on why you need to worry about the existential risk of AI.
And, like, that was the mood.
It was almost like... can I just say something by contrast, right?
So I was, you know, there during kind of, like, the early days of the web and the Internet.
And at that time, you actually had examples of the stuff being dangerous, right?
like Robert Morris, like let out the Morris worm.
It took down critical infrastructure.
We had, so we had new types of attacks.
We had viruses.
We had worms.
We had critical infrastructure.
We actually had a different doctrine for the nation.
We said, you know, the more we get on the internet, the more vulnerable we are.
So instead of like mutually assured destruction, we have this notion of asymmetry.
So there was all of these great examples of why should we be concerned.
And what did everybody else do?
Pedal to the metal.
Invest more technology.
This is great.
And so like, you know, we were still at the time, like we wanted the internet.
We wanted to be the best.
We wanted to build it out.
You know, the startups were all over it.
And coming into this AI stuff two years ago,
it was the opposite, which is like there were the concerns
with new technology, which you always have.
But like, there are very few voices that were like,
actually it's really important we invest in this stuff.
And so that's kind of, to me,
the bigger changes is this more cultural change.
I think that's right.
There was a moment in, I think it was last summer,
where somebody sent you and me a link to the SB 1047
bill. And I remember Martin and I reacting like, there's no way this is going to get
any steam. What was absurd to us, I think, was that it made it through the House and the Senate,
and it was on its way to a final vote and would have become law, one signature from the governor
later. And I think there was this escalation where I realized, I think my view is that
technologists like to do technology and politicians like to do policy, and we pretend like these
two things are in different worlds. And as long as these two worlds don't collide and the
engineers get to, like, build interesting tech, there's no sort of, like, self-own too early.
We generally trust in our policymakers, and that changed completely, I think, last summer,
which is a really weird cultural shift, which is, no, no, no.
A lot of the policymakers who actually, I think, were quite open about the fact they didn't
know much about the technology, because it was moving so fast, still felt like something had to
be done, therefore, this is something, therefore it must be good.
And I think the most egregious example of this on display was SB 1047.
But that culture shift was one from let's let the tech mature and then decide how to regulate it
later to like before let's try to regulate it in its infancy was like a massive, I think, shift
in my head.
But let's just talk about how bad it was.
You had VCs.
Their entire job is investing in tech, talking against open source.
You know, like Vinod.
They're like, open source AI is dangerous,
it gives China the advantage.
And there was just some sort of prognostication
that if we didn't do open source AI,
like the Chinese would somehow forget math
and not be able to create models.
And then you forward by a year
and they've got the best models by far
and we're way behind.
So it was like the people that are supposed to be protecting
the U.S. innovation brain trust
were somehow on the side of the let's slow it down.
And I think that now there's this realization
of, actually, China is really good at creating models.
and they've done a great job.
We've kind of hamstrung ourselves
from whatever discussion we were having.
And I think you're right.
I think it's good to be concerned
about the dangers and job risks,
but it has to be a fulsome discussion.
You need both sides.
And when you and I jumped in,
it just didn't feel fulsome at all.
It was like one side was dominant,
and there was almost no one on kind of the pro-tech,
pro-innovation, pro-open source side.
I just think it didn't feel grounded in empirics.
Certainly not bad.
It came from, you know.
So what is the steel man
of the critique of open source
that they were making a couple years ago?
That, you know, this is like a nuclear weapon.
Would you open source your nuclear weapon plans?
Would you open source your F-16 plan?
So the idea was that somehow, like, this was like, you know,
nuclear weapons are not dual use.
Nuclear energy is dual use, right?
An F-16 is not dual-use, like a jet engine is dual-use.
But a lot of the analogies that were used at the time
were something that, you know, if you squint one way,
parts of it are dual use.
They could be used for good or for bad.
but like the examples were clearly the weapons.
And that's what they would say.
They would say, listen, these things are incredibly dangerous.
Would you open source, like, whatever, the plans for an F-16?
And then, you know, the other side would slowly decide
like this conversation is ridiculous.
We've got to go ahead and say,
you know, no, you would not do this for an F-16
because that is a fighter, you know, jet.
However, like a lot of the technologies used to build it,
yes, this is, you know, fundamental.
It's not like people aren't going to figure it out anyway,
and we need to be the leader
just like we were the leader in nuclear
and we were...
Then, by the way, in nuclear, like,
if you go historically, when that came out,
we invested incredibly heavily in it.
The things that we thought were proximal to weapons,
of course we made sensitive.
You know, but this, you know,
all the universities were involved,
like the entire country had the discourse
and that just wasn't what was happening.
I think that's true.
They were basically like
there was a substantive argument against open source
and there was an atmospheric one.
And the substantive one was like the one Martin mentioned,
that the technology was being confused for the applications, right?
And all the worst case outcomes of the applications or misuses were then being confused.
But they were also theoretical, too.
It's even worse than that.
It was like, you're right in what you're saying,
but it was like, this could potentially create bio-weapons.
It's funny.
We got a bioweapon expert.
And he's like, well, not really.
I mean, like, the difference between, like, a model and Google is almost nothing.
But, you know, like that was used as this, you know, straw person argument.
And then it could hack into a whole bunch of stuff.
like nobody had ever done it before, but it was theoretical.
So it was like these theoretical arguments that were very specific versus a broad technology.
That was one.
And then the atmospherics were, there was a famous former CEO who went up in front of Congress.
And literally in a testimony said the U.S. is years ahead of China.
And so since these are nuclear weapons and misuses were being confused with the technology,
and we're so far ahead, let's lock it down so we can maintain that lead.
And therefore our adversaries will never get their hands on it, which were both just fundamentally wrong.
For the reasons Martin said: like, substantively, AI was not introducing net new marginal risks.
So if you did an eval on how much easier...
Well, at least not identified at the time.
I mean, you would go to Dawn Song, who is, like, a safety researcher, MacArthur Genius fellow at Berkeley,
and you'd say, what are the marginal risk of AI?
She'd say, great question.
We should research that.
It would be a good research problem.
The world expert on this question was like, this is very important, but it's an open research question.
Yeah.
So no empirical evidence at the time that AI was creating net new marginal risks and just factual inaccuracies that we were ahead of China.
Because if you just paid attention to what's happening, DeepSeek had already started to publish a fantastic set of papers, including DeepSeek Math and DeepSeek-V2, which came out last summer.
And you're like, okay, obviously these guys are clearly close to the frontier.
They're not years behind.
And so when Deepseek R1 came out earlier this year, you know, a lot of Washington was like shocked.
Oh my God.
Like, how did these folks catch up?
They must have stolen our weights.
No, actually, it's not that hard to distill on the outputs of our labs.
Have you actually looked at the author list of any paper in AI?
Like, where do you think these people come from?
Right. So I think those two things were, it felt like we were being gaslit constantly
because both the content and the atmosphere was just wrong.
Yep, yeah, yeah.
Maybe one question for the smartest people or the most sober people who were against it
is like maybe they were asking, where should the burden of proof be?
Because it's hard to prove that there is risk, but it's also hard to prove that there
isn't. And what's riskier?
Is it riskier to just go full steam ahead?
Or is it riskier to kind of slow down until we better
understand sort of these models, you know, interpretability, et cetera?
I mean, I think it's really important to ground these hypothetical discussions on what we've
learned as an industry.
I mean, the discourse around tech safety has been around for 40 years.
And we went through it with compute.
Like, remember when we were like, okay, Saddam Hussein shouldn't have PlayStations because
you can use GPUs to simulate nuclear weapons.
That was actually a pretty robust and real discussion.
But that did not stop, you know, us from having other people create chips or video games, right?
I mean, we went through the Internet.
We went through cloud.
we went through mobile.
And so we've been through all of these tech waves,
and we've learned how to have this discussion
in a way that for the United States interest balances these two things.
And, you know, listen, we've had kind of areas
that were very sensitive to national governance.
Think about like Huawei and Cisco, for example,
and we as a nation did start to put in kind of import and export restrictions
as a result.
And so I just feel these almost platonic, you know, polemic questions
like the one that you just pose
aren't rooted in 40 years of learning.
So all I ask is, if we're going to make a departure from a posture that was developed over 40 years,
we better have a pretty damn good reason.
And if we don't have a good reason, then I think we should probably learn from that experience.
Yeah, I think extraordinary claims require extraordinary evidence.
And so the burden of proof should be on the party making the extraordinary claims.
And if there's a party who's going to show up and say, you know, AI models are like nukes,
and California should start imposing downstream liability on open source
developers for open sourcing the weights, that's a pretty high claim to make.
And so you should have like exceptional proof if you want to change the status quo.
And the status quo is you do not hold scientists liable for downstream uses of their technology.
That's absurd.
That's a great way to shut down the entire innovation ecosystem and start throwing literally like
researchers in jail.
We don't want that.
We want them to be trying to push the frontier forward.
And I just think that the tall claims were not being followed up by tall proof.
When we're talking about open source, are we all talking about the same thing,
meaning are there degrees of open source,
or is it kind of just like a binary?
Open weights, I think, was the primary contention,
which is that if somebody put out the weights of a model
and a bad guy took those weights,
fine-tuned it, did something really terrible two years later,
the SB 1047 regime proposed that the original developer of the weights,
which they put out basically as free information,
should be held liable, which was absurd.
Right.
So I think a weight...
Yeah, I mean, I just want to make sure we're very clear
because, like, you know, people jump on top of these things.
Right.
What he's saying is clear.
So basically, if the weights were over a certain size
and there was a mass casualty event.
I think catastrophic harm was the word used,
but there were no real...
No, it was mass casualty.
There were so many versions...
Actually, I don't know which version you're talking about.
But I remember we actually looked it up.
The legal definition was three or more people were killed.
Or the medical system was overwhelmed,
which there was actually precedence of this,
including like a car crash.
Right.
Right?
And there was actually precedent of this happening in rural areas,
which basically don't have any sort of capacity.
And so, you know, basically it would move the conversation to the courts and outside of policy,
which is, again, historically, we've taken a policy position on these things,
which follows precedents that we understand, you know, to make sure that we don't introduce
externalities, like, for example, allowing, you know, China to race ahead with open source,
which is, you know, which has happened.
And the key thing is, by moving it into the courts, even if you don't, you could...
One could argue, Ange, but, like, sure, it's moving to the courts.
That means it's open for debate.
It's not clear that open weights are going to be regulated with liability.
The point is that that creates a chilling effect.
The chilling effect is the idea that when our best talent is considering...
I could be sued.
Like, I'm a random kid in Arkansas developing something.
Like, I don't want to be in a world where it can be resolved in the courts.
Right.
Hey, I can't even afford, you know, whatever.
And in a situation where you have an entire nation-state-backed entity, like China,
make actually doing the opposite of a chilling effect right encouraging a race to the frontier why on earth would we want you know that there's this meme of a guy on a bike and he picks up a stick and puts it into his front and topples forward that's the effect of a chill that that is what chilling effect is right at a time when your your primary adversary is racing so let's trace how the conversation has changed because we don't see benode tweeting about open source anymore obviously open ad is change your tune especially right now what um is it really just deep seek is that is that
Or how do you trace kind of how the sentiment shifted on open source?
Let's go through a few theories.
I'm not really sure what happened.
I almost felt like it was almost culturally in vogue
to be a thought leader on the negative externalities of tech.
And it kind of started with Bo Strum,
but it was picked up by Elon.
It was picked up by Moskowitz.
I mean, a bunch of like these intellectuals.
that, like, we all respect and still do.
I mean, they're just really the titans of our industry and our era.
They were asking these very interesting intellectual questions around, like,
do we live in a simulation, what happens if AI can recursively self-improve?
And then actually, you know, they created whole kind of cultures
and online social discourse around this stuff.
And so I think to no small part, that became a bit of a runaway train.
And it's just catnip to policymakers.
Yeah.
You know, and so I think part of it,
is like people didn't really realize that this has become so real
because of course GPT-2 comes out and then GPT-3 comes out
and like all this stuff's amazing and somehow it got conflated.
So I think part of it is just a path dependency on where we came from,
which is kind of the legacy of Bostrom.
I think that was part of it.
I think the ungenerous take would be that the discourse was awesome,
but a lot of the people pushing the discourse were first order thinkers.
They weren't doing the math on, wait, wait a minute,
if policymakers who have no background in frontier AI,
which, by the way, nobody does because the space is only three, four years old,
start to take discourse as canon, which is a big difference.
Then what happens, what are the second and third order effects?
And the second and third order effects are that you start making laws
that are really hard to undo and start mistaking interesting thought experiments
as the basis for policy.
And once that happens, those of us who've, look, law is basically code.
Code is hard to refactor.
Law is like impossible to refactor.
And so I think the second and third order effects
were that a lot of well-intentioned folks,
for example, in the existential risk community,
saying, look, if you're intellectually honest
about the rate of progress of AI,
it's not crazy to say that there are some existential risks
on the technology. It's non-zero.
Sure, yes, that is true.
But to then say that that threshold is high enough
to start introducing rash, sweeping changes in regulation
to the way we create technology,
that leap, I don't think a lot of the early
proponents of that technology realized
they would do that. In fact, I think Jack Clark,
who runs policy at Anthropic, literally
tweeted, like, towards the end of the SB 1047 saga,
he was like, I guess we didn't realize
the impact of how far this could have gone.
And I think to those of us who had interacted with DC before
and regulation before,
like the second and third order effects
were much more discernible or legible.
And then I think what Deepseek did
was just made it super legible to everybody else.
So I think there were already, like... I think DeepSeek was the catalyst.
Right.
But it wasn't like there was a step change; it didn't change the reality that the second and third order effects of policymakers confusing, sort of, like, discourse for fact were always going to be terrible.
Yeah.
I just think it brought to light something a lot of us were already saying, which is we're in a race with adversaries.
And that should be the calculus we, the calculus we should be working backwards from.
Yeah.
There was always this prevailing view, which has turned out to be so wrong from really well-intentioned people, which was like,
it's going to be regulated anyways.
If it looks like we're self-policing, we can dictate, you know, how that happens, right?
Right.
And unfortunately, that just turned out not to be true because, you know, whatever self-policing we seem to be doing,
scared the shit out of people.
And they ended up like, and then, of course, I would say very opportunistic elements in tech
decided to use that for whatever agenda that they had.
And so it kind of got away from us.
And so.
Marc had this sort of Baptists and bootleggers framing.
Yes, I was just going to say, exactly.
True believers.
And then sort of people who use that thinking to support their own ends. And it seems like that's changed, even
just on the company side.
But the reality, I think, is that the majority of people are
neither. Yeah, the majority of people are pragmatists, right, that are not trying to take advantage
of the system, that think, well, maybe if we have this discourse, and it's a just discourse, then we'll
self-police. And then I just feel like the silent majority was not part of the discussion.
Maybe the biggest change now is, like, those people are there. Like, the founders
are there. Academia's there. VCs are there. Now the people that are not either Baptists or
bootleggers are driving the discussion, which actually is independent of the action plan itself.
I feel much in a better position now. Like, for example, there's still a bunch of stupid
regulation that's popping up, but I'm not calling Ange at night and like, we have to do
something now because I feel like, okay, there's actually representation that's sensible.
At the time, there was none. Right. And I think to move, you know, to the action plan,
I think this is great. Like, if you read the first page, right, what a marked shift:
the fact that the co-authors include
technologists.
And I think that was the core problem:
D.C. is, like, a self-contained
system, and the Valley is a self-contained
system. And I think a lot of the people here
were assuming best intentions over here and vice versa.
And what happened is a few bad actors
essentially used that arbitrage opportunity
to represent Silicon Valley's views incorrectly
in D.C. And when we saw some of the legislation,
we had policymakers calling us up and saying,
wait, you guys aren't happy with 1047?
But the other tech people
were calling us and saying
they'd love more of this kind of regulation.
We said, what other tech people?
And it turns out we are not one homogenous group.
Little tech is extraordinarily different
from big tech, which is extraordinarily different
from the academic communities.
And I think one of the things we had to contend with
was like, we used to be one shared culture.
And then when tech grew, we actually,
there are some major differences
in the valley at least between parties.
We're not one tech ecosystem anymore.
We have different interests.
and D.C. hadn't updated that.
And I think what's amazing about the action plan is
it's written by people who have bridged both
with enough representation across
like the four or five different subcultures
within tech who have different interests.
I think that's new.
Yeah, yeah.
Going back to open source,
why don't you talk a little bit about, just sort of,
help us make sense of how different companies
have thought about it
from a sort of business strategy perspective.
Maybe we saw Meta with maybe the first big open source push.
You know, OpenAI has sort of
evolved their tune. I've seen even Anthropic seems to be evolving their dialogue a little
bit. How should we think about open source as a business strategy in terms of what's changed here
and why? Oh, look, I think this part is actually playing out beautifully along
the same trend lines of all previous computing infrastructure: databases, analytics, operating
systems like Linux. The way it works is the closed source pioneers the frontier of capabilities.
It introduces new use cases. And then the enterprises never know how to consume that technology.
When they do figure out eventually that they want cheaper, faster, more control, they need somebody
like a Red Hat to then introduce them and provide solutions and services and packaging and forward
deployed engineering and all of that around it. And which is why the arc generally in enterprise
infrastructure has been closed source wins applications and open source tends to do really well in
infrastructure, especially in large government customers, regulated industries where there's a bunch
of security requirements, things need to run on-prem, the customer needs total control over it. Broadly,
you could call that the sovereign AI market right now.
Lots of governments and lots of legacy industries are going,
wait, this open source thing is really critical to us.
So I think whereas two, three years ago,
it was open source was viewed as like this largely philosophical endeavor,
which it is.
Open source has always been political and philosophical by definition.
But now there's an extraordinary business case for it,
which is why I think you're seeing a lot of startups
and companies also changing their posture
because they're going, wait a minute,
some of the largest customers in the world,
enterprise customers happen to be governments
and happen to be legacy industries.
and Fortune 50 companies, and they want stuff on-prem.
And that's when you go adopt open-source.
I'd say I think there's been a business shift as well.
I don't know if you'd agree.
So I totally agree.
I do think it's interesting to have a conversation where it's the same
and where it's different.
Like, everything you said is exactly right, which is we have a very long history with
open source.
And it's a very useful tool for businesses, but also for research and academia, et cetera.
But let's just talk about businesses and startups, right?
It's a great way to get a distribution advantage.
It's a great way to enter a market where you're not an incumbent in your startup.
So it's just kind of one of the tools for building
in software that's been used
and open source has been used
in the very similar way, right?
I mean, you can use it for recruiting,
you can use it for brand,
you can use it to get distribution,
and we see all of that.
There's something that's unique about AI
that software doesn't have,
and like we're seeing very viable business models
come out of it that don't have the limitations
of traditional software.
And this is for two reasons.
One of them is, like, open weights
is not the ability to produce the weights.
But open software is the ability
to produce the software.
Like if you give me open software, I can compile it, I can modify it, whatever.
But given open weights, you don't have that.
You don't have the data pipeline, you know, when you're talking about open weights.
So you don't actually enable your competitors in the same way open software enables it.
So that's one.
The second one is there's this very nice business model, this kind of peace dividend to the rest of the industry,
which is you produce open weights for your smaller models that anybody can use,
but the larger model you keep internally, which is actually also more difficult to operationalize for
inference, right? I mean, there's kind of good reasons to do this. And then you charge for
the largest model and then, you know, the smaller open models you use for brand or distribution
or whatever. And so I feel like it's actually almost an evolved version of open source, from a business strategy
and an industry perspective, for these reasons. I think it's the AI flavor of
open core, which historically was supposed to be a theoretically sort of
sustainable model for open source software development, which was really hard to implement
because of the reasons Martin said,
where once you gave away the code,
it was really hard for you to protect your IP.
But with weights,
you can contribute something to the research community.
You can give developers control.
You can allow the world to red team it
and make it more secure
while you're still able to actually,
because of the way distillation works
and some of the ways like post-training works,
you can still actually hold on to some of the core IP,
which then allows you to build a viable,
sustainable business.
And that is unique about open...
But also, you have the data pipelines.
You have the data.
Like, I mean, nobody else could...
Just because I give you the weights,
doesn't mean you can recreate
the model. Like you could distill it to a subset model. There's a bunch of stuff you can do, but not
necessarily recreate it. And so, listen, having been kind of a student of open source business
models for 20 years and having watched, you know, it shape the way that the industry has adopted
and built software. I actually think that the AI one is more beneficial to the companies doing it
for sure. But as a result of that, we're going to continue to see a lot of it. And so I think we should
just kind of assume that open source is part of it and every country is going to do it.
And one of the best things about this current AI action plan is it acknowledges that and it wants
to incent the United States to be the leader in it, which is such a dramatic shift from where we
were this time last year.
Yeah, there's sort of an ecosystem mindset that people... if you've worked in any kind
of developer business, which Martin and I, unfortunately, have spent, you know, way too long doing,
you know, working on dev infrastructure and dev tools, you sort of internalize this idea that
you often have to sort of trade off short-term revenue for long-term ecosystem value, right?
And I think what the action plan shows is that yes, in the short term, it may seem like we're giving away IP to the rest of the world by open sourcing weights and showing the rest of the world how to create reasoning models and all of this stuff.
But in the long term, if every other major nation is running their entire AI ecosystem on the back of American chips and American models and American post-training pipelines,
and American RL techniques, then that ecosystem win is orders of magnitude more valuable than any short-term sort of giveaway of IP, which anyway, as we saw with DeepSeek, that marginal head start is minimal.
Okay, so just to close a loop on open source, over the next several years, how do you predict open source and close source will intersect?
Like, what will the industry look like?
Well, I think these are two different markets.
Yeah.
I mean, actually, literally the requirements of the customers are completely different, right?
So if you're a developer, you're building an application and you happen to need the latest and greatest frontier capabilities today, you have a different set of requirements than if you're a nation state deploying, like, a chat companion for your entire employee base of, like, 7,000 government employees. The product requirements, the shape of how you provide those, how you deploy them, the infra, the service, the support, and then the revenue models are completely different.
And so often I think people don't realize that close source and open source are not just differences in technology,
but completely different markets altogether.
They serve different types of customers.
And I think if you believe AI is this sort of explosive new platform shift, then there'll be winners in both.
I do think what we need to contend with is that it seems like it's getting harder and harder to be a category leader if you don't enter fast.
Like the speed at which a new startup is able to enter the open source or the closed source
market and create a lead is absurd, right?
We both have the chance to work with founders who are, I mean, literally, you know,
20-something-year-olds out of college, two years out of college, building revenue run rate
businesses in the tens to hundreds of millions of dollars serving both of these markets
expanding like this.
And so I think the biggest mistake is to confuse these two markets as one and to do the classic
like, oh, let's wait to see how they evolve because the pace at which a new entrant is able
to actually create a lead in the category
is quite stunning.
Let's go into the action plan.
What are our biggest reflections? Where are we most excited?
If you look at the quote that they start with, I wanted to read it out because I thought
it was pretty poignant.
It was today a new frontier of scientific discovery lies before us.
And I thought that first opening line was fantastic.
Out of all the things they could have said, you know, they could have said we're in a nuclear
arms race, which, sure, the first page, the title says winning the race,
but if you actually start reading the document,
the first sentence is a quote from the president
that says,
today a new frontier of scientific discovery lies before us.
I love that they led with something inspirational.
Because ultimately the technology has to confer
some benefits on humanity.
And I personally, I just love the fact
that we are just starting to explore
what these frontier models mean for scientific discovery
in physics, in chemistry, in material science,
and we need to inspire the next generation.
to want to go into those areas, because it's hard.
It's really hard to do AI in the physical world.
You have to literally hook up wet labs
and start doing experiments an entirely new way.
And you need people who are excited not only about
wanting to do machine learning work,
but also the hard work of being lab technicians
and running experiments and literally pipetting new materials
and chemistry, right?
And that I think was missing in a lot of the discourse
under the previous administration.
So you can sometimes judge a
book by its cover. And I think this was a strong start. And now I think we should actually
dive into some of the bullets. Okay. So the other one that I thought was a huge omission is there's
basically no real mention of academia, of investing in academia. Like, there's, you know, some oblique
references to it, but it's just been such a mainstay of innovation in computer science over the last
40 years that not having it be a major part of this, I think, is a shame. And I understand that right now
there's kind of a standoff between higher ed and the administration. I get it.
And I actually think that both sides
actually have fairly reasonable points.
But to have a major
tech initiative
without including academia,
it just feels like we're,
you know,
what is it, fighting a battle
with a hand tied behind your back,
like some aphorism.
this is a good problem to have
which is that I think it's
extremely ambitious
it's a little bit light on execution details
right which is
what happens next
So a good example of that
is, I do think,
directionally, it was great that they said we need... let's read this bullet point on
build an AI evaluations ecosystem. I love that because it acknowledges that, hey, before we start
actually passing grand proclamations of what these models are risky or whether these models
are dangerous or not, let's first even agree on how to measure the risk in these models before
jumping the gun. That part, I think in addition to the open source bullet, was probably I thought
the most sophisticated thinking
I've seen in any policy document.
And look, the reality is America leads the way.
And so every other, you know, within 24 hours
of this dropping, Martia and I were getting text
and messages from folks in many other governments
around the world going, what do you guys think?
And it was not hard for me to endorse it
and tell them, like, look at it as a reference document
because there are things here that arguably
are more sophisticated than what policy experts,
even in Silicon Valley, would recommend,
because building an AI evaluation ecosystem is not easy.
And I think they lay out a pretty thoughtful proposal on the fact that that's important.
Now the question is how.
And I think that's what we have to help DC with, the hard work of like implementing this stuff.
But the vibe shift to let's not jump the gun on saying these models are dangerous,
let's first talk about building a scientifically grounded framework on how to assess the risk in these models, to me was not at all a given.
And I was really excited about that.
Yeah.
There's been a lot of focus in the last few years by several companies,
but also by the broader industry around this idea of alignment.
Right.
Have we made any progress on alignment?
What is your sort of perspective on what they're trying to do?
Is that a feasible goal?
Help us understand what they're trying to solve for.
So at an almost tautological level, alignment is an obvious thing you'd want to do.
I have a purpose.
I want to align the AI to this
purpose. And it turns out these models are problematic, generally unruly, chaotic,
whatever adjective you want to use.
And so, like, you know, understanding how to better align them to any sort of stated goal
is very obviously a good thing.
And so I think we'd all agree that alignment, to whatever the goal is, to make them more
effective at that goal and do that thing, is good, especially given these models tend
to have a mind of their own.
The subtext that certainly I bristle to is that the people doing the alignment are somehow protecting the rest of us from whatever they think their ideal is as far as, you know, dangers to me or thoughts I shouldn't have or information I shouldn't be exposed to.
Which is why I think we need to be, even when we come up with policy, we need to be very careful not to impose like a different set of, you know,
ideological rules on top of these.
I just think, like, alignment is something we should all understand,
actually aligning them, to me, is kind of where I take issue
with any sort of kind of top-down mandate.
I agree, and I think, you know,
there's a quote from a researcher, which I think is very accurate,
which is you've got to think about these AI systems
as almost biological systems that are grown, not coded up.
Right, because sure, they're expressed as software,
but in many ways, when you're training a model,
you are actually growing it in this
environment of a bunch of prior
history and training data, et cetera.
and often empirically you actually don't know
what the capabilities of the model are until it's actually done training
So I think that's a useful analogy.
Where I think that falls down is when people go,
oh, well, if we can't align it,
because we actually don't know, it's a biological mechanism,
until it's grown up you don't know what its risks are,
and so on, then we can't deploy these AI models
in mission-critical places
until we've solved
let's say the black box problem,
the mechanistic interpretability problem,
which is can you trace deterministically
why a model did something?
We've made a lot of advances
as a space in the last few years,
but it still remains a research problem.
But just because
you don't understand
the true mechanism of the system
doesn't mean you can't unlock its useful value.
If you look at most general purpose
technologies in history,
electricity, nuclear fusion,
like we, there are many examples
of technologies where we knew they were complex systems
and we didn't truly understand
at an atomistic level or mechanistic level
how they work, but we still use them.
And we didn't understand how the internet worked.
I mean, like, there's a whole research field
of network measurement trying to find out
what the heck the internet was going to do.
Is it going to have congestion collapse?
I mean, like, you know, any complex system
has states that you just don't understand.
Now, listen, I would say these models more so than many
and the applications are very real,
but like we know how to deal with ambiguity.
We don't even know how our own brains work.
No, right?
Or consciousness.
Yeah.
But we don't stop working with other human beings.
Unfortunately, we're stuck with them.
We have no option on that one.
Yeah, totally.
I mean, I think to extend that analogy, what do you do?
You're like, okay, I don't know how a brain works.
It's got a bunch of risks.
This person may be crazy.
But I still want to unlock all the beautiful benefits of the big beautiful brains that humans have.
And so you develop education.
You send kids to school.
And you teach them values.
And then you send them off to college.
And then they get to learn something specific.
And you get to test them in the real-world environment.
They get a resume and they get work experience
and they get to prove that they actually are
within a risk-based framework manageable and so on.
And that, as a society, is how we've unlocked human capital, right?
Like arguably the greatest technology we've had in, you know,
500 years of modern industrial innovation.
So I think what I hate about the alignment discourse
is it sometimes confuses the fact that we don't understand the system
for the fact that then we can't use it.
And I think, I don't think we've... like, for a long time,
I think mechanistic interpretability,
which is kind of, like, some folks would say,
the holy grail, being able to reverse engineer
why a model does something, is still a research problem,
but that doesn't mean we haven't made progress
on how to use unaligned models
or to improve alignment to a point
where they're useful in massive ways
like software engineering.
I think what the smartest might say is it's not that,
it's really just, what's the rush?
Like, maybe let's focus on, like, integrating all the capabilities
we already have before,
you know, pushing the frontier. To which the response is, well, but the arms race, et cetera.
Like there's a risk of slowing down, too, that maybe isn't fully appreciated.
Until we've solved cancer, every month that we're not rushing to the frontier of accelerating biological discovery or scientific progress is a month that millions of people are suffering from disease that we could be solving with AI.
We don't talk about the opportunity cost of slowing down the frontier.
I mean, this is the thing with all of these. Like, there's always this kind of reverse question on
innovation, and they say, well, okay, it's like, it's like the Bostrom urn experiment.
You know, his kind of whole urn hypothesis is like, there's an urn of innovation,
and you pull out balls. One of them is a black ball that destroys everything, right?
Like, so eventually you'll draw that ball, so why would you ever do innovation?
Like, that is the thought experiment. And the answer is so simple, which is,
it just turns out that it's much more dangerous not to pull out balls than pull out balls.
Like, that's always the answer. So, like, when people ask P(doom), so what is the P(doom),
the answer is not like 0.1 or 0 or 100.
The answer is the P(doom) without AI
is actually quite a bit greater
than the P(doom) with AI.
And the what's the rush?
The answer is the same thing,
which is clearly if you ignore,
exactly not just,
if you ignore the benefits of technology,
then you would say,
if it's all negative,
no rush at all, right?
The reality is the benefits are,
they're so dramatic.
And they're so obviously dramatic now.
Thank God we've got a year's worth of data on
this stuff. Like, they're clearly economically beneficial. They're clearly beneficial in
expanding a number of areas of, like, core science. So the rush is getting to the next set
of solutions, you know, as opposed to being afraid of, you know, a set of problems that we still
can't clearly articulate. And listen, as soon as we do understand marginal risk, and we do have
these, we absolutely should address those directly. Which, again, the Action Plan does a great job
of penciling this out. I mean, it does want to explore implications:
implications on jobs, implications on defense, implications on, you know, alignment.
And that's exactly where we should be, in the exploration phase.
Do we have a definition of marginal risk or a perspective of how to think about that idea?
Well, let's just be clear what we mean by marginal risk, which is computer systems are risky.
Network systems are risky.
Stochastic systems are risky.
We've been, we've got decades of, you know, ways of thinking about measuring, regulating,
changing common behavior
based on this type of risk.
And so the question is, can you take all of that apparatus
that's been hard won and apply it to AI?
If so, like, A, we know it's effective
because we've used it before
and we've got a lot of experience with it
and, B, it's ready to be done.
Or is there a different type of risk
that's not endemic to those systems?
In which case, we'll have to come up
with something net new, and you go down that exploration.
Maybe it works, maybe it doesn't work, et cetera.
So that's what marginal
risk is. And I just think that the problem is, if you don't know what it is, how are you
going to define a solution?
I think that's right. I mean, philosophically, the idea is, if you're going
to say we need new solutions, then you need to articulate why the problem is new and why the
solutions that worked really great are no longer sufficient, right? And I think it's
almost obvious when you state it. But this was the state of the world a year ago, that we were having
to look around the room and say,
can I raise my hand?
Why are we introducing net new liabilities
and new laws that we've never had before
if you can't articulate what
new problems there are to solve?
If it ain't broken, why are you trying to fix it?
And so marginal risk is I think
a slightly just more technical way to say
we have the tools to manage risk.
We don't need new ones.
And if you think we need new ones,
then hey, just take a minute
to articulate to us why.
Is there anything else you wanted to make sure
we got to?
Otherwise, that's great.
Time to put the action
plan into action.
Excellent.
Martin, Ange,
thanks so much
for coming to the podcast.
Thank you.
Thanks for having us.
Thanks for listening
to the A16Z podcast.
If you enjoy the episode,
let us know by leaving a review
at ratethispodcast.com
slash A16Z.
We've got more great conversations
coming your way.
See you next time.