a16z Podcast - The Little Tech Agenda for AI
Episode Date: September 8, 2025

Who's speaking up for startups in Washington, D.C.? In this episode, Matt Perault (Head of AI Policy, a16z) and Colin McCune (Head of Government Affairs, a16z) unpack the "Little Tech Agenda" for AI: why AI rules should regulate harmful use, not model development; how to keep open source open; the roles of the federal government vs. the states in regulating AI; and how the U.S. can compete globally without shutting out new founders.

Timecodes:
0:00 – Introduction
1:12 – Defining the Little Tech Agenda
4:40 – Challenges for Startups vs. Big Tech
6:37 – Principles of Smart AI Regulation
9:55 – History of AI Policy & Regulatory Fears
19:26 – The Role of Open Source and Global Competition
23:45 – Motivations Behind Policy Approaches
26:40 – Debates on Regulating Use vs. Development
35:15 – Federal vs. State Roles in AI Policy
39:24 – AI Policy and U.S.–China Competition
40:45 – Current Policy Landscape & Action Plans
42:47 – Moratoriums, Preemption, and Political Dynamics
50:00 – Looking Forward: The Future of AI Policy
56:16 – Conclusion & Disclaimers

Resources:
Read the Little Tech Agenda: https://a16z.com/the-little-tech-agenda/
Read 'Regulate AI Use, Not AI Development': https://a16z.com/regulate-ai-use-not-ai-development/
Read Martin's article 'Base AI Policy on Evidence, Not Existential Angst': https://a16z.com/base-ai-policy-on-evidence-not-existential-angst/
Read 'Setting the Agenda for Global AI Leadership': https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/
Read 'The Commerce Clause in the Age of AI': https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/
Find Matt on X: https://x.com/MattPerault
Find Collin on X: https://x.com/Collin_McCune
Transcript
There have been these big institutional players in D.C. in the state capitals for a very long time.
There wasn't anyone who was actually advocating on behalf of the startups and entrepreneurs,
the smaller builders in the space.
They're trying to build models that might compete with Microsoft or OpenAI or Meta or Google.
For those companies, what are the regulatory frameworks that would actually work for them
as opposed to making that competition even more difficult than it already is?
Regulate use. Do not regulate development.
Somehow is interpreted as do not regulate.
I actually can't think of a single example across the portfolio in which we are arguing for zero regulation.
Who's speaking up for startups in Washington, D.C.? Today, I'm joined by Matt Perault, head of AI policy at a16z, and Colin McCune, head of government affairs at a16z, to talk about the little tech agenda.
It's our framework designed to ensure that regulation doesn't just work for the giants, but also for the five-person teams trying to build the next breakthrough.
Their approach?
Regulate harmful use, not development.
We'll cover federal versus state rules, open source, export controls, and what smart preemption could look like.
Let's get into it.
Colin, Matt, welcome to the podcast.
Thanks so much.
Thanks for having us.
So there's a lot we want to get into around AI policy, but first, I want us to take a step back and reflect a little bit.
We had publicly announced the little tech agenda July of last year.
There's a lot that's happened.
Why don't we first take a step back, Colin, and talk about what the little tech agenda is and how it came to be at the firm?
Yeah, I mean, look, a ton of credit to Mark and Ben for having sort of the vision on this.
I think certainly when I first started here, when I arrived,
we started advocating on behalf of technology interests, technology policy.
And I think what we realized was there have been these big institutional players that have been in D.C. in the state capitals for a very long time.
some of them have done a lot of really good work
on behalf of the entire tech community
but there wasn't anyone specific
who was actually advocating on behalf of
what I think we call little tech
which I think in my mind are the startups and entrepreneurs
the smaller builders in the space
and I think beyond that what we realized was
well, they're not always 100% aligned
with what's going on with the big tech folks
and that's not necessarily always a bad thing
or a good thing but I think that was the whole impetus
of this you know how are we going to
think about positioning ourselves in D.C. and the state capitals in terms of our advocacy
on these issues. And how do we differentiate ourselves from sort of the big tech folks, who come with
their certain degrees of baggage? Yeah. On the left and the right. From the left and the right,
right, right? And the small is small. So that was really sort of the basic genesis of this.
For me, it was actually sort of almost a recruiting vehicle. So when it hit in July, I was not yet at the
firm. I started in November. And when I first read the agenda, it sort of transformed the way that I
looked at the rooms that I would sit in where there would be policy conversations. All of a
sudden you could see essentially an empty seat, and little tech's not there. You know, there would be
conversations where people would say, and in this proposal we want to add this disclosure requirement,
and then we'll have companies do a little bit more and a little bit more. And when you've read the
little tech agenda, all of a sudden you start thinking, how is this going to work for all the people
who aren't in the room? And so for me, the question, thinking about coming into this role in the
firm, was: is this a voice, is this a part of the community, I want to advocate for and think about?
And when you start looking at the policy debate
from a perspective of little tech
and you see how many of the conversations
don't include a little tech perspective,
from my point of view,
it was very compelling to think about
how I can advocate for this part of the internet ecosystem.
Right.
And Colin, why don't you outline some of the pillars
of the little tech agenda or some of the things
that we focus the most on,
and maybe how it differentiates
from sort of big tech more broadly?
Yeah.
I mean, well, I mean, just from a firm perspective, right?
Obviously, we're verticalized.
You know, we all live and breathe this.
And I think that that's been a very, very competitive advantage
for us on the business side. But I also think it's a very competitive advantage on the policy side, too, right?
Obviously, Matt leads our AI vertical, and he's sort of our AI policy lead. We have a huge
crypto effort. We have a major effort around American dynamism, and this is sort of defense
procurement reform, which is something that the United States has needed forever and ever.
We have, you know, other colleagues who work on the, on the bio and health team, and they're
fighting on behalf of, you know, FDA reform, everything from PBMs. There's a whole vertical there that
they're working on. We're working a lot on fintech-related issues. And then, you know, just
like classic tech related sort of internet entrepreneurs coming up. What does that relate to?
There's a lot of tax issues that come along with it. And then of course, obviously, there are
the venture specific things that we have to deal with. But look, I think I try and think about
this from a basic point of view, which is just like, if you're a small builder, what are the
things that should differentiate you from someone who's a trillion-dollar company
with hundreds of thousands of employees, right?
If you're five people and you're in a garage,
how are you supposed to be able to comply
with the same things that are built
for a thousand-person compliance team?
Like, it's just not the same thing.
Right.
And, like, there are categories and categories
that, you know, Matt and I are dealing with
on a regular basis,
but that's probably the main pillar,
which is five person versus trillion dollar company,
not the same thing.
It's made my job actually really hard in certain ways
since I started at the firm
because the kinds of partners that you want within our portfolio often don't exist in that, like, a lot of the companies don't have a general counsel.
They don't have a head of policy. They don't have a head of communications. And so the kinds of people who typically sit at companies thinking all day about, like, what is this state doing in AI policy, what is this federal agency doing in terms of rulemaking, they're not at startups that are just a couple of people and engineers trying really hard to build products. Those companies face this incredibly
daunting challenge. I mean, it seems so daunting for someone like me, like, non-technical, and I've
never worked at a startup. If they're trying to build models that might compete with Microsoft or
OpenAI or Meta or Google, and that is unbelievably challenging in AI. You have to have data,
you have to have compute. There's been a lot written about the cost of AI talent recently. It's
incredibly, incredibly daunting. And so the question that Colin and I talk about all the time is
for those companies, what are the regulatory frameworks that would actually work for them
as opposed to making that competition even more difficult than it already is.
Yeah.
Well, yeah, one of the principles I've heard you guys, you know, hammer home is we want a market
that's competitive where startups can compete.
We don't want a monopoly.
We don't want even oligopolies, you know, a cartel-like system.
And that doesn't mean no regulation because that can, as we've seen, that could be destabilizing
too.
But it means smart regulation that enables that competition in the first place.
Yeah, so I think one of the things that's been surprising to me to learn about venture
is the time horizon that we operate in.
So our funds are 10-year cycles.
So we're not looking to spike an AI market tomorrow and have a good year, a good six months, or a good two years.
We're looking to create vibrant, healthy ecosystems that result in long-run benefits for people and long-run financial benefits for our investors and for us.
And that means having a regulatory environment that facilitates healthy, good, safe products.
It doesn't mean, like, if people have scammy, problematic experiences with AI products, if they think AI's bad
for democracy, if they think it's corroding their communities. That's not in our financial
incentive. That's not good for us. And so that really animates the kind of core component of the
agenda, which is not trying to strip all regulation, but instead focusing on regulation that will
actually protect people. And we think that there are ways to do that without making it harder for
startups to compete. Yeah, to Matt's good point. I walk into a lot of lawmaker offices. It sounds
like I'm pitching my book. But I genuinely say, like, our interests are aligned with the United
States of America's interests. Because the people that we're funding are on the cutting edge.
They're the people who are going to build the companies that are going to drive the jobs.
They are going to drive the national security components that we need. And they're also going
to drive the economy. And, like, we want to see them build over a long time horizon. And, like,
that is exactly how we should be building policy in the United States. Of course, like, half the
offices I walk into are like, all right, great, get that guy out of here.
99.9% of people we talk to think that all we want is no regulation.
And despite both of us writing and speaking extensively about the importance
of good governance for creating the kind of markets that we want to create, and Colin can speak
more to it in crypto.
I've learned a lot from our crypto practice because the idea there is you really need to
separate good actors from bad actors and ensure that you take account for the differences.
And it's true in AI as well.
If we don't have safe AI tools, if there is absolutely no governance, that's not going to create
a long run healthy ecosystem that's going to be good for us and good for people throughout the
country. I actually can't think of a single example across the portfolio in which we are
arguing for zero regulation. The core component of our AI policy framework, which was developed
before my time, I wish I could take credit and I can't, is focus on regulating harmful use,
not on regulating development. And that sentence, regulate
use, do not regulate development, somehow is interpreted as do not regulate.
And people just omit for some reason the part about focusing on regulating
harmful use.
And that in our view is robust and expansive and leaves lots of room for policymakers to take
steps that we think are actually really effective in protecting people.
So regulating use means regulating when people violate consumer protection law, when they use
AI to violate consumer protection law, or when they use AI in a way that violates civil
rights law at the state and federal level or violating state or federal criminal law.
So there's an enormous amount of action there for lawmakers to seize on.
And we really want that to be like an active component of the governance agenda that we're proposing.
And for some reason, it's all passed over and the focus is just on don't regulate development.
I don't exactly understand why that ends up being the case.
Easy headline.
So there's been a lot that's happened in AI policy.
And I want to get to it.
But first, perhaps Matt, you can trace the evolution a bit over the last few years.
I believe there was a time where it was, like, pattern matching with social media regulation a bit.
Can you trace some of the biggest inflection points, kind of the big
debates over the last few years, and we'll get to today?
Maybe we'll.
I think we have to play a little bit of history.
And I want to get to, you know, sort of a point that I think is the really critical
point of what we're all facing here.
For us, for me, I would say from a policy in government affairs perspective, this
conversation started early 2023.
That was sort of like the kickoff of the gun.
It sort of puttered along and became more and more real over time.
But in the fall of 2023, so almost exactly to
the day two years ago, there was a series of Senate hearings in which, you know, some major
CEOs from the AI space came and they testified. And I think that the message that folks
heard was, one, we need and want to be regulated, which I think remains true today.
That's obviously, you know, what Matt and I are working on a regular basis. But I think
included in some of that testimony was a lot of speculation about the industry that led to
and sort of absolutely jumpstarted
this whole huge wave of conversation around
the rise of Terminator,
you know, go hug your families
because we're going to all be dead in five years.
And that spooked Capitol Hill.
I mean, they absolutely freaked out about it.
And look, rightfully so,
you have these really important, powerful people
who are building this really important, powerful thing,
and they're coming in, they're going to tell you
that, you know, everyone's going to die in five years, right?
That's a scary thing for people to hear.
And, oh, by the way, we want to be regulated, which, you know, look, that starting gun, I think moved us in hyperspeed into this conversation around how do we lock this down? How do we regulate it very, very quickly? I think that led to the Biden executive order, which we have publicly sort of, you know, denounced in certain categories. That executive order led to a lot of the conversation that I think we're having in the states, a lot of
the, you know, sort of bad bills that we've seen come through the states. And I think it also
led to a number of federal proposals that we've seen that have not been very well thought
through also. And look, you know, I think people who are kind of sitting around, they're like,
oh, well, you know, was it just like, you know, some testimony from these CEOs that did this?
And the answer to that is no. You know, from my point of view, and look, you know, they deserve a lot
of credit. I think the effective altruist community, for 10 years, backed by large sums of money,
were very, very effective at influencing think tanks and nonprofit organizations in D.C. and
the state capitals to sort of push us in a direction where people are very fearful about the
technology. And that has shaped, significantly shaped, the conversation that we're having
throughout DC and the state capitals
and candidly on a global stage.
The EU acting, the EU AI Act.
We're public on that.
There's a lot of very, very problematic
provisions in there.
All of this banner of safetyism
came from this 10-year head start
that these guys have had.
So when I always, you know,
that's kind of a bit of the history,
but sort of as an aside of this,
I always just have to smirk
or, you know, smile to try and laugh it off.
But I mean, when people are writing these articles
about the fact that the AI industry is, you know, pumping all this money into the system.
Certainly, like, I'm not suggesting that there's not money in the system.
We're obviously active on the political and policy side.
We're, you know, we're not hiding that.
But it is dwarfed by the amount of money that is being spent and has been spent over a 10-year window.
And, and, candidly, I mean, the reason that Matt and I have jobs is because we are playing catch-up.
Yeah.
We're here to try and make sure that people understand what is actually going on in this conversation
and be a counterforce to this group of people
and this idea, this ideology that has been here
for a long period of time.
So that's kind of the briefer on this.
Yeah, I mean, and companies, I think,
were ready to consider some policy frameworks
that I think were probably really going to be challenging
for the AI sector in the long run.
And I think that's because I was at Meta,
then Facebook, starting in 2011 and through 2019,
And so after really like 2016, there was aggressive criticism of tech companies.
And the general framing is like, you're not being responsible and regulation needs to catch up.
Your governance of social media is behind where the products are.
And whatever you think about that, that was really the kind of strong view in the ecosystem that like governance has allowed, the lack of governance has allowed problematic things to happen.
And so I think when AI was starting to accelerate and, you know,
you had certain sort of prevailing political interests, I think, that were driving the conversation.
Companies rushed to the table. And I think it was a group of five, three, five, seven companies
who went into the White House and negotiated voluntary commitments. I mean, we don't even have to make
the argument about the importance of representing little tech. When you see that, there is a set of
companies who negotiated an arrangement for what it would look like to build AI at the frontier
with all current developers who weren't those companies, and all future
startups, not represented at the table. I think that is why, like, we started to think about
the value of having more dedicated support around AI policy, because clearly the views of
little tech companies aren't represented in the conversation. Yeah. Well, I mean, let me just add one
thing to this. And it's Mark and Ben's story. They've told it many times. I was in the
meeting as well, you know, and like, you know, everything they've said has been 100% true and
accurate. But there was a, there was a prevailing view by very, very powerful people of the
previous administration that this was going to be only two or three major companies able to
compete in the AI landscape. And because that was the case, they needed to be basically
locked down and put in this incredibly restrictive view from a policy and regulatory perspective.
and that was going to be kind of like this entity
that was kind of like an arm of the government.
And I think that that was the most alarming thing
that I think we had heard from the administration
on top of an incredibly alarming series of events
that happened on the crypto side,
including sort of wanting to eradicate it off the face of the planet.
And it seemed like, so I think that that all led
to kind of the position that we're in now
and certainly, like, Matt's hiring and,
you know, us building out the team,
et cetera. So that narrative is clearly, like, a very alarming, maybe the most alarming version of
this. But even since I've been in this role, I've heard other versions of it where people will
say, oh, don't worry about this framework. It just applies to three or five companies or it just
applies to five to seven companies. And I think they mean that to provide comfort to us.
Like, oh, this isn't going to cover a lot of startups. But the view of the AI market where there
are only a small number of companies building at the frontier, that's not the vision for
the market that we have. We want it to be competitive and diverse at the frontier.
And the policy ideas that were coming out of the period that Colin's talking about were dramatically different from where they are today in a way that I think, like, some people have even, like, lost sight of exactly where we were a couple of years ago.
There were ideas being proposed by not just government, but industry, to require a license to build frontier AI tools and for it to be regulated like nuclear energy.
Which would be historic for software development.
Yeah, right, unprecedented.
Yeah.
And for it to be regulated like nuclear energy,
with, like, an international-level, nuclear-style regulatory regime
to govern it.
And we've moved, like, no matter what you think
about the right level of governance,
there are not a lot of people now
who are saying what we need as a licensing regime
where you literally apply for permission from the government
to build the tool, but that wasn't that far
in the rear-view mirror.
Yeah, and look, we were also talking about bans on open source.
I mean, we're still kicking around that idea at the state level.
And look, you know,
For us who live and breathe the tech stuff on a daily basis,
this is, you know, this sounds insane, crazy.
But let me, you know, like, just to make it a little bit more real, right?
Like, the nuclear policy in the United States has yielded two, three new nuclear power plants
in a 50-year period since these organizations have been started.
And look, like you can, some people are pro-nuclear or some people are anti-nuclear.
I don't want to get into that debate.
The point, though, is that that was
not the intended policy of the United States of America. That was the effect of putting together
this agency and what has come from that. And I think, you know, look, if we do the same thing
to AI, had we done the same thing in AI in that period of time, then you don't have the medical
advancements. You don't have the breakthroughs. You don't have all of the things that come from
this that are incredible. But beyond that, we lose to China. Full stop. You lose to China. And then
our greatest national security threat becomes the one who has the most powerful technology in
the world. Right. And I think the early concern on the open source was that we would be
somehow giving it to China, but then we've seen with DeepSeek, et cetera, that they just have it
anyways. Yeah. Yeah, exactly. Right, exactly. You know, the idea that we could lock this down,
I think, you know, I mean, Mark and Ben talked about this. I mean, I think they've
debunked that a number of times. Yeah. Just to understand, was the previous administration,
what was their calculus? Was it that they were true believers in the fears? Was it that there was
some sort of political benefit to having the views that they had, especially on the crypto side.
I don't understand what is the constituency for anti-crypto stance.
How do you make sense of sort of the players or the intentions or motivation, just understand
sort of the calculus there?
Yeah, you know, I mean, look, I think that that's a really, I think that's a really hard one
to answer, and I'm not sure I can pretend to be completely in their minds.
I think there's a couple of different competing forces here.
Like, one is, you know, what are the constituencies that support
sort of that administration? What are the constituencies that support that side of the aisle?
And I think that especially over the last 10 to 15 years, it has been very, very heavy focus
on consumer safety, which, look, a very important thing. And we're obviously in alignment
on that. I think everyone should be in alignment. Have to protect consumers, have to be able to protect
the American public. But I think that a lot of that conversation has been weaponized. I think that
It is a big-time moneymaker.
I think a lot of these groups either get backing from very, very wealthy special interest
or they are small-dollar fundraising off of quick hits like, you know,
AI's coming for your jobs, donate $5, and we're going to, you know,
and we'll make sure that we take care of this in Washington for you.
And, you know, like, pretty easy.
You know, it's a pretty easy manipulation tactic.
You know, it's used, like, by a bunch of people.
Yeah, but I think
that that's held very seriously true, right? And I think, you know,
the other thing here is that I think personnel is policy. It's the old saying personnel is
policy. And I think a lot of the individuals that were in very senior decision-making roles
within that White House and that administration came from this sort of consumer protection
background where they've seen this. That was a constituency. They were put in this position
to come after private enterprise.
Like, you know, that was, that was the goal.
Like, there's this whole idea out there, I think, among some of those folks that, you know,
Senator Warren has, you know, proposed this many times, which is, like, if you're not, you know,
if you're not going after and getting people on a regular basis in the private sector,
then you're not working hard enough.
And, like, I just, you know, I think that that is probably like the second thing.
And, like, the third is just, we're at this very
weird moment where being a builder and being in private enterprise is a bad thing to some
policymakers. It's not, you know, you're not doing good because you're earning a profit. And,
you know, they certainly won't say that. But the activities and the things that they're doing
are 100% aligned with that type of idea. So, you know, I think that's the basic crux of it.
I think the things that motivated that approach were done in good faith.
And I think it's what you alluded to earlier, which was like, I don't share this view,
but there are a lot of people who believe that social media is poorly regulated and that
because policymakers were asleep at the wheel, we woke up at some point, I don't know,
sometime in the 2014 to 2018 period, and realized that we had technology that we thought was actually
not good for our society.
And I think that, whether or not you think that's true, that has been a widely held view.
It's a held view on the right and on the left. It's a bipartisan view.
And so I think when this new technology came on the scene, this was a do-over opportunity for policymakers, right?
Like, we can get this right when we didn't get the last thing right.
And so I understand that motivation.
It makes a lot of sense.
I think the thing that we strongly feel is the set of policy ideas that came out of that good faith,
belief were not the right policy ideas to either protect consumers or lead to a competitive
AI market.
Like many of the politicians who were pushing concepts that would have really put a
stranglehold, I think, on AI startups and would have led to more monopolization of a market
that already tends toward monopoly because of the high barriers to entry.
Those politicians, three years before, had been talking about how problematic it was that there
wasn't more competition in social media.
And then all of a sudden, they're behind, you know, a licensing regime, and
I don't think there's much economic evidence that licensing is pro-competitive.
It typically is the opposite.
The disagreement is less with the core feeling.
Like we want to protect people from harmful uses of this technology
and more from the policy concepts that came out of that feeling
that we think would have been disruptive in a problematic way to the future of the AI market.
Yeah.
Anecdotally, it seemed from afar that some of the concerns early on were almost, you know,
to match social media, like around disinformation or even like DEI concerns.
And then, you know, people were trying to sort of make sure the models were compatible with sort of the, you know, speech regime at the time.
But then it's kind of shifted to, oh, wait, no, is there more existential concerns around jobs or is AI even like nukes in the sense of like people doing harm or AI itself doing harm?
But it seemed to escalate a bit and, you know, maybe aligned with that testimony that you alluded to.
I experienced it as feeling like the goalposts always moved.
And one of the things that I started asking
people when I was really trying to settle into this regulate use, not development, policy position
is, what do we miss? Like, if we regulate use primarily using existing law, what are the things that we
miss? And I haven't gotten very many clear answers to that. Like, you can't do illegal things in the
universe, and you also can't use AI to do illegal things. And typically when people list out the set of
things that they're most concerned about with AI, there are typically things that are covered by
existing law, probably not exclusively, but primarily. And so,
that at least seems like a good starting point.
Some of the other issues that I think are, like,
understandably ones that we should be concerned about,
have a range of different considerations associated with them.
Like if you're concerned about misinformation
or, like, speech that you think might not be true
or might be problematic,
there are significant constraints on the government's ability to regulate that.
The First Amendment imposes pretty stringent restrictions.
And I think for very good reason,
because you don't want the government to dictate the speech preferences and
policies of private speech platforms for the most part.
And so, those issues might be concerns, but they're not necessarily areas, I think, where you want the government to step in and take strong action.
And so I think there are things that we should probably do as a society to try to address those issues.
But government regulation maybe isn't the primary one.
And again, in most of the things that people are most concerned about, like real use of the technology for clear, cognizable real world harm existing law typically covers it.
I have a theory on this.
So I think everything that Matt just said
is spot on, but, you know, like,
then you're kind of sitting around,
you're kind of scratching your head.
It's like, okay, well, if use covers it,
and there hasn't been, you know,
a very fair rebuttal as to why use is not enough
in terms of focus on the policy regulatory side,
what's the answer?
I think we're experiencing sort of this,
I don't know if it's a phenomenon,
but we're experiencing this pattern on the crypto side, too,
which is we're having a very, very spirited
debate on the crypto side of things on how to regulate sort of these tokens and how do you
launch a token in the United States. Is it a security or is it a commodity? And this is sort of this age-old
debate that's plagued traditional securities laws for years, but also certainly
with the crypto industry. But what we have found is there are a number of people who have entered
this debate who are actually trying to get at the underlying securities laws. Like they
want to reform securities laws. They don't want to reform crypto laws. They want to
evolve securities laws, and this is their only venue by which they can enter that conversation.
Because we're not having, there's no will from the Congress or from policymakers to go and
overhaul the securities laws right now. You know, it's just not there. But what is moving
is crypto. So people, you know, there are all these people that are now trying to enter this
debate and like, oh, we should re-look at this. And we're like, well, this doesn't have anything to
do with it. We shouldn't be entering this conversation, yet they're still pushing, right? And that's
kind of muddied the water. I think a
very similar thing is actually happening on the AI side, which is, you know, there are a number
of members of Congress that feel like, well, we missed it on the '96 Telecom Act.
Like, we didn't do well enough back then.
So we need to right the wrongs through the venue of an AI policy conversation, right?
Because if you think about it, right, assuming that use doesn't go far enough for someone, right,
and this is the same conversation that we're having in California right now or in
Colorado right now. If use does not go far enough, okay, well, then it would be really,
really simple if you could have a privacy conversation around this, if you could have an online
content moderation conversation, an algorithmic bias conversation around it, you could do all
of that, wedge it through AI, and then assuming AI is actually going to be the thing that we all
think it's going to be, now you've put basically a regulatory funnel on the other side, like you've
put a mesh screen where everything has to run through AI, and therefore it runs through this
regulatory proposal you put together.
Yeah.
The thing that I've really been wrestling with in the last few weeks is whether those
kinds of regimes are actually helpful in addressing the harm that they purport to want to
address.
Colorado is a really good example.
So there are all these bills that have been introduced at the state level.
Colorado is the only one that's passed so far that set up this regime where you basically
have to decide are you doing a high risk use of AI or a low risk use of AI?
And this would be for startups that don't have a general counsel, don't have a head of
policy, can't hire an outside law firm to figure out high risk versus low risk. And then if you're high
risk, you have to do a bunch of stuff, usually impact assessments, sometimes audit your technology
to try to anticipate is there going to be bias in your model in some form, which maybe an impact
assessment helps you figure that out a little bit, but it's probably not going to eliminate bias entirely.
It certainly isn't going to, like, end racism in our society. In Colorado now, their
governor and their attorney general have put pressure on the legislature to roll back this law
because they think it's going to be problematic for AI in Colorado. And so there was just a special
session there to consider various different alternatives. One of the alternatives that was introduced
proposed codifying that the use of AI to violate Colorado's anti-discrimination statute is
illegal. That's consistent with the regulate harmful use framing that we've talked about. And
instead of having this, like, amorphous process where maybe you address bias in some form, maybe you
don't, this goes straight at it. It's not a bank shot, it goes straight at it, where if someone uses
AI in a way that violates anti-discrimination law, that could be prosecuted, the attorney
general could enforce. And I still don't understand why that approach is somehow
less compelling than this complex administrative paperwork approach. I think it's kind of the reason
that Colin's describing, which is, like, people want a different
bite at the apple of bias, I suppose, but it's not clear to me that it's actually the best way to
effectuate the outcomes that you want as opposed to just criminalizing or creating civil
penalties for the harm that you can see clearly.
It's also, I mean, in policymaking and bill writing, it's really, really easy to come up
with bad ideas.
Yeah.
It's easy, right?
Because they're not well thought through.
The first thing comes to your head, someone publishes a paper on something, here we go.
It takes real hard work to get something that actually works.
And then it's even harder to actually go through a political and policy negotiation with a diverse set of stakeholders and actually land the plane on something.
Yeah.
I think that's part of the reason that people think that we are anti-governance because when we have, I mean, Colin, again, he lived this history.
I'm coming in late to it.
But like, as we were ramping up our policy apparatus, these were the ideas in the ecosystem: licensing, nuclear-style regulation, like FLOPs-threshold-based disclosures, really complicated transparency regimes, impact
assessments, audits, which are a bunch of ideas that we think are not going to help protect people
and are going to make it really hard for low resource startups. And so we've been trying to say,
no, no, no, don't do that. And so that sounds like deregulate. But for whatever reason, it's been
hard so far to shift toward like, here's another set of ideas that we think would be compelling
in actually protecting people and creating stronger AI markets. Right now, we don't see,
you know, terrorists or criminals being aided, you know, 1,000X with AI in performing
terrorism or crime. Like when I ask people, like, what are you truly scared about? Like, give me a
concrete scenario. People, you know, they'll be like, oh, what about like bioterrorism or something?
Or what about, you know, cybersecurity, you know, theft or something? We seem very far away from that.
Is there any amount of development at, you know, in the next few years, any amount of breakthroughs
where you might say, oh, you know, maybe use isn't enough? Or do we think that that will always be the case?
I think it's conceivable.
I mean, and I think we've been open about that.
Like, we think existing law is a good place to start.
It's probably not where we end.
So Martin Casado, one of our general partners, wrote a great piece on marginal risk in AI,
basically saying like when there's incremental additional risk that we should look for policy
to address that risk.
And so the situation you're describing, I think, might be that.
I think what you're getting that is a really important question about just potential
significant harms that we don't yet contemplate.
We get asked often about our regulate use, not regulate development framework.
Are you just saying that we should address issues after they occur?
And I understand why that's a concern.
Like, there might be future harms.
And wouldn't it be nice if we could prevent them in advance?
But that is how our legal system is designed.
And typically, when you talk to people about ways that you could try to address potential
criminal activity or other legal violations ex ante before they occur,
that's really scary to people.
Like, Eric, what if we just learned a lot of information about you
and then predicted the likelihood that you might do something unlawful in the future?
And if we think it's exceeded a certain threshold,
then we're going to go and try and take action against you
before you've done it so that we can prevent future crime.
You're laughing because it's laughable.
We don't want a kind of ex-ante surveillance,
both because it feels invasive,
but also because it often is ineffective.
Like, we might run some test that
shows that maybe you're likely to be predisposed to some kind of criminal activity,
but we don't know that you're going to do it until you've done it.
And so I think that kind of approach, again, I think it's motivated by a really valid concern
and a valid desire to prevent harm.
What if we could prevent harm before it's occurred?
The challenge is the regulatory framework, I think, probably won't do that.
It probably won't have the effect of preventing harm.
And there are all these costs associated with it, mainly from our perspective, inhibiting startup
effectiveness. Yeah. Mark once told me his joke on a podcast, which is, a man goes to the
government, you know, I go to the government because I have this big problem. Now I get a lot of
regulation. Now I have two problems. Okay, let's talk about the state of AI policy today. There's a lot
that's happened the last few months with the moratorium, the action plan. What are some of the things
that we're excited about right now? What are some of the things we're less excited about right now?
Why don't we give a breakdown of where we're at right now?
So I think given what Collins described about where things were a couple years ago,
it's great to see the federal government, certainly the executive branch,
but not just the executive branch.
I think this is true in Congress, across both sides of the aisle, with support for frameworks that we think are much better for little tech.
So trying to identify areas where regulatory burden outweighs value
and where we can right-size regulation to make it easier for AI startups.
And, as Colin said, support for open source.
We were in a really different place on that a couple years ago.
Now it seems like there's much more consensus.
And again, it actually was across the end of the last administration and the current administration
around the value of open source for competition and innovation.
The National AI Action Plan also had great stuff in it about thinking through the balance
between the federal government and state governments, which is something that we've done
a lot of thinking about.
There's an important role for each.
But we think the federal government should really lead regulation of the
development of AI, and states should police harmful conduct within their borders. And I think there's stuff
in the action plan that would try to ensure those respective roles. There's also a lot of stuff
in the action plan that wasn't really talked about much. It wasn't sort of the headline-grabbing
stuff that I thought was incredibly compelling in terms of, again, trying to create a future
for AI that just works better for more people. And a really good example is the stuff on worker
retraining that focused on different programs that could help workers
if they're displaced as a result of AI, as well as monitoring AI markets and labor markets
to make sure that we understand when there are significant labor disruptions.
So I think it sort of gets at a point that you were alluding to a couple minutes ago
about like what happens when there's something really disruptive in the future?
Can you predict with certainty that there won't be this crazy disruptive thing?
And no, we can't.
There might be significant labor disruption.
Others at the firm have talked extensively about how typically there are always worries
about labor disruptions when there's new technology introduced.
Typically, there are increases in productivity that end up being good for labor overall.
We think that's the direction of travel.
But you never know.
We can't predict it with certainty.
And so I think it's a really strong step to try to just monitor labor markets to see what the disruption might look like so that we're set up to take strong policy action in the future.
Can I just say one thing about the AI action plan?
Sure.
And I don't want to juxtapose this to what we saw under the Biden administration. There was an incredible amount of activity in the Biden administration,
an incredible amount of activity under the Trump administration.
But, you know, look, I kind of view these executive orders
and these plans that come out from an administration
are very, very important.
And some of them have true policy.
They direct the agencies to do things,
to come out with reports,
and then undertake rulemakings and things like that.
But from an AI action plan perspective,
for me, it was so significant
because I think it turned the conversation on its head.
Before it was...
We have to only focus on safety
with a splash of innovation.
And now it is, we understand how important this is
from a national security perspective.
We understand how important this is
from an economic perspective.
We need to make sure that we win
while keeping people safe, right?
And that dynamic and that shift of rhetoric
is incredibly important because what that does is,
it signals to the rest of the world,
it signals to other governments,
that this is the position of the United States,
and will be the position for the next three and a half years.
And it signals the position of the United States to the Congress.
So when the Congress is looking at potentially taking up pieces of legislation
or taking actions or even committee hearings,
which, you know, for the broad base of what we're talking about
are fairly insignificant, all of that is sort of kept in mind.
So now the conversation has shifted significantly,
and that is really, really important.
Speaking of winning, Colin, I'm curious for your thoughts
on AI policy vis-a-vis China, whether it's export controls or any other, you know, issues we care about.
Yeah, I mean, well, look, first and foremost, as we've talked about already,
I mean, we have to win, right?
And I think, I think that that is, that is at the main thrust of a lot of what we're doing here
and a lot of the way that we think about this from a firm perspective.
You know, I think first is making sure that the founders and the builders can build appropriately with appropriate safeguards.
and an appropriate regulatory structure.
The second is, how do we win
and make sure that America is the place
where AI is probably the most functional
and foundational vis-à-vis China?
You know, I think that
there has been a long conversation,
the diffusion rule that came out
from the Biden administration
specifically on export controls.
Many, I think, panned that proposal.
I think that
a lot of people
suggested it was probably too restrictive,
that it wasn't the right way to think about things.
I think, you know, we have spent most of our time,
Matt leading this effort has spent most of his time, our time,
specifically focused on how are we regulating the underlying models
and how are we regulating, hopefully, the use of these models
versus specifically sort of on the export control piece.
What I will say, though, is very concerning sort of some of the proposals
that came out from the Biden administration,
some of the proposals that we've seen in the state level,
some of the proposals that we've seen
at the congressional level of the federal standpoint,
that dealt with specifically export controls
on models themselves.
And we're still kind of having this conversation.
There's a policy set that has been kicked around for a while.
It's called the Outbound Investment Policy,
which is basically how much U.S. money
from the private sector is flowing into Chinese companies.
And very noble, laudable, super supportive of that,
that concept, you know, we are a very sort of America-first sort of organization
here. We're investing primarily in American companies and American founders. So, you know, we're very
supportive of it. But when you sort of edge into the idea that we might inadvertently ban
U.S. open source models from being able to be exported outside the country, like, by definition of
open source, there are no walls around these types of things. So that's one of the areas that
we've been very, very focused on. And I think obviously very important to make sure that we don't
have these very powerful technologies, U.S.-made technologies, in the hands of our Chinese counterparts
and the PLA and the CCP using this against us. But I also think that we need to make sure that we're
not extending too far and limiting the power of open source technologies to be able to kind of
be the platform around the world.
You know, the final point that I'd make here is we do ultimately and fundamentally have a
decision to make as, you know, the U.S., which is do we want people using U.S. products across
the world, which helps for a whole bunch of different reasons, but certainly on soft power
from national security perspective, or do we want people to use Chinese products?
the more that we lock down, obviously, American products,
the more the Chinese will enter those markets
and sort of take a land grab in that space.
Can you get into more of what happened with the moratorium
and the fallout that ensued?
I think this one is a bit complicated.
There was a perception about the moratorium when it came out
that it would have prohibited all state law
from existing for a 10-year window.
Obviously, that's a long period of time.
I'm not sure we would necessarily completely agree
with that policy stance.
That, from our point of view,
is a misinterpretation,
for a whole bunch of different reasons,
of actually what the language said.
But sometimes in D.C.,
a lot of times in D.C.,
perception is reality,
and that kind of took hold,
but I also think that
there are also
strong competing forces
like we've discussed
right from the
I think the doomer crowd
or the safety crowd
that were very, very anti,
that had used all of their
tentacles that they've spread out over the last decade to try and move in and try and kill
this. I think they also were successful in leveraging some other industries to try and come in
and also move forward to try and kill this thing. And look, you know, by virtue of the vehicle,
the underlying procedural vehicle, this reconciliation package that it was moving in, it was a partisan
exercise. It was going to be Republicans versus Democrats, and that was that, right? And there was nothing,
even a prominent AI policy
that was going to be dropped in a reconciliation package
that was ever going to drag Democrat votes over it
because it was such a big
sort of Christmas tree style thing
that had all kinds of tax reform positions, etc.
And if you were in one of those situations,
the margins on the votes become very, very, very small.
So all it took was, you know,
one or two Republican senators
hitching their wagon to some of these
ideas that were out there to tank this thing, right? And look, I think that's going to be a
situation that you're going to fight in any sort of political policy legislative outcome or
any sort of any, any sort of issue that you're going to be running within the Congress, right?
But I think more so than anything, and we heard this repeatedly from a whole bunch of different
people and this is what we've also experienced. The industry was just not organized well enough,
right? And that's not just the industry. It's also the people
who care about this thing,
that aren't actually industry stakeholders.
The stakeholders who were pro some level of moratorium
or some level of preemption were just not organized.
And I think that that was both an eye-opening moment,
but also an important moment.
Because I think what we have done in the preceding,
you know, three, four months since this thing has gone down
is we've taken a long, hard look at what we need to do
collectively from a coalition to be able to
be in a better position next time we're there.
And so what does that look like, right?
I mean, first and foremost, it comes with writing, doing podcasts, talking about these things,
talking about the details of what's actually in these proposals and what it actually
means for states and the federal government to make sure that we're fighting through the
FUD that's coming through because it's always going to be there.
There's misrepresentation all over the field.
The second piece is, let's all get on the same page, which I think we've
worked very hard to do. And where we can find alignment, I think we've found that alignment
between big, medium, and little. And then I think the third and probably the most important
is, what are we doing on sort of the political advocacy side to make sure that we have the appropriate
tools to be able to push forward in a way that ensures that America continues to lead and
that we don't lose out on this race to China? And that's part of the reason that we have recently
announced our donation to the Leading the Future PAC, which will have, you know, several different
entities underneath it, which I think is designed to sort of be that political center of gravity
in the space. And that will fight at the federal level and the state and local level. So we're
happy to be a part of it. And we expect, you know, there will be others that join this sort of common
cause fight on the AI side. If we could wave a wand, what would we like to be done at the state
level versus the federal level versus how should we think about that interplay
compared to where we're at now?
So I think the helpful answer here comes from the Constitution.
The Constitution actually lays out a role for the federal government and a role for state
governments.
Federal government takes the lead in interstate commerce.
So governing a national AI market and governing AI development, we think is primarily
Congress's role.
Sometimes when people say that, I think what other people hear,
for some reason, is that states should do nothing. And we've tried very hard to be very
deliberate in not saying that and making clear that states have an incredibly important role to play
in policing harmful conduct within their jurisdictions. So criminal law is a perfect example.
There is some criminal law at the federal level, but the bulk of criminal laws at the state level.
Like when you think about routine crimes, if you are going to prosecute someone, prosecute a perpetrator,
it's likely that that would occur under state law. And so to the extent we want to take account
of local activity that would, where there's criminal conduct involved, and we want to make sure
that the laws are robust enough to protect people from that activity, that's going to be primarily
state law. Oddly enough, I mean, as Colin is describing, this isn't the delineation
that we've started out with. There are a lot of state laws that have sort of taken the approach
of saying, sometimes explicitly, Congress hasn't acted, so we have a responsibility to act. And that's true
to some extent, like you can act within, states can act within their constitutional lane.
Some of what states have done have gone outside that lane. And so we actually just this week
released a post on potential dormant commerce clause concerns associated with state laws.
And the basic idea there is that there's a constitutional test that says that states cannot
excessively burden out of state commerce when that greatly exceeds the in-state local benefits.
And so courts actually weigh that. There's a balancing test:
the harms, the costs to out-of-state activity,
do those significantly outweigh the benefits on the local side?
And we think that at least for some of the proposals
that have been introduced, it's likely that they won't,
that the benefits are somewhat diminished relative
to what the proponents think they are,
and that the costs are significant,
like the cost to a developer in Washington state
of complying with a law that's in California
or a law that's in New York is gonna be significant.
And so our hope, I think, is not that the dormant
Commerce Clause ends up serving as a function that makes it hard for states to enact laws,
but actually just serves as a guidepost for states around the kinds of laws that they might actually
introduce. And I think it pushes in the direction that's consistent with our agenda,
which is to take an active role in legislating and enforcing laws that are focused on harmful
use. Looking in the next six months to a year, what are the issues that we're most focused on
or that we think are going to, you know, be playing a role in the conversation?
Yeah, I think it's first and foremost some level of federal preemption.
And I want to be very specific about this.
Again, to Matt's point, we're not talking about preempting all state law.
We're talking about making sure that we have a federal framework specifically for this model regulation
and hopefully how the models can be used, right?
I think that's going to be so, so critical because, just like any other
technology, no technology can live under a 50-state patchwork. And that's been the biggest issue
that we've been fighting over the last year and a half or so. So I think that, I think that there
are some other sort of policy sets that I think will be handled beyond that, that I think can
kick into sort of workforce training. I think there's some literacy things that should be coming
up, obviously there's a huge robust conversation around data centers and energy that I think it
will be really, really important. But above all, I think most of our time and energy will be
focused on trying to have some level of federal standard here to try and drive the dividing line
between the federal and state government, which I think Matt has already done a ton of great
work on. Yeah, I think this is just a super exciting policy moment for AI. There's the last
Over the last couple of years, there have been a bunch of ideas proposed, and for the reasons that we've discussed, we think those ideas fall short, both in terms of protecting consumers and in terms of ensuring that there's a robust startup ecosystem. Most of those laws, I think, have actually not succeeded in passing. There were a number of laws introduced at the state level in this past year's legislative sessions that we thought had a strong likelihood of passing, and I think to date none of them have passed. Colin has also been building out the expertise, skill set, and capacity on his team. We just hired Kevin McKinley to lead our work in state policy, and he, I think, will help us take a real affirmative position in the legislative sessions ahead on what might actually be AI policy that's good for startups.
So instead of being in the position of saying no, because we're sort of starting late and kind
of with one hand behind our back, I think we're in a position to really actually try to articulate
and advance a proactive agenda in AI that's compelling.
I think Colin hit the main parts of it: ensuring proper roles for the federal and state governments, and focusing on regulating harmful use, not development. There are specific things you can do there, like increasing capacity at enforcement agencies, making clear that AI is not a defense to claims brought under existing criminal or civil law, and technical training for government officials to make sure that they can identify and prosecute cases where AI is used in a harmful way. And then there's all the infrastructure and talent stuff that Colin is describing: worker retraining, AI literacy.
We've also given some thought to
the idea that has been articulated by a number of lawmakers and was in the National AI Action Plan
of creating a central resource housed in the federal government. And you could also do it in state
governments as well that lower some of the barriers to entry for startups, you know, compute costs
and data access. And we think that's really compelling in terms of ensuring that startups can
compete. And that idea, like many of these, is bipartisan. It's been supported by the current
administration. It was supported by leading Democrats over the last couple of years. So that's
the kind of thing that we are hoping that when we have the room in position to really advocate
for an affirmative agenda that we'll get some traction in policy circles we're not always in
a hundred percent alignment with other people in the industry you know and and i think i think that
that's you know big medium little you know across the board there's other sort of like consumer
advocacy groups that obviously feel differently about these things i think for the most part
the industry is generally aligned
on some level of a federal standard here
and understanding that the thing again
that won't work is a 50 state patchwork.
And I think that's super important, because for the first time you actually have this sort of alignment, and if you have that sort of alignment, that's the kind of momentum you can use to actually push things over the finish line and get something done. And look, the Trump administration, to their credit, has also been incredibly supportive of this idea.
That's an incredibly important point. One criticism, usually raised in an implicit way, is: hey, you're the little guys, but often you align with the big guys, so aren't you just in favor of a deregulatory agenda that works for big tech? And one of the things that I think is really extraordinary about the little tech agenda is that it's really nonpartisan and it doesn't take a position on Big versus Little. It basically says: here's the agenda. When you agree with us, we'll support you, and when you disagree with us, we'll oppose you. And that's not party line, and it's not Big versus Little.
And so I think what we saw in the phase that Colin was referring to, initially in the recent set of AI policy, was a phase of divergence between Big and Little. Take the licensing regime: the big companies were pushing it, and Little Tech was concerned about it. Then there was a period of convergence. And I think if you look at the National AI Action Plan comments across a range of different providers, as Colin is saying, a lot of them had some core similarities. So lots of large companies have advocated for federal preemption. We don't oppose that just because big companies are advocating for it; we think that it's good for startups.
I think it's possible, and I'm curious, because Colin really understands this in a way that I don't, how the political chips will fall, but I think it's possible we're in a period of some divergence. One thing that we hear repeatedly, which is sort of funny, is people will bring us stuff and they'll say: industry agrees with this, so we expect you to agree; the industry's already agreed, you can't disagree. And we say the big parts of the industry have agreed, and sometimes we agree with them, but sometimes we have different views. And so when we disagree, it's not because we're trying to blow up a policy process or make it difficult for lawmakers who are trying to move something forward; it's because when we're looking at it, we're looking at it through this particular lens. And I hope it's not the case, but I think there might be more fracturing in the months ahead.
Yeah, I agree with you on that. And by "people," he means lawmakers, just to be specific.
Yes. That's a great place to wrap.
Colin, Matt, thanks so much for coming on the podcast.
Thanks very much.
Thanks for listening to the A16Z podcast.
If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/A16Z.
We've got more great conversations coming your way.
See you next time.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.