The Dispatch Podcast - Congress and Artificial Intelligence | Interview: Adam Thierer
Episode Date: September 15, 2025
AI is racing ahead. Regulation? Not so much. Kevin Williamson talks with Adam Thierer, senior fellow at the R Street Institute, about the opportunities and risks of artificial intelligence. They dive into the policy fights shaping its future, the role of Big Tech, and how AI could reshape global competition.
The Agenda:
—Defining AI
—Hardware vs. software
—Economic and geopolitical implications of AI
—Job replacement concerns
—Tech skeptics
Show Notes:
—Defending Technological Dynamism & The Freedom to Innovate in the Age of AI
The Dispatch Podcast is a production of The Dispatch, a digital media company covering politics, policy, and culture from a non-partisan, conservative perspective.
Transcript
Welcome to the Dispatch podcast, I'm Steve Hayes.
Artificial Intelligence is advancing at breakneck speed,
but regulation isn't keeping up.
Our own Kevin Williamson sits down with Adam Thierer,
senior fellow at the R Street Institute,
to unpack the promises and perils of AI.
They explore the policy debates shaping its future,
the influence of big tech,
and what AI means for global competition.
Here's their terrific conversation.
Adam Thierer rhymes with beer.
Seems like it should have more syllables than that
from the way it is spelled.
Welcome to the Dispatch podcast.
We are here to talk about artificial intelligence and artificial intelligence policy.
And I have a question that may sound really very elementary to start with, but I think it's a worthwhile question, which is, how do we actually define what AI is?
I mean, we're talking about technologies that do jobs that people used to do. Is an automatic transmission an AI? But of course, we don't really think of it that way.
So somewhere between Skynet, people making dopey videos on YouTube about Most Interesting
Man in the World commercials, and the nudge I get from my Corolla when I'm drifting
out of my lane, that looks like a very large spectrum of technology we're talking
about there. So when we're talking about AI regulation, how do we know what we're regulating?
Well, that's a great question, Kevin. And ironically enough, a couple years back, the Government
Accountability Office studied this issue
and found that there was no consensus among the experts on how to define artificial intelligence.
And in fact, in various state and federal bills that have already attempted to regulate artificial
intelligence, we have seen dozens of conflicting definitions.
That shouldn't be necessarily all that hard to believe because artificial intelligence is probably
the most general of all general purpose technologies to come along in modern times.
It touches every aspect of our lives, our economy, our society.
And so defining it has proven very, very hard, including to the people who are experts in the field.
Generally speaking, the way I think about it is it's the application of knowledge to tasks by machines in such a way that it mimics, or maybe in some cases exceeds, human reasoning.
But that's a clunky definition.
And everybody's got a different favorite one.
There are at least three different definitions already written into federal law.
And how do they differ from one another?
Well, the differences are mostly at the margins, technical ones, and not necessarily
radical. But in some cases, it's the scope of what they try to incorporate in the
definition.
One will run three sentences versus two.
But the bottom line is that this is a term of contention.
It's something that has proven to be very, very difficult to nail down exactly what
we mean by it.
And I imagine there must be some people in the marketplace who would like to define this
in such a way as to exclude their products,
so that they may escape the iron fist of California regulators and federal regulators and Oklahoma
regulators. Well, Kevin, I know you're a student of history and you know a lot about how policy
works and that's exactly right. I mean, the reality is I've spent 34 years covering emerging
technology policy now and have witnessed various industries and companies and other players
try to throw each other under the bus using definitional differences, just basically saying,
I'm not a deployer, I'm a developer. I'm not a distributor, I'm an
integrator. And these legal terms of art, all four of them I just mentioned, are actually in
artificial intelligence bills right now at the state and federal level, leading to these games
where people try to rig the system to, again, throw somebody else under the bus and cover their
own butts. And so unfortunately, we know a lot from the old analog era that this is the way
traditional information technology markets have often worked. You know, broadcasters say,
you know, we're not the problem. Cable says, we're not the problem. It's satellite that's
the problem. Satellite says, we're not the problem. It's this new internet thing. And so on and so on and so on. And that's an ugly but unfortunately real history in America where once you have a lot of embedded regulation on the books, people attempt to game it to gain favor and then to create what some people call sort of a regulatory moat around their business model or their sector. And a lot of that is already playing out.
Well, speaking of history, let's keep on the historical perspective for a second.
A lot of the argument we hear about AI relies on analogies that are Cold War analogies.
You know, it's the arms race. It's the space race. People talk about Sputnik moments with, you know, Chinese AI and that sort of thing.
Are these good analogies? Should we be talking about this issue in a different way?
Well, there is some applicability to that. When the Chinese launched their DeepSeek application earlier this year, a lot of people referred to that as a Sputnik moment for the United States. It certainly
was a surprising moment, because the Chinese proved that they were able to create a world-class
large language model at a very low cost using less compute power than American technology companies
had utilized to create theirs. It was certainly a wake-up call in that regard. It had some
historic analogy to what had happened with the space race. That being said, AI is a lot more
complicated. I mean, space was complicated enough, but AI is so much more multi-dimensional
that you have to be careful with your Cold War analogies and other types of technology,
history analogies. We hear a lot, for example, about like, well, maybe we need...
I think, by the way, space is probably literally more multidimensional than AI. Well, there are
three dimensions in space, yes. Yeah, fair point. A little language policing. Go ahead, please.
Sorry. Absolutely. So the point I was making, though, is that AI,
as a general purpose technology with aspects, you know, covering so many different fields
is ultimately leading to a lot of different debates.
But the bottom line is that it's more complicated than traditional analog era,
Cold War era debates.
We hear, for example, a lot about, like, well, what does, say, nuclear deterrence
teach us about, you know, AI regulation?
I think that's of limited value.
You're talking about, you know, nuclear technology being, you know, radioactive materials
and uranium and things that are quite dangerous by nature, whereas AI is built on data,
on information and computation, and trying to control a thing like uranium or nuclear missiles
that come out of that process is different than trying to control the code that goes into
an algorithmic or computational system. So you have to be very, very careful about these
historical analogies. That actually raises a question I was going to get to later, but we can
get to it now. So there are two parts to AI, of course. There is the code, as you talk about,
and there's also the matter of chips and hardware.
Do we need essentially separate regulatory approaches for the equipment end of things versus
the information end of things?
That's the way some people would like to regulate it.
Some people want to regulate both and do it very aggressively.
But you're making a good distinction between the hardware and software side of things.
And it certainly shows how complicated regulating artificial intelligence can be.
But a lot of people view the physical systems as a choke point that they can more easily
regulate than the underlying code. For example, what would be the physical systems? Well, they would be the
large data centers where we house the array of chips that basically create the computational
capacity to power these large language models and other algorithmic systems. We could also control
the exports of those sorts of things or the machinery that goes into creating them. And that's been
proposed. And that's a huge debate right now in this administration as it was in the previous one.
We could go further and try to regulate other types of computational systems, physical systems.
But at some point, that's very, very challenging, and it's equally challenging, if not more so,
when you try to regulate the underlying data that goes into these models.
But I could cite numerous regulations that try to cover all of that and much more.
There's just so many layers now to AI policy proposals.
Yeah, and to maybe close out the historical analogy issue.
As it turned out, space competency and nuclear power, you know, the two big Cold War issues from the 1950s and 1960s, ended up being economically important, but economic competition wasn't really the central matter there. With AI, it seems that economic competition, particularly with the Chinese, is much more central to the argument. And I wonder if we might not, well, first of all, I'd like you to address that just as an issue. But then I wonder if we might not look back 20 years from now and think
that we were over-emphasizing the issue of economic competition and under-emphasizing how
disruptive this could be in military, intelligence, and mischief-making contexts?
No, it's a great point. The two are very closely interrelated, and the reality is that
global economic strength and competitiveness has a strong bearing on geopolitical strategic advantage.
And a lot of scholars now identify that and understand that part of this so-called AI race with China
is not just about economic might, it's about also geopolitical strength because these general
purpose technologies, if you perfect them to do one thing in the economic realm, they're going
to be perfected to do another thing in the military realm. It's a dual-use technology.
But then there's an added layer to it, Kevin, which is that basically this is also about
values. These technological systems come with certain types of values embedded. The Chinese
tend to embed elements of control and censorship within their systems. We've learned this from
what they've done in the realm of telecommunications. They have
programs that the CCP has set up, like the Belt and Road
Initiative and the Digital Silk Road Initiative, that are meant to export not just their products,
but their values. And their belief is that if they can support the broad-based diffusion of the
Chinese AI stack, the full stack of all the hardware, the software, and everything in between,
then they can essentially spread their influence across the globe. And so that's not just economics.
that's geopolitical strategic advantage that has a strong bearing on our national security.
And this is what makes it different than the old days, where the Soviets were, of course,
trying to spread their influence in different ways, but doing it in a more limited capacity
and maybe arming their allies with one part of that.
But now the Chinese say, we're just going to sell our products everywhere.
And then we're going to make people dependent upon us.
And then that ends up creating a different cultural frame globally because all of a sudden
America's products become less
important. And, you know, I use a controversial analogy here, which angers some of my
conservative friends. But, you know, during the Cold War period, one of America's greatest
advantages was Hollywood and popular culture. We were able to spread Americanism across the
globe just by the sheer popularity of our cinema and our television and our radio and our music.
And that gave us some important advantages and built new allies who wanted to be more like
America. Well, you can imagine if the roles had been reversed and, like, somehow
the Soviets had come out on top of that with their propaganda and their efforts.
It seems silly to even think about. But with China, this is a more legitimate concern.
Their products are already widely diffused, including in the United States, and come with certain
embedded values or dangers in them. So this is why the stakes are major in the debate about AI.
Yeah, I think the obvious counterexample there, maybe, is the emergence of the Internet,
which was very much an American-driven phenomenon, and came with all sorts of implicit American values about openness, transparency, freedom of communication, those sorts of things.
I don't think the United States really was very programmatic about seeing to it that it spread that way.
It seems that it was just, as you mentioned, because those Soviet sitcoms weren't all that great and nobody wanted to buy Russian cars, that
the need or desire to interface with the United States, not only economically through markets but also culturally, led to
a diffusion of American values without a program for doing so.
It just kind of came about naturally and organically.
Yeah.
And I think that is not something you can count on to necessarily happen again.
We're in a very different era today than we were in the immediate post-Cold War years.
And my reading of your work suggests that you favor a generally market-based approach
rather than a stronger and more, I suppose, activist and deliberate kind of federal
hand. In the pieces I've read by you anyway, and correct me if I'm wrong here, I'd say
you're giving the Trump administration sort of a solid B, kind of a "good, but..." in terms of its
regulatory approach. And of course, the other 20th century issue that is still with us is that
Congress won't do its job, and we'll get to that later. But tell me what you think about the way
the Trump administration has proceeded so far, its AI policy in these executive orders and
such things, some successes and also some caveats, I suppose. Absolutely. I'd give it more like a
solid B plus, maybe even an A-minus. I think they're doing great work so far for the most part
with a couple of notable exceptions. I've written about this in a series of short essays for the
R Street Institute where I work as well as in a longer new study I did for the Civitas Institute
called Defending Technological Dynamism and the Freedom to Innovate in the Age of AI. And the story
that I tell in those papers and that study about what's happening in the Trump administration's
early months is that they basically are trying to undo the regime, or the vision,
that the Biden administration had crafted over the last four years, which was very much rooted in
sort of fear and loathing about the new world of AI and computation.
The Biden administration had one of the longest executive orders in American history, on
AI or on anything else. It was 111 pages long, and in it they were essentially
trying to write their own bill for how to regulate AI. And then they had a big framing document
called the AI Bill of Rights. And they basically cast AI in very sort of nefarious schemer terms like,
oh, it's going to destroy our rights, our jobs, our lives, our liberties, you know, and, you know,
it's going to discriminate against everybody. And so they actively pushed for a lot of unilateral
top-down types of control on the AI world. And they did it by basically sort of thumbing their
noses at Congress and saying, well, you're not going to do anything. So we are. And of course,
there is a real problem with Congress not doing its job on a lot of things these days.
But the reality is that was not a good excuse for the Biden administration just to run wild.
Enter Trump. And Trump during the election only promised one thing about tech and AI.
And he said, I'm going to abolish that Biden executive order. And in the first three days in office, he did it.
And within a couple of days after that, he introduced a new executive order. And then a series of
other executive orders and speeches came from him, the vice president, and other
people in his Office of Science and Technology Policy that more fully embraced AI opportunity
and AI's possible growth opportunities and benefits for society.
And so that was a very big change, almost a paradigmatic change, in what I call the innovation
culture approach in our government: are we going to put a red light in front of a new
emerging technology, or are we going to give it more of a green light?
And so the Trump administration has spent its first six months basically doing that,
upending and getting rid of the old Biden regime and then trying to establish a new framework in
its place to counter not only what Biden was doing, but also, as we can get into, some of what's
happening overseas in Europe, and then also within our own states, especially some large blue states
that are trying to aggressively regulate AI. So that's where we stand today. And I think the new so-called
AI action plan that was just issued a couple of weeks ago and the remarks the president gave were
very, very strong, very strong pro-innovation, pro-freedom kind of vision for this emerging
technology. Again, tethered back to what we just discussed, Kevin, about what it means for geopolitical
strength and national security. I think when you talk to people, especially on the Hill, about,
you know, why care about AI, the first thing on their lips is China, China, China. Then they get
eventually to like potential other benefits for society, especially for like public health and other
sectors and other things. But really, I think there's a lot of understanding now that like,
this is no longer a game. This is the most crucial technology, general purpose technology of our
lifetimes. And it's important that America leads on it. And the Trump administration has
reoriented the ship in the right direction, in my opinion. You know, the conversation about
China, I think, is an interesting one. The Biden administration, as I'm sure you noticed at the time,
had a very kind of New Deal, Great Society orientation. They were very strong
on assumptions rooted in industrial policy, that sort of thing, economic steering on the part
of the federal government. And you'd talk to them about China, and you'd get a little palaver
about human rights, democracy, militarizing the South China Sea and all that. And you'd say,
what do you want to do about that stuff? And you really wouldn't ever get an answer.
The thing that really got them upset was, well, they're stealing our jobs in Ohio. And they're
stealing jobs in North Carolina. And they're stealing jobs
from, you know, Massachusetts, wherever.
And I worry a little bit about that element in the Trump administration as well,
where there are a number of people in his administration who have been very hostile to things like
automation, certainly hostile to globalization per se, a great deal of hostility toward trade.
And I worry that this "we must conserve jobs first and foremost, above all other priorities"
kind of attitude will also shape their attitude toward the regulation
of technology, especially since, you know, I mean, they won't be around, we hope they won't be
around 10 years from now, although who knows, it is Donald Trump we're talking about. But whoever is
making these decisions in 10 years, if we've seen AI wipe out a whole bunch of jobs between now
and then, particularly jobs for that very politically influential and sensitive slice of the population
who are college-educated would-be suburban professionals who tend to vote and make campaign donations
and have kids in school and that sort of thing,
who seem likely to be the ones who are most radically affected economically by this,
although between us I'm sort of an optimist about the employment end of things.
I don't really worry too much about that.
But it's a big worry in the political conversation.
Not everyone sees the world the way I do.
So do you worry that the prioritization of make work,
of the jobs numbers above all other things will warp this conversation?
Kevin, your concerns are well-founded.
And I think there's even more to it than what you laid out.
I mean, obviously, for both the left and the right, concern about what AI and automation can mean for jobs and the future of professions and skills is a palpable one.
Always has been.
This goes back to the time of the Luddites, right?
I mean, we always have perpetual concerns about how technology and automation will undermine jobs, employment, skill sets, whatever.
That's still with us.
It's a bipartisan concern.
It's a question of what are you going to do about it.
And we can get into that, because it's controversial at the margins, with some of the proposals that are out there: robot taxes
and limits on automation.
We saw debates last year in the two big union strikes, right?
Dockworkers and Hollywood.
What was front and center?
AI and automation.
You can expect more of that,
and you can even expect some Republicans like Josh Hawley
and others to cross those lines and say,
hey, we're with these guys.
We need to have limits or something.
But that's not it.
There are other reasons you should be concerned.
Number one, AI is becoming part of the broader-based sort of trade war
and debates about re-industrialization and reshoring.
And there's like an entire question about like,
well, do we have to make everything here domestically? Can we trade with foreign partners?
A big part of the debate that the Trump administration has now taken over from the Biden administration,
where both of them were a little bit internally confused, is export controls.
Like, what should America's leading innovators be allowed to sell, to whom, and on what grounds?
The Trump administration got rid of Biden's old executive orders and other types of export control rules,
only to replace them with a set of rules that are somewhat more confusing in some ways,
including one-off deals with folks in the Middle East, like Saudi Arabia, and now even with China to sell Nvidia chips, while also trying to impose restrictions on allied countries so that they cannot get these things.
So there's all of that layering into this on top of everything else.
And then we haven't even mentioned what lies beyond the realm of economics: the social concerns about AI and about traditional digital technology that are playing into this, with a lot of folks, including many conservatives, now wondering,
like, well, is AI going to be good for the family and good for children and good for, like, society more broadly?
So you have a toxic brew there of different types of concerns and forces all coming at AI with their long knives and saying, wait a minute, timeout, we need guardrails.
We need preemptive restrictions, whether for jobs purposes or trading purposes or, you know, election interference or, you know, whatever it might be, child safety, copyright, which we haven't even mentioned.
There are just so many layers where you can imagine people coming out of the woodwork and saying stop, stop progress right here.
Let's talk a little bit about that
because you've written a good deal about your involvement in this debate on the kind of
interfactional issues on the right, where you've got, you know, some people on the right
who see the world more the way I do, who are kind of, you know, trade and innovation-oriented
libertarian types. You've got other folks who are really worried about this technology, as you mentioned
for its effect on the family. And this debate has played out in a very, you know, kind of public
and informal way in terms of, you know, people writing essays and responding to them, you know,
the way debate's actually supposed to work. I was rather pleased to see some of that. It almost made me feel nostalgic for older times, though I guess that's the only thing you can really feel nostalgic for. But do you want to maybe speak a little bit more in detail about what that debate has looked like, who some of the participants have been, and what the major points of view are? Yeah, I'd be happy to. So obviously, we find ourselves in an age of political realignment. And there's some interesting things happening on both the left and the right with regards to technology and innovation.
The left's having a rethink about what we mean by abundance and whether we should restrict it.
And, you know, can we embrace technology innovators or should we continue to lock them down and
license them and permit the living hell out of them?
You know, and that's a whole fight going on for the soul of the left in terms of where they think.
It's been so much fun to watch people like Ezra Klein figure out that if you don't build houses,
there won't be any houses.
Yeah, if you don't build anything, right?
I mean, it's like a big part of what they're saying is just like, let's build things again.
Let's open the door to opportunity to give people the chance to take some risks.
Let's stop being like Europe.
Let's be like what we used to be in America, like embracing risk and opportunity.
And I think a lot of the Democrats have unfortunately abandoned that.
I should just mention, Kevin, that in my opinion, the single most libertarian policy document
produced by any government or government official in my 34 years of covering technology policy
was the Clinton-Gore administration's 1997 statement about internet policy called the Framework
for Global Electronic Commerce, in which that administration, in a full-throated way,
embraced market-based opportunities and light-touch policy for the Internet, digital technology,
and electronic commerce. It provided the baseline, the basis, the foundation of policy to allow us
to beat Europe in the first round of the digital technology wars, and we trounced them.
You know, right now in the world today, 18 of the 25 largest digital technology companies in the
world are U.S.-headquartered. There are only two in Europe.
Of the world's eight trillion-dollar companies, seven
are American and none are European. You know, I could go on down the line with all the statistics
about just how we completely destroyed Europe on technology policy, but policy choices were at
the heart of that, and the policy choices were made primarily by Democrats. That's an important
story. And then something went very wrong, and they changed their attitude. They became more
skeptical of large technology companies. They started fearing things like discrimination. A lot of the sort
of woke concerns came into this, and they tried to layer on a whole bunch of new labor and
environmental regulations, and we know where that all went. But now we turn to the right. And on
the right, there are different but real schisms, largely attributable to things like, well,
what does technology mean for individuals, for families, for society, for values? And there has always
been a sliver of the conservative movement, sometimes it's a big sliver, sometimes it's a small one,
that is concerned about the implications of new technologies, not just on economics, like jobs
and so on and so forth, but it's particularly on the family unit or on societal values.
And that has now reasserted itself in the social media age with a vengeance. And a lot of
conservatives, especially those of a MAGA populist bent, are very strongly in support of trying to more
aggressively regulate social media platforms, and now, by extension, artificial intelligence
platforms, which are essentially the same platforms, to effectively say, we want them to be more
pro-family or pro-humanism or pro-whatever. And it's not really clear what that always means.
And so there's a major debate happening right now between these factions. It's happening even within
the Trump administration at its very highest levels. And some of those conservatives have even
come out against deregulatory efforts on this front on the grounds that, like, we can't trust
technology companies.
This is all just a big tech front to like keep markets free of all these meddlesome regulations.
We'll take our chances with states regulating the living hell out of AI because we trust tech
companies less.
And so there could be real unfortunate ramifications to that mindset taken to the extreme,
including with things like this administration seeming to favor a lot of the same antitrust
remedies that the Biden administration favored. And those large technology companies, whether you love
them or hate them, are our first line in, you know, the competition with China and in
engaging global markets. It's not the only part of the
story. But if we intentionally hobbled them, that's shooting ourselves in the foot as that
proverbial race gets underway. And I think there's a battle for the soul of the right going on right now
over this issue. And it's not really clear who will prevail. I'm obviously more on what's called
the tech right. And I spent 10 years at the Heritage Foundation. I created their digital technology
program in 1993, 94. But the reality is that most of what I say and think about these issues
has been rejected by Heritage now. And if I walked in the door there these days, I would not get
very far. I don't think I'm allowed to walk in the door. Yeah, I probably wouldn't be
allowed to walk in the door. No, no, I was just going to say something's profoundly changed,
and not only on this front, Kevin, but obviously on some other fronts, like trade.
When I worked at Heritage, the other thing I did there was trade and industrial
policy issues, and we were sort of very pro-free trade and very skeptical of industrial
policy. Heritage, like other conservative groups, has sort of switched on those things now and takes
a very different approach. And so this reflects a new, what I call a sort of a realignment
of the technology right. And it's not really clear which of these factions is going to ultimately
prevail. But again, it's happening at the highest levels of our government.
Yeah, there's an interesting kind of parallel here. I don't want to derail the conversation
too much, but it was just popping into my head that, you know, the conservative movement as such
or conservative tendency goes through this every so often. And there was a very similar period around
100 years ago of very intense anti-capitalism, anti-industrialism, and a kind of anti-internationalism
in the conservative world. But it was primarily an elite and literary phenomenon. It was led
by people like T.S. Eliot and J.R.R. Tolkien and Ezra Pound and people like that. There was,
how to put it, a sensibility that these kinds of new systems were putting decisions in the hands
of people we couldn't trust, right? And for Ezra Pound, it was Jews because he was a raging
anti-Semite and a crazy fascist. For T.S. Eliot, it was more quietly Jews, but, you know,
bankers and that sort of thing. He didn't enjoy his time working at the bank when he worked there.
You know, Tolkien was very concerned about kind of, you know, titans of industry and that sort of thing,
making short-term decisions rather than decisions in the long-term interests of people.
The people on the other side were largely people associated with that industrial working class
because they were experiencing this radical improvement in their standard of living.
And now we kind of see it the other way around where the elites, for lack of a better term,
tend to be more libertarian, more pro-technology, more globalist in their outlook,
more in favor of free trade and relations with other countries and more cosmopolitan in their views.
And the people who are more nationalistic and hostile to change and innovation tend to be
blue-collar communities or the kind of upper-middle class professionals associated with those
struggling blue-collar communities. Even though they have seen improvements in their standards of
living, obviously, in the last 30 years, they've been disappointed by it. It hasn't been what they
expected and what they wanted. And so it seems that the class polarities have been sort of
reversed there, don't you think? I mean, that's a great point, Kevin. I hope you write that up.
Maybe you already have, and I missed it. But I think this is an excellent topic about, like,
what is it that drives this fundamental hostility to various emerging technologies?
It's something I've studied throughout my life, and, you know, monocausal explanations just
don't work. There are often many complicated explanations, and they depend on
historical context, like what else is going on in the society or the country. So there are a number
of different factors that play into this. I mean, obviously going back to our conversation about
skepticism about globalism more broadly and, like, you know, the engagement of American companies on the
global stage. And, you know, conservatives are taking very different views
on how our government should act overseas in many different ways and how active our companies
should be on the global stage, and a skepticism about multinationals. And, you know, the most amazing
irony to me is that when I was cutting my teeth in the field of information technology policy
back in the 1980s, the fundamental concern of public policy was what we were going to do about
so-called information scarcity. We lived in a world that some people even decried as one of
information poverty. And now today when you ask people, what's the fundamental problem in the
field of information communications technology? They say, it's information overload. We've got too much.
We've just got so much stuff. And a lot of it's very confusing and it's uncertain how it works and you
wonder what it means for society. But I always try to remind people like, well, you know, what's the
story? You know, were we better off in an age of information poverty when we only had a handful of
stations or a couple of newspapers if you were lucky in your local community? And maybe you had a bookstore or
library. I didn't in my small Illinois town. So, you know, these things are very
historically contextual in terms of how they play out. But I think you're on to
something. I think in a strange way, I mean, we're going to get really off the rails here from
AI, but I wrote about this in one of my last books. It's almost like we've reached the top
of Maslow's pyramid and we're all fully self-actualizing now and we've got so much time to
self-actualize. We're like, oh my gosh, all these smartphones, all of these digital
technologies, all these websites, all these things. It's like we're
finding reasons to hate in a world of abundance.
But I always try to remind people, a world of abundance is better than a world of scarcity
on almost every dimension.
But it will have challenges and trade-offs.
I would never, ever deny that.
Poor societies fight over food and land, rich ones fight over status.
And now the whole world is rich, or at least most of the world is very rich now.
And so we've moved away from those traditional kinds of concerns.
Another kind of parallel, you know, left-right here is that question of values that we were just talking about, where you see, you know, Colorado, some blue states, a tendency toward wanting to take a heavy-handed approach to AI regulation on sort of social justice grounds, worried about things like racial discrimination and traditional kind of progressive concerns of that sort.
And then on the right, you've got Trump and people around him worried about woke AI.
as they put it. And the word woke just has to be said loud in the middle of the sentence always because that's just, it should always be written in all caps. And you had a funny observation about the Trump administration putting out a memo saying that they should take the words misinformation and climate change out of the risk management framework. And you said, you know, that would be fairly easy to do since they're not in there to begin with. But they're worried about this, you know, kind of imposing of a political point of view on these
things, and whether it's a healthy one or an unhealthy one. And on the one hand, I think it's
easy to exaggerate that within the domestic context of American politics. On the other hand,
as we started off saying at the beginning of our conversation, there are going to be some
fundamental assumptions about how this technology works and how it's deployed and what these
systems actually look like that do encompass within them and sort of embed radically different
points of view, the kind of, you know, Anglo-American traditional classical liberal point of view
versus the, you know, more surveillance and control-oriented Chinese point of view, or at least
Chinese Communist Party point of view. I don't want to tar the entire Chinese civilization with
the current leadership. How do we deal with one without opening ourselves up to an unproductive
and fruitless fight over the other? Yeah, this is a wonderful question, Kevin. Thanks for asking
it, because I have tremendous fun following this debate. The debate about so-called
woke AI is really interesting. And it's really based on a broader debate about what we mean by
algorithmic fairness, a term we hear a lot these days. Like, are these algorithmic systems fair?
Are they unbiased? And the funny thing is, both the left and the right have their own complaints
about AI and algorithmic systems being fundamentally unfair or biased. But they mean completely different things
by unfair and biased. And of course, President Trump and many in the MAGA movement
have said that like, oh, all of these social media sites and platforms and now AI, they're just
fundamentally biased against conservative viewpoints or traditional values. And therefore, we should
take some steps to potentially remedy that in some way, including now with an executive order that
basically creates a process by which you can investigate, like whether or not these systems that
the government might contract with are fair and unbiased, however that is defined. The problem is
at the exact same time they're doing that, the left is going around saying, oh, yeah, you're right,
they're biased all right, but that's because they're fundamentally biased against people of color and
they're inherently racist and they're just giant, you know, racism-generating
machines. And unfortunately, both sides have policy levers they want to pull to try to,
you know, counter that problem, that perceived problem, whether it's real or not. Let me just give you a
quick feel for how that's playing out. There are currently in the United States, Kevin, over a thousand
AI bills pending, actually closer to 1,100. That's an astonishing number of bills for any
technology policy issue or any policy issue period. Almost all of these bills, not all, but a huge
chunk of them, over 90 percent, are state and local bills. And 250 of them, fully a quarter of them,
are from just four blue states, Colorado, California, Illinois, and New York. And some of the
bills in those states and others are of the sort of lefty woke AI view that the Trump people fear,
which is saying, like, oh, yeah, we've got to do something preemptively about these potentially
racist algorithms and discriminatory AI systems. So let's have preemptive audits and algorithmic
impact assessments and we'll create new, like, AI regulatory bureaucracies that'll sort of bless
whether or not your algorithm can go off into the world, you know, if it's passed some preemptive
check. It's very European technocratic style regulation married up with sort of those
lefty sort of disparate-impact kind of concerns. It sounds like the California environmental impact
statement. Exactly right. So the California effect is alive and well here, but ironically, strangely
rather, it's being led by Colorado first. It's maybe the Denver effect before it's the Sacramento
effect. But California, of course, has their own bill like this. And there are a lot of other bills
like this that have been pending in the United States this year. Now, it's not the only type,
but that's a major type of bill that's pending. At the exact same time this is happening, Kevin,
You have certain red states like Texas and Florida and others saying, we are going to pursue, and some of them have already passed, and some of this has been litigated all the way to the Supreme Court, bills that would try to eradicate algorithmic or social media bias of an anti-conservative nature in large digital systems.
And they specifically are concerned about potential deplatforming of certain people or viewpoints.
And of course, that issue has been litigated all the way to the Supreme Court.
And so, you know, these conflicting worldviews of what we mean by algorithmic fairness, it's like,
it's a broad-based debate.
We can go back to the time of, like, Greek philosophy, like, what is fairness?
What is justice, right?
These are now playing out in the world of algorithms.
And you have people on both sides trying to inject the government into it to say, we'll allow some grand old wise man to decide, like, what it is that is fair.
Or maybe an entire community board.
A lot of the things that are proposed by the lefty states are like, we should have more community engagement.
Basically, that's NEPA. It's like NEPA for AI. Like, everybody comes to a town hall meeting and we
decide with a vote: Is this AI algorithm a good one? Is this one a fair one? It never gets
anywhere. Nothing will ever get done. It'll all be stopped. And so that's the fight we're having
right now. Both sides are trying to squeeze AI under their own worldview of what is a fair, unbiased,
you know, non-discriminatory algorithm. So that fight is going on right now in real time.
What's baffling to me about that fight in some ways is that the precedents we have of, you know, unfair content moderation decisions being made.
You know, Facebook trying to suppress the New York Post's Hunter Biden story, which is everyone's favorite example, or the, you know, various cases of deplatforming of people for having unpopular political views, that sort of thing.
These were human decisions.
These were not decisions that were driven by algorithms.
These were decisions driven by people like Mark Zuckerberg and people who work at companies like Facebook and Google.
I think that speaks a little bit to our earlier discussion of cultural alienation,
that there are people who are living in old Ohio factory towns who maybe don't know what Mark Zuckerberg stands for or really wants politically,
but they see the guy and think, he's not one of us, and I don't trust him.
And I don't want him making decisions on my behalf about what I can read and what I can't read.
And I think the assumption there is that, you know, the Zuckerberg sensibility being already partly cyborg to start with, of course, is relatively easy to transfer onto an actual machine instead of a person who's kind of an approximation of a machine.
Kevin, really quick, if I can just add on that point, it's an excellent point you're making, because I think obviously a lot of people are alienated by some of the personalities
at the top of the technology pyramid in this country, and some of their language and rhetoric and the crazy proposals they sometimes make themselves.
The policy decisions they make, including, like, how to regulate speech on their own platforms,
have backfired miserably.
The good news is that for the most part, there's been a course correction.
They've found ways to get back to a more sensible kind of moderate position of saying, like,
look, we want to keep smut and, like, child porn and other crap like that off of our systems.
But boy, we didn't need to do all that stuff. Like, you know, you remember the episode
with Google when it was basically spitting out images of the founding fathers as people of
color, and it just didn't have any historical reality to it, right? It was just trying to be woke.
It was trying to be fair and balanced. It was crazy. And Republicans have never forgiven them for
that kind of nonsense. But they corrected. That stuff doesn't happen as much now. It doesn't really
happen at all if you try to test these systems. So there is a course correction. Social pressures
and some political pressures can come down without having to regulate this stuff to death. Conversely,
looking at the left, the left ignores the fact that all of their concerns about disparate impact and,
like, civil rights violations and all this stuff are already fully addressable under all of our
civil rights laws. Every single state has a huge body of civil rights laws. And we also have
unfair and deceptive practices authority, and a variety of other consumer protections. We have recall
authority to pull products from market when they're proven unsafe. The reality is that AI and
algorithmic and computational systems are already regulated in all sorts of ways.
We don't need to have something centralized. We never had a federal computer commission
to deal with this sort of thing before, or, you know, a centralized kind of, like, computer agency.
We don't need that for AI. We need to rely on general law. And we need to focus on, like, how social forces
and other types of norms and the courts can work together in conjunction to probably come up
with a better governance model for this exciting new technology. At the same time, though, you do favor
a single federal standard in most cases rather than a federalist, you know,
variety of state standards. At least, at least a stronger hand from Congress to basically make sure
that interstate algorithmic commerce is somehow protected.
The first of my 10 boring books was on federalism and interstate commerce
in the technological age that I wrote for Heritage Foundation in 1998,
where I openly struggled with these questions about how to draw these lines
between the proper roles of the federal and state governments with regard to a lot of emerging technologies.
The problem is we're not dealing in a world of, like, wheat and pork bellies anymore, right?
I mean, algorithms kind of just move across borders seamlessly and across the globe,
and they don't stop between North and South Carolina and say, let's pay this toll now
and let's move to a different regulatory regime.
At least they shouldn't.
But a lot of states are moving in that direction.
And at the dawn of the digital age with the Internet, we didn't do it this way.
We had a national framework, and the states were just not really aggressively regulating at the time,
so we got lucky, and we were able to set a pretty clear, light-touch, pro-freedom-to-innovate
kind of vision for the world, and we kicked butt.
And it was a great vision.
And I think it was a constitutionally sensible one, because there are legitimate interstate commerce
concerns at stake here. There are also national policy considerations. If we're really
serious about confronting China and other potential hostile adversaries on this front,
well, we need to have a somewhat more coherent federal plan and federal strength, national
strength, to make sure that we can compete with them. So the states can't just be allowed
to run wild. Doesn't mean that they can't do anything. There's still plenty of things they can do.
But yes, the federal government does need to assert a role and, you know, I think that's fine.
I don't think there's anything wrong with, like, Congress setting a national framework for this, like they have for other national technologies, and then saying, within boundaries, the states can do these things of a more parochial character: AI and education, AI and elections, AI and law enforcement, those sorts of more parochial concerns. Let them do it, let them have it.
They can even do economic development stuff and all that jazz, even though I think it'll fail.
That's because it can be contained within their state, doesn't have interstate externalities or spillovers.
So I really do believe a national framework of some sort is essential, and for Congress to have its voice.
Those of you who are listening on audio and not watching the video should know that I'm
wearing an orange shirt, although what I'm saying next doesn't have anything to do with that.
But, you know, I think of the success of Austin, Texas as a technology center as being kind of a good
example of how to do it. And also I worry a little bit about the waning political influence of
Generation X, because we're the ones who really remembered the tech boom, the things that went
with it, the politics that kind of backed it up, why those things worked and why that was such a
prosperous time. You know, Austin ended up thriving because it was relatively cheap. It's not as cheap
anymore, but it used to be pretty cheap, and certainly cheap compared to California. There was no
state income tax, so it was easier to get people to move there because there were houses and no state
income taxes. You had a big university that was graduating a few thousand decent
engineers every year. So you had an educated workforce to pick from. And other than that,
you kind of got to do what you wanted to. You know, Michael Dell started his company in his dorm room
as a University of Texas student. You didn't have to, you know, go out and apply for 65 different
licenses and all that kind of thing. And even though what's going on in this particular iteration
of technological innovation is different from the early days of the Internet in a lot of ways,
different stakes, different kinds of players, a different global geopolitical
situation, I think some of those lessons really still do apply, which is that if you want
to have a system that produces the kind of prosperity relative to the rest of the world that we
really enjoyed at the end of the 20th century and in the beginning of 21st century,
which is the reason we pulled away economically so much from Western Europe, and one of the
reasons why we recovered from the financial crisis so much better and more readily than they
did. We have some precedents for what you can do in policy, and one of those things is what
you're talking about: the way the internet was regulated in its early days, and really the way
it continues to be regulated for the most part. And then basic things like get higher education
right, get housing right, get taxes right, have a generally light but intelligent, you know,
regulatory footprint. One of the things that I think is different, though, is that, or one of the
important things that I think is different, is that in the early days of kind of internet
entrepreneurship, it was very, very decentralized.
It was anybody can go start a company.
Whatever dumb service you have, you put the letters dot-com on the end of it,
and suddenly you're a Fortune 500 company.
I mean, that was a bad model for a while, but, you know, you sort of get what I mean.
Innovation was quite easy.
The upfront costs for starting a technology company in the 1990s were fairly low.
It's my impression, correct me if I'm wrong here, but the AI industry seems like it has an
inborn kind of economic tendency toward cartelization.
That it is inevitable that it will be dominated by a relatively small number of big firms, in that the amount of capital that's really needed to play in a serious way in this industry just means you're either going to be a state-backed entity of some kind or you're going to be, you know, a firm that's got the kind of resources and market cap of a Facebook, Google, or Apple, a big company like that.
Am I wrong about that? Am I right about that? Or is it, as I hope you're going to say, more complicated?
Well, it is more complicated, but let's be clear. What you're saying is right in one sense: there are certainly economies of scale and various other serious investment realities that necessitate some large players and large capital pools coming together to try to build the computational capacity it takes to compete at scale globally with China and the rest of the world on this front. In the early days of the internet, a lot of those small mom-and-pop players eventually consolidated and became larger companies, or they
started in garages like Google and others, and then they grew to be something bigger.
There was a guy selling books who decided he could sell other things as well.
That's right, right? And think about this, Kevin. If we were having our conversation just
even three years ago, would we have been discussing OpenAI, Anthropic, Nvidia? I could go on
down the list. The reality is what matters is Schumpeterian change, like the churn, right?
the fact that we get new players, new investments, new innovations.
And, you know, we don't have to count exactly how many firms are in every market,
but in the AI market, we've got a ton of competition right now.
But yes, it's true.
We're not going to have, like, you know, three dozen different, you know, giants that stalk the AI landscape.
There's going to probably be seven or eight that do most of the major investments.
But it doesn't mean you can't grow an ecosystem around them and within them.
I mean, look at what Meta is doing with Llama, a massive open-source
AI platform that allows countless other players to build on top of it. OpenAI just announced a new
type of system like that. And there are many, many other platforms and players out there. So I think
the story is a little bit more complicated, but again, economies of scale are real. I mean, just some
quick numbers for you, Kevin. Already this year, the seven largest U.S. AI firms have spent $102 billion
of capex on AI infrastructure and data centers. And we're on our way to spending $400
billion this year overall. Last year, in total, we only spent $100 billion, but even that
private-sector spending was 12x what China spent. So even if you wanted to concede that there is some
market power here or some large players, there are enormous benefits with that power. Now, if it
was a monopoly, there would be enormous problems. But we're not there. We're not there. And I could
go back to the old history that you and I remember, Kevin, of the early internet days when we
say, what are we going to do about AOL-Time Warner? Oh, my. Yahoo, the monopolist. Nokia and
Motorola's lock on the smartphone world. We'll never get around that. What happened? They're all
yesterday's news. They're in the hall of shame, right? We laugh at them now just like you did.
But at the time, entire forests were falling with all the paper that was being filed to try to
regulate those companies. I had to spend endless days of my life in 2001 and 2002 dealing with
AOL-Time Warner horror stories. It's like history just repeats, but now we've got even more
players and people still complain. One of my critical political memories of that era was the
Microsoft antitrust case and watching someone have to explain to Thomas Penfield Jackson,
the judge presiding in the case, who seems like a very smart and decent guy but was in no way
on the cutting edge of technology, like what a web browser was and, you know, and how a mouse
worked and that kind of thing. I remember thinking to myself, he's obviously a smart guy in his
field. I don't think he's a fool. I don't think he's a bad actor or anything like that.
But probably not the person who should be making these decisions or presiding over the process
by which these decisions are made. And think of how much faster the technology world is evolving
since the time that Judge Jackson was trying to deal with like Microsoft systems, right?
That seems downright primitive by comparison. You know, we're
old enough that we can remember the AT&T breakup, which actually made some sense because the government
was so close with AT&T and protected its monopoly. But people forget on the exact same day that
our Department of Justice pursued the case against AT&T, it dropped a 13-year antitrust investigation
against IBM. An antitrust investigation that Robert Bork once famously referred to in the New York
Times as the Department of Justice's antitrust Vietnam, because it was such a quagmire.
And 13 years of paperwork filed for nothing, only for what to happen?
In the mid-80s, the personal computer revolution.
That antitrust case was all about mainframe computing.
And the courts and the experts had entirely missed the boat on where markets were moving.
And so I'm excited about the fact that the AI world we live in today is a completely different one that none of us could have predicted 10 years ago, even five years ago.
And I don't know what's next with quantum computing on the horizon and neurotechnology, brain machine interfaces, all sorts of biotech revolution, robotics.
We don't know which direction these markets are going, but I'm confident we'll see
new players and new opportunities from them. So, top three policy things you'd like to see happen.
I've already mentioned one of them. Obviously, we need a national framework. We need Congress
to finally step up and do its job again. Congress has become a non-actor. All right, so there's no hope.
You make a good point. Your skepticism is well founded again, Kevin. I think the reality
is it's going to be hard for Congress to assert itself and have that national policy framework.
And we could end up having the state-by-state battle. We've already had it for many years on
things like privacy policy and cybersecurity. And it's had real
costs and consequences. It's hard to know the true opportunity cost of what we've lost,
but it's no doubt that there's a lot of small and mid-sized entrepreneurs that are struggling to
deal with compliance regimes on a state-by-state basis, especially when there's different
definitions of what we even mean by artificial intelligence and all the terms of art that go into
it. So a national framework would be really, really crucial. I think the Trump administration
and Congress have come to an agreement on some other important things, however. I mean,
there's been calls for greater investment in this area. And obviously, we have to be careful about
industrial policy kind of things. But our government's finally getting more serious about at least
making sure that AI is more widely available and diffuse across the globe and within our own
country. I think those are smart steps. I'd also like to see our federal government, however,
take a close look at exactly what all of that existing regulatory infrastructure we already have
out there is doing. We have 441 different federal agencies and departments right now, Kevin,
and every one of them is interested in getting their paws all over
AI. And so Congress needs to exercise more authority over this. The Trump administration is
asserting some of its authority to say like, hey, you got to keep things in check. We've already
seen the consequences of over-regulating some robotics and AI markets. Look at what the FAA has done
by keeping drones from becoming really a vibrant market in the United States. They lock that market
down. What happened? China and DJI moved in and took control. Our law enforcement agencies,
our firefighters, they buy Chinese drones today. Why? Because the American marketplace was regulated
into the ground. It literally couldn't get off the ground because of FAA regulation. A cautionary tale.
So FAA, FDA, EEOC, SEC, all these different alphabet soup agencies, they're already trying to regulate
AI in their own sectoral verticals, their silos. We need to make sure that those agencies,
the administrative state, doesn't run away with that power and quash innovation
while we're talking a big game about AI action plans and fighting China because it's what happens
on the ground in the swamp every day that matters the most to how innovation happens in the
world. So that would be my third priority: getting agencies under control and giving them
better guidance for what they do. Gotcha. You just reminded me of one of my favorite tech
regulation stories, from the early days of radio, when the federal government adopted this rule about
low-powered transmitters. You probably know this story. And it was kind of important for a while
because low-powered transmitters in private hands would interfere with local commercial
radio and that sort of thing. And then it just kind of sat there for a long time and it didn't really
become much of an issue. But there was this little lobby, you know, people saying that, you know,
we need to reform this rule and get rid of it. It doesn't really matter anymore. It's not doing
any good. And eventually someone got around to it. It got scrapped, I think, during the Reagan administration;
maybe it had been adopted in 1930 or something like that. But essentially, getting rid of that rule is
what made it possible for people to have mobile phones, and then Wi-Fi after that. And if that
rule had stayed in place, these markets just never would have been able to really come into existence.
I'm probably oversimplifying that story because it's from memory and I haven't
written about it in a couple of years. But I also think of the itty-bitty tax that you get if you
still have a landline and you have long-distance service on your landline and you're still paying an
emergency tax that was implemented to pay for the Spanish-American War, which is over and has been
over for some time. But these things have a way of sticking around. Is there anything I haven't asked
you about that you'd like to talk about, that we should talk about?
No, I think we covered a lot of ground there, Kevin. I mean, obviously there's all sorts of sectoral concerns here that we
would have to spend entire other podcasts talking about, issues ranging from national security to
AI and education to AI and law enforcement and, you know, government uses. It's very, very
complicated, multidimensional. I will just say this: America faces a really crucial moment.
A decision has to be made, as it does at the dawn of every technological age,
about what sort of policy vision we'll come up with for this new world that we're confronted with.
And we face a choice. Are we going to take all this innovation and these innovators and put
them in a regulatory cage? Or are we going to allow them to be born free? Are we going to allow them
to engage in a world of, you know, entrepreneurialism and permissionless innovation to go out
and change the world for the better? And, you know, every generation we're confronted with
a major decision like this for a major new sector of technology. And we're there right now for
AI. And these consequences will be with us for many, many decades to come if we get it wrong.
Adam Thierer, rhymes with beer. Thank you for your time today. And I hope we can continue this
conversation at some later date. Thanks so much. Thanks, Kevin. I enjoyed it.
Thank you.