Factually! with Adam Conover - Debunking AI’s “Existential Risk” with Arvind Narayanan and Sayash Kapoor
Episode Date: March 18, 2026
Will AI obliterate all of humanity? Will it destroy all of our jobs? There are so many questions swirling around the existential threat that AI poses, and even more completely hypothetical answers. This week, Adam brings back past guests Arvind Narayanan, professor of Computer Science at Princeton, and Princeton PhD student Sayash Kapoor to give expert perspective on our current moment. Their newest essay, AI as Normal Technology, is a rational and evidence-based exploration of AI that offers an alternative vision to the idea of AI as a potential superintelligence.
SUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconover
SEE ADAM ON TOUR: https://www.adamconover.net/tourdates/
SUBSCRIBE to and RATE Factually! on:
» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577
» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJ
About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.
» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1
» FOLLOW us on Twitter: http://twitter.com/headgum
» FOLLOW us on Instagram: https://instagram.com/headgum/
» FOLLOW us on TikTok: https://www.tiktok.com/@headgum
» Advertise on Factually! via Gumball.fm
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
This is a Headgum podcast.
Hey there, welcome to Factually.
I'm Adam Conover.
Thank you so much for joining me on the show again.
This week we're talking about AI,
but specifically, we're talking about the biggest claims
that are made about AI.
And those claims have gotten pretty fucking big.
We hear every day that AI is about to destroy the entire software industry,
or that, in fact, it's going to take every single job in the world
and we'll all be reduced to huddling on the breadline,
hoping the AI gods hand us a little UBI scrap so that we can eat.
After that, we're told, AI is going to become super intelligent and decide to destroy all
human life and turn us all into paperclips or whatever.
And, you know, so I think we have to ask, what are we supposed to do with these kinds of
predictions?
I mean, these are thought experiments at root.
There's someone looking at current trend lines and extrapolating them out based on what
they think is going to happen based on some formula that they've come up with. But I think we need to
remember that, you know, thought experiments are just that. They're thought experiments. They're a limited
human using their limited human capacities to predict what they think is going to happen.
They are not based on real world evidence about the world around us. They are not truly science.
They are on some level, literal science fiction flights of fancy. But these claims are made
so consistently by such smart people or such seemingly smart people that it can make these
outcomes seem more likely, more solid than they otherwise might be. In fact, folks in charge
of making policy for us and our lives are listening to these arguments. So it is critical
that we take a skeptical look at these claims. And this week on the show, I am bringing back
two of my favorite AI researchers to do exactly that. These two are some of the most clear-eyed
thinkers about the actual capabilities of this technology, what it can do and what it cannot do,
and what the future might actually look like taking into account, you know, that we are limited
humans trying to predict it, okay?
Now, before we get to that conversation, which I know you're going to love, I want to remind
you that if you want to support the show, head to patreon.com slash Adam Conover, five bucks a
month gets you every episode of the show ad-free.
We got a lot of other wonderful community features as well.
And if you want to come see me on the road doing stand-up comedy, well, March 23rd
and 21st, I'll be in Hartford, Connecticut.
April 2nd through 4th, I'll be in Sacramento, California.
April 10th through 12th, I'll be in La Jolla, California.
On April 18th, I am taping my new comedy special at the Den Theater in Chicago.
I'd love to see you there April 18th, and May 8th through 9th, I'll be in Kansas City, Missouri.
Head to AdamConover.net for tickets.
And now I would like to welcome this week's guest.
Arvind Narayanan is a professor of computer science at Princeton, and Sayash Kapoor is a Ph.D.
student there. We had them on in
2024 to discuss their book
AI Snake Oil. And since then,
they have written one of the most influential essays
about the future of AI called
AI as Normal Technology.
These two are some of the clearest
thinkers about AI.
They are so good at separating the truth
from the bullshit. I'm so excited to have
them back. Please welcome Arvind and Sayash.
Arvind and Sayash, thank you so much for being
on the show again. It's wonderful to have you back.
It's great to be back. Thanks for having us, Adam.
So I last had you guys on the show less than 18 months ago. How much has AI advanced, or not,
in that time? What has happened? And what notably hasn't happened that folks might have
predicted? So I think one of the main changes we've seen over the last 18 months is that
AI models have continued to get better at many different tasks. One of the tasks that has been
especially notable is writing code. So if I recall correctly in our last conversation, both of us
mentioned that we were enthusiastic early adopters of using generative AI for writing code.
I think now that has become the mainstream view.
More software engineers than not use AI tools to write the code that they sort of have to
do for the job.
And it is really quite fascinating how quickly these capabilities have progressed.
One thing that has not happened, though, is that back then many people predicted that
this type of capability improvement, the fact that AI systems are getting better, let's say,
at writing code, would lead to many people losing their jobs.
In particular, for example, if someone had predicted that AI systems would be so good at writing code in 18 months,
many people might have concluded that software engineers would also be out of a job.
That has basically not happened.
What has instead happened is that these productivity benefits that have accrued to software engineers
have basically become part and parcel of what it means to be a coder these days.
So if you are a programmer, you're almost expected to be much more productive than you were, let's say, 18 months ago.
You're required to use these tools.
And the demand for software has also gone up,
what we expect of software engineers,
let's say to produce over the course of a working day,
has also gone up.
But it hasn't notably led to any decreases in jobs for software engineers.
I want to add one nuance if I could.
A lot of what Syash is saying is happening in the software industry itself,
you know, companies like Google or Microsoft or whatever,
a lot of early adoption of AI for software engineering.
But if you look at the median software engineer,
I think they probably work in an industry like healthcare or airlines or, you know, finance, something that is regulated, much more risk-averse.
And we see a stark difference in the rate of adoption.
A lot of those folks have barely even heard of these new tools and are certainly not using them on a day-to-day basis.
Right.
The person programming the medical record system at my hospital hopefully isn't using Claude Code yet.
Like maybe we want them to pump the brakes a little bit
and figure out exactly how this technology works.
And yeah, it seems as though like this tech has changed some of the nature of what it means
to be a software engineer without necessarily changing the numbers.
I mean, you've seen some big companies, I think Salesforce and some others,
lay off a bunch of engineers and they say it's because of AI.
But then I also see commentary that says, well, also it's just they overhired when interest
rates were low and now they're doing a sort of normal pruning or, you know, you don't need AI for
companies to lay people off. They would do that anyway. It's a little bit hard to tell whether or not
there's been large job reductions in software engineering, in my view. But you say we
basically haven't seen it. Yeah, that's exactly right. So I think one thing that's also happened is
many software companies that are not AI companies have also seen some pessimism from the markets.
For example, I recently read about Jack Dorsey's company, Block,
laying off several thousand people,
and the reason given, again, was
generative AI.
But what they didn't mention
was that their stock prices had fallen
like I think 80% from their highs.
And so if your company is not doing so well financially,
you might want to take another look at your balance sheet,
and perhaps AI is a convenient excuse at that point
for laying off the thousands of workers that you overhired.
Well, thank you for that.
What I wanted to have you on to talk about first, though,
is some of the very long-range,
large predictions we've seen about what was going to happen with AI.
There's been this sort of like culture of thought experiment predictions about, you know,
as AI gets more advanced, it'll become exponentially more powerful.
It'll start improving itself.
And this is what leads to these sort of existential risk or x-risk hypotheses, where, you know,
people even claim to be able to calculate what the chances
are that an AI is going to run amok and turn us all into paper clips or start a nuclear war
or that sort of thing.
And it's a little bit difficult to separate, you know, the truth from the fiction, the truth
from the hypothetical.
You guys are better than almost anyone I've talked to at demystifying, you know, the scientific
or engineering side of AI.
What do you think about this pursuit, first of all,
of figuring out existential risk?
And what do you think about the specific predictions that we've seen over the last year or two?
I'm glad that people are thinking about catastrophic and existential risks.
I think that is something we should have research on.
But where Sayash and I depart is from the claims that these risks are so well justified and so imminent
that there should be policy responses to this.
For example, you know, in some extreme versions of the policy proposals,
only a few licensed companies should be able to build these really powerful AI systems
or we should pause AI development altogether.
All of those rely on really extrapolating dramatically from these thought experiments in a way
that's just not justified by the evidence.
Let me say something about these probabilities in particular.
So I work with some of the folks, for instance,
at a place called the Forecasting Research Institute,
who use these teams of expert forecasters
to generate forecasts of what might happen in the future due to AI.
And there's a range of things you can try to forecast.
And I like this forecasting activity.
I think it's useful.
That's why I spend a lot of my time helping them do it better.
But there's an important difference between a couple of different types of things you can
try to forecast.
Let's say, what's going to be the energy consumption of data centers in
2029? You know, it's important to make educated guesses, and a lot of these forecasters have a lot of
practice making educated extrapolations about the future so we can have policy responses to that.
When you go from that to this kind of x-risk forecasting, look, I read their 753-page report
that they put out a year or two ago, so you don't have to. And let me tell you, you know,
you can look at all the detailed reasons given by the expert forecasters, or you can have 12
people who are high get together in a room and come up with reasons for why AI might or might
not kill us. And I could put those two in front of you and you won't be able to tell which
is which. Wow. So it's literally things like, AI might decide that if it kills all humans,
global temperatures will decrease and that'll make data centers run faster. That's better for
AI. So it might decide to kill everybody. And so let's increase this number. Or,
AI might decide to colonize space instead of Earth. So let's decrease this other number. And then
we'll multiply those six numbers together and come up with a probability.
Like, okay, I mean, look, if that's what folks want to do, and, you know,
it's an interesting, amusing activity, maybe thought-provoking, I'm fine with that.
But the idea that that level of thought experiment is actually what should influence policy
and the response that society collectively takes to managing the risks and benefits of AI,
that to me is just totally unjustified.
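To make that concrete, here is a minimal sketch, in Python, of the multiply-the-guesses arithmetic Arvind is describing. Every number below is invented; the point is the method, not the values:

```python
# A caricature of chained x-risk estimation: multiply several guessed
# conditional probabilities together. Every number here is made up.
steps = {
    "AI reaches superhuman capability": 0.5,
    "its goals end up misaligned": 0.4,
    "it evades human oversight": 0.3,
    "it gains real-world power": 0.2,
    "our defenses fail": 0.5,
    "extinction follows": 0.5,
}

p_doom = 1.0
for claim, guess in steps.items():
    p_doom *= guess
print(f"'P(doom)' = {p_doom:.4f}")  # 0.0030, which looks precise

# But nudge each guess by a plausible amount and the "answer" swings
# by orders of magnitude, because none of the factors is measured.
low, high = 1.0, 1.0
for guess in steps.values():
    low *= max(guess - 0.15, 0.01)
    high *= min(guess + 0.15, 0.99)
print(f"under small nudges: {low:.5f} to {high:.4f}")  # ~0.00008 to ~0.0238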
Yeah, I mean, what would those numbers even be based on?
You've read the report in more detail than I have.
I mean, I've read various blog posts and papers about this,
certainly not the 753 pages.
Like, what do they claim to be making percentage-based judgments based upon?
Because to me, it seems very difficult.
Like, sure, "an AI in the future might decide to kill everybody", increase that by 1% or 2%,
or multiply by 5?
Like, how do you even begin to quantify such a thing?
I mean, I think that's the problem is you can't.
Like, I mean, all these probability estimates are usually based on a combination of like two or three different things.
One is you need to have a similar sample of things that have occurred in the past.
So for example, for the data center construction estimates, you might look at how data centers have been constructed in the last few years.
You might try to draw trend lines and so on.
But with AI killing us all, it's not
like we have this stable sample of AI systems trying to exterminate humanity that we can draw
conclusions from. So that goes out the window. One other thing you could do is come up with some sort of
deductive claim. You come up with, like, a theory that tells you what AI systems are likely to do
as they become more powerful, let's say. And you know, this is the kind of reasoning we use for things like
nuclear catastrophes. We have scientific reasoning to say that, okay, if we have nuclear experiments go wrong,
this is the probability that there will be a catastrophe, and we are conducting this many nuclear
experiments every year or every decade, and so this is the likelihood that something will go wrong.
We can even use this type of reasoning for things like asteroids leading to existential risks.
For example, we can look at the rate of past asteroids sort of coming close to the Earth or
near misses to the Earth, and we can look at how many of those occur every year and so on.
This is an example of the first kind of reasoning, which is called inductive reasoning.
For AI systems, I think neither of these methods work, right?
Because we don't have this stable repository of, like, past experiments that can tell us what the risks might be.
And we also don't have very credible deductive reasons for why AI systems might go wrong, which we can extrapolate from.
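For contrast, here is a minimal sketch of the inductive, base-rate style of estimate Sayash is describing, with invented asteroid numbers; the method only works because there is a historical sample to plug in, which is exactly what is missing for AI takeover:

```python
# Inductive risk estimation from a historical event rate (a simple
# Poisson model). All numbers are invented for illustration.
import math

observed_events = 3      # e.g. close asteroid approaches on record
observation_years = 120  # span of reliable observations

rate_per_year = observed_events / observation_years

# Probability of at least one event next year under a Poisson assumption.
p_next_year = 1 - math.exp(-rate_per_year)
print(f"estimated annual probability: {p_next_year:.3f}")  # ~0.025

# This works because past events come from the same process we're
# forecasting. For "AI exterminates humanity," the observed count is
# zero and the process is hypothetical, so there's no base rate at all.
```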
Now, I will say there is one category of risks, or reasoning, that we should take more seriously,
which is that not that AI systems on their own will become very powerful,
but that we'll start using these systems in more and more misguided ways.
For example, we've already seen AI systems being used in ways that really shatter what democratic norms we've had in the past.
For example, in the use of AI by the government for targeting people.
And we could imagine a world where a government recklessly deploys the AI system to, let's say, control its ballistic missiles,
or even worse, to control its nuclear arsenal or whatever, that really would be a cause for concern.
But notably, there's a cause for concern for democracies in general and not really specific to AI systems.
We should be concerned about this regardless of whether it's an AI system that pulls the plug,
that has the power even to do this in the first place, or an authoritarian despot.
And I think in both cases, the problem would be with where the power lies.
And I think that's something that the AI debate often misses by focusing exclusively on the technology alone.
Yeah, what I notice is that a lot of these predictions are all about drawing trend lines for the technology,
that the speed of AI improvement will grow and grow and such and such a result will happen.
And even if that were a reliable calculation or prediction or extrapolation, what it leaves out is people.
And you guys, I've read your piece on this, you point out that the most unpredictable thing is how people are going to use the technology.
That, for instance, the rise of AI psychosis, which we've talked about on this show, or at least
people sort of starting to behave in weird ways once they're using large language models,
you know, to a large degree, researchers were surprised by that happening because people are
fundamentally less predictable than technology is. And so is that what is going on, that, like,
it's, hey, sure, there's the technology, but, you know, if you're a forecaster, you're going to have a
much harder time predicting the behavior of individual
folks or large groups of people?
That's a great observation, and that's a big part of it.
And I think to do a better job of this, we have to get specific to the kind of risk we're talking
about.
If we lump together all kinds of risk into one bucket called AI risk and try to draw some
trend lines, we're just going to miss a lot of what's happening.
And this reminds me, you know, back in the creationism debates, there was a guy called
Duane Gish, he was a creationist.
And apparently what he would do, a rhetorical technique that he had, is he would throw out so many
arguments that by the time you addressed one or two of them, he would have 10 more. And his opponents
named it the Gish Gallop. And I've heard this. Yeah, a lot of what I see from the x-risk folks
reminds me of the Gish Gallop. It's like, oh, there's this risk and that risk and the third
risk and, you know, 10 other risks. Hold on. Let's talk specifically about these risks. And when
you look at them one by one, a very different picture emerges. So if I may, let me give you two or
three examples to illustrate how different our analysis might end up being. One risk that folks
have been worried about a lot is the use of AI for political misinformation and use that for
foreign election interference and so forth. And there were so many worries leading up to the
2024 elections, not just in the U.S., but I think like 60 countries around the world had elections.
And Sayash and I did an analysis of an actual database that Wired created of the way that AI was used
in these elections. And it turned out
that these fears did not come to fruition. And it turns out there's also a lot of good
theoretical reasoning on why we shouldn't have expected that those more alarmist predictions would
happen. And that's because the way that election misinformation works, and when we learned
this, it was both surprising, but also, oh, of course, is that it's targeted primarily at one's
own side because when political elites are leading on their supporters, those people have a much
suppressed degree of skepticism. And that's how a lot of misinformation operates. And for that,
you don't need something highly persuasive generated with, you know, a deep fake generator.
It could be literal video game footage being passed off as war footage. And given who it's
coming from, someone who you already support, people are so naive that they're willing to
amplify that. But if something's coming from the opposing side, people are going to be extremely
skeptical. And it doesn't matter if you have AI or something else. It just doesn't break through,
unlike a kind of hypodermic needle theory. Sorry, go ahead. Oh, I was just going to say that,
you know, the idea that, you know, deep fakes are going to be created. They're going to persuade
the populace one way or the other. If you look at the events of the last couple months in the
United States, we had footage of two different, you know, killings of citizens by
government officials, and looking at the same footage, people on different sides said,
oh, this woman was trying to kill the agent with her car.
And the other people said, no, she was trying to get away, right?
And I have my own view about which one of those is true.
And I think clearly one of them is true and one is not.
But I can't help but notice that half of the people who looked at the footage came to
a completely different conclusion.
And that's not AI-generated footage.
So you don't need it. It's one of those things where people will
behave in fucked up ways, regardless of whether or not you add AI. And so maybe AI is the less
important factor in what happens with political misinformation, perhaps. That's a perfect example.
Thank you. Really appreciate that. Let me very quickly give a couple of others, one where it is a real
risk, but the way in which it's being portrayed is maybe something we don't quite agree with, and that's in
cybersecurity. So it is true that advances in large language models are allowing companies
to find new vulnerabilities in software that could be exploited by hackers.
Now, the kind of AI risk view of this is that this could lead hackers to take over critical
infrastructure, et cetera.
Well, maybe, but it's not up to the technology.
It's up to us.
And many companies are behaving responsibly.
One of the things they're doing is they're withholding the release of their model so then
they can work with a variety of software companies to use these advanced capabilities to find
and fix their bugs before the model is released out to everybody.
So it's become, at least so far, a technology that helps defenders over attackers.
So we have a lot of agency in choosing the course of AI risk.
And then there's biorisk.
Again, I think the analysis there is different.
Sayash has a paper on AI and biorisk.
If he wants to talk about that, I'll leave it up to him.
Please.
So, yeah, so I think biorisk is another one where we've heard a lot of concerns going back
like maybe three or four years at this point that advanced
AI systems would allow rogue actors to essentially create the next pathogen. And there was this paper
that came out of MIT that even said that, you know, like once these models are openly
released, once they're released for anyone to use, that that could lead to the next pandemic.
Now, we looked into this claim in a little bit more depth and in the years since there have been
a lot of studies. And it turns out that all of the people writing these studies essentially were
on to something in that language models do allow people to get information about bioweapons.
But they did not realize that that same information was actually also available on public platforms like Wikipedia.
And so when that's the equation, when you have language models able to sort of tell people things that are already publicly documented,
I don't think that'll lead to as much of a risk.
What we really need to do is to compare what happens if people have access to the internet,
and then they also have access to this language model on top of it.
And so then in the last year or so, we've seen multiple large-scale studies come out of this,
which have shown that essentially there's no significant difference.
between these two. Adding a language model to the mix does not make it easier for people
to create bioweapons, to get information about bioweapons. And even if it did, even if we were
concerned about this risk, the best response might be to shore up a completely different
part of the equation: not to restrict AI systems, but rather to shore up our ability, for
example, to screen the creation of pathogens. There are so many concerns around tabletop DNA
synthesizers. That's one technology that makes it easy for people to manufacture
pathogens, let's say, in a lab.
And as of now, the controls that we have on these types of technologies
are much poorer than they could be if we, let's say, even adopted AI to improve
what tools could be built.
And so that's the kind of solution that is left out when we look at it purely as an AI
problem.
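As an aside, the "uplift" studies Sayash mentions boil down to a two-group comparison, roughly like this sketch. The data here are randomly generated toys; the group labels and effect size are stand-ins, not the studies' actual numbers:

```python
# Sketch of an uplift comparison: does adding an LLM to internet access
# change task performance? All scores below are invented.
import random
from statistics import mean

random.seed(0)
internet_only = [random.gauss(0.42, 0.10) for _ in range(200)]
internet_plus_llm = [random.gauss(0.43, 0.10) for _ in range(200)]

uplift = mean(internet_plus_llm) - mean(internet_only)
print(f"mean uplift: {uplift:+.3f}")

# A real study then asks whether this difference is statistically
# distinguishable from zero; the large-scale studies Sayash describes
# found that it isn't.
```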
Folks, this episode is brought to you by Alma.
You know, a year from today, who do you want to be?
What version of yourself would you like to meet?
Do you want to feel less anxious or feel more like yourself?
Maybe your relationship is stronger or the grief feels smaller.
What if that thing that you've been secretly worried about just took up less space in your mind?
Well, the right therapist can help you get there and Alma can help you find them.
As I get older, time seems to move faster.
And while sure, sometimes that can be scary.
One thing I really appreciate is the perspective I get on my life in therapy.
Seeing how far I have come helps to make it so much easier to set long-term goals for myself.
I feel empowered to make change, real and meaningful,
change in my everyday life. With Alma, finding change can be within your reach too.
Alma has a directory of 20,000 therapists with different specialties, life experiences, and
identities, and 99% of them take insurance. You deserve to feel like that future version
of yourself. A year from today isn't that far away. So get started now at helloalma.com
slash factually. That's helloalma.com slash F-A-C-T-U-A-L-L-Y. You know how to spell factually,
but that's how you spell it.
Helloalma.com slash factually.
When your company is growing fast,
order fulfillment can make or break your success.
ShipStation's platform brings order management,
rate shopping, inventory and returns,
warehouse systems,
and comprehensive analytics all in one place,
saving customers 15 hours per week on fulfillment.
Shipstation compares rates across all major global carriers,
USPS, UPS, and FedEx,
including your own negotiated discount rates
to find you the best shipping option
on every order with discounts up to 90% off.
With ShipStation, everything you need to manage getting orders to customers is in one place.
Connect to over 200 sales channels; instead of five to seven disconnected tools, you've got one.
You know, my business is growing, and if I ever get to the point where I'm running my own
e-commerce operation, this is going to save me an incredible amount of time.
So take some of the tedious tasks of running your own business and put them on autopilot
with ShipStation.
Try ShipStation free for 60 days with full access to all features, no credit card needed.
Go to shipstation.com and use code FACTUALLY for 60 days free.
60 days gives you plenty of time to see exactly how much time and money you are saving on every shipment.
That's shipstation.com, code FACTUALLY. Shipstation.com, promo code FACTUALLY.
What about the existential risk arguments about AI itself becoming more and more
self-aware, super-powerful? The superintelligence argument, basically. Because, you know,
what you're describing, the bioweapons, the misinformation, okay, there are still human factors in here,
and you're talking about, hey, maybe the humans didn't need it or the humans are going to
behave in weird ways anyway. But what about AI itself becoming basically a superorganism
that we can no longer control? You have these arguments about, oh, will the AI be aligned with
human needs or will it, you know, make some decision that it wants to destroy humanity? And then,
of course, it being in control of all of these human systems that would allow it to destroy the world,
right? It needs to have power over the world as well. What do you make of these types of arguments?
Yeah, maybe let me start with a philosophical point. And then there's also more to say on the
specifics of alignment, AI being deceptive, and so on. And I'm sure Sayash will have a lot to say
about that as well. At a high level, we're skeptical about the extent to which superintelligence
is even a coherent concept. There's just this idea that if you built this galaxy brain, so to
speak, and with the way that data center construction is going, we're kind of on track to build
a galaxy brain. But there's the idea that as these systems become computationally more powerful,
there is inevitably a single dimension called intelligence at which they're going up.
At some point, they're going to surpass the human level of intelligence,
and they're going to keep shooting up past that point.
That model of thinking about intelligence makes a lot of sense for closed domains like chess,
and that is what happened in chess, right?
I want to say around 15 years ago, chess computers started becoming clearly better
than even human world champions.
And nowadays, the human world champion would have something like one in a million odds of beating a chess computer.
So that model is correct.
That is clear super intelligence or superhuman ability at playing chess.
But is the chess model accurate for other kinds of intelligence?
The kinds of intelligence that matter, the kinds of intelligence that give people power in the world,
the kinds of intelligence that affect whether or not we're going to lose control of AI systems.
And this is an area where we vehemently disagree with people
who think that superintelligence is going to be an existential risk. And that's because most real-world
tasks, we don't think are computationally bottlenecked. Human abilities are not limited because of
our biology. Our abilities are limited either because the task itself is inherently hard and we're
doing as well as it's possible for any intelligent entity to do at that task or it's bottlenecked
because we lack some knowledge about the real world.
And whatever that knowledge is, AI is also going to lack.
So let me give a couple of examples.
So one worry is that AI is going to be superhuman at having medical knowledge, for instance.
But that'll only happen,
not because AI is computationally powerful,
but if we allow AI to acquire that medical knowledge by doing experiments on people,
autonomous medical experiments.
Yeah.
Right?
That's a very real, you know, physical-world
bottleneck. It's a normative, moral, ethical, legal bottleneck. And it's not something, I hope,
we're going to be so quick to blow away just because we're impressed by the efficiency gains of the...
You've got to get people onto the autopsy table, right? Like at some point, you need a physical body.
You need someone to walk into the examination room for the research to be done. Exactly. Exactly.
And so very quickly, the other kind of bottleneck is there's a worry that, oh, AI is going to be
superhuman at writing so well that it's so persuasive that it will, you know,
it will trick people into giving up control of powerful systems like nuclear reactors or something
like that. Well, we don't think that's the case either, because, again, it's this
hypodermic needle model: the idea that there can be some piece of text that, just because it's written
by a superintelligent entity, is so powerful that it's going to compel someone to act
against their own interests. And there are some studies that are claiming to test AI ability
at persuasion, and they're useful studies. But what they're looking at is, can AI
persuade someone to give $10 to charity?
That's the kind of thing someone might do anyway without a lot of persuasion.
What they need to test is can AI actually persuade someone to act against their own financial
and legal interests?
And I'm not really aware of studies showing that AI can be superhuman or anything can be
superhuman at that because it's just inherently limited in the ability at which you can
persuade people to do those things.
So that's where our skepticism of this whole concept of intelligence as a one-dimensional
thing comes from.
It also, I also want to point out, when you say, you know, computers became better than humans at chess years ago.
It just reminds me of this constant argument that, you know, artists are going to be out of a job and we're going to, you know, just watch, you know, TV shows made by AI and et cetera.
Because I'm like, well, yeah, computers became better at chess than humans over a decade ago.
People still watch other people play chess.
Nobody gives a shit which computer wins. We don't sit around watching computers play chess.
That's not what the chess championships are.
The most famous chess players are famous in the chess world and people pay money to see them.
And so on some level, we still care about human ability more than we care about AI ability, just in terms of what our actual priorities are.
But also, like, you make a really good point about real world constraints.
I also wonder if when people imagine the AI superintelligence that's going to decide that,
you know, the world should be cooler and so let's kill off all the humans.
Isn't the AI that's being postulated in that example a different kind of thing than what we're
building?
I mean, what we're building right now are like large language models that respond to prompts
from people, right?
That's the predominant, you know, 95% of the time that is what people are talking about.
That's not one world-spanning intelligence that has control over, you know, every possible, you know,
lever of our society that's able to, you know, change global warming or whatever.
Like, it seems like they're almost postulating like an Isaac Asimov level artificial intelligence.
It seems so divorced from the actual technology.
Do you have that hunch as well?
I think that's exactly right.
And I think another thing that's almost postulated, that sort of sneaks in there in that
description, is that we would willingly hand over control of all of these important systems
to an AI, at any level of capability, let's say.
In some sense, I think, like, a lot of people have said that our views on this topic are too liberal,
that we want AI systems, I don't know, AI development, to have more leeway than
some of the people who think we're likely to face existential risks would allow.
But in some sense, I think this is a much more conservative argument.
What we are saying is that if we end up in a world where AI being aligned with our interests
is our only line of defense, we're already done for.
That's a world that we've already crossed the threshold where we've given over power to these systems to take actions in the real world, to take important actions such as, you know, controlling nuclear systems or controlling global temperatures and so on.
And regardless of what the technical capabilities of these AI systems are, regardless of whether we can in principle develop systems that might make these decisions for us, I think we're also making the normative argument that we shouldn't end up in a world where AI systems can make these decisions for us.
If we end up in this world, that's a really bad outcome for all of us involved.
How do you feel when you see headlines over the past couple weeks that, you know,
the U.S. military is using AI systems to strike targets in Iran?
And to me, it's a little bit unclear exactly how they are, have been using, you know,
Claude or OpenAI, and how they intend to use them in the future.
But, you know, given that you guys are deeper into this than I am,
what do you make of that?
Is that a first step to doing what you're talking about?
I mean, I definitely think it's a cause for concern.
I think it is concerning that, A, the public has very little awareness of what exactly these systems are being used for.
I mean, it's a very different world if, let's say, these systems are being used for, I don't know, gathering information on people or figuring out things about, like, a certain person from certain documents, as opposed to a world where these systems are being used to make important decisions in the chain of command, or make important decisions about
which areas will get the next strike.
And I think just the fact that we don't have information
about which of these two worlds we live in
is already concerning.
I think another cause for concern
is just the broader sort of backsliding
in terms of democratic norms.
So I think this sort of the fact that we don't have a lot of information
about how these systems are being used,
it's just a symptom of this broader problem
that Congress, for example, does not have oversight
over the executive anymore.
And you know, this is the type of thing
that potentially leads to broader catastrophic
concerns around AI becoming more plausible.
So we wrote this essay called AI as Normal Technology, where we said that, look, what we are
doing is describing the world we live in and making predictions about it, but we're also prescribing
certain things for what conditions need to hold, for AI to continue to be a normal technology.
And one of them is that we retain democratic oversight over how these systems are used.
And I think if we end up in a world where they are used without accountability, without more input into how these sorts of systems should be deployed, then I think that's a broader cause for concern for everyone.
Yeah. And I want to get into the normal technology argument a little bit more in a second. But first, so what you're describing is, you know, the cause for concern here is problems in our political system, right? That Congress doesn't have oversight over the executive. That could lead to greater problems over AI. That was already a
problem, even if, you know, Claude wasn't being used in bombing Iran, that would be a concern
about that bombing anyway. So again, it comes back to the behavior of people being, you know,
the primary cause for concern and the thing we need to get our arms around. And so before we move on
to that other paper of yours, which I really want to discuss, I'm curious if you have any view
as to why these thought experiment-based extrapolations have become so prevalent
in the world of AI research.
Because the people who are making these predictions are very smart people, largely.
They consider themselves to be very rational.
Some of them are part of a group that they call themselves the rationalists.
And they try to, you know, apply the best possible epistemology to their information gathering
and decision making.
They pride themselves upon that, et cetera.
And yet they are making what to me seem like very basic errors, you know, not taking
into account the unpredictability of people, having an overall sense of hubris about your
ability to make predictions, you know, sort of confusing a science fiction image with the reality
of what we're actually building. And so why do you think that this is happening? I mean,
sometimes I think it's like somehow benefits the AI industry to postulate a false science
fiction fear rather than focus on the real one. You know, if you're worried about the
super intelligent AI destroying humanity, maybe you're less worried about how they're collaborating
with the federal government to drop bombs or to enrich themselves or, et cetera.
Why do we see this sort of, you know, extreme level of basic error in these arguments?
So as much as I disagree with most of the folks making existential risk arguments, I kind of get
where they're coming from.
One, I think there is a very strong selection effect in what is the set of people who are working on AI.
So going back to my own decision way back in college more than 25 years ago to major in computer science,
it's because I thought one day maybe I could be part of this world changing technology.
I mean, I didn't buy the existential risk fears, but I did think that this could change everything.
And my views have toned down a little bit as I have matured and seen some of the barriers to adoption,
the limitations, downsides, biases, and all of those things.
But nonetheless, the belief that this technology is in some way exceptional is what drove
me to computer science.
And I think you're going to see a very strong concentration of folks who are self-selected
in this way, who are predisposed to think that this is unlike anything we've seen before.
I think that's the first thing.
And the second thing is among technologists, it's very, very common to extrapolate from
technical capabilities to, you know, what the effects are going to be. And like we've already
discussed, ignore or not really appreciate all the uncertainties that come from human behavior.
And it's that same thing that is leading to predictions of 50% job losses within the next few years.
Again, totally out of line with what more sophisticated economic models predict, and out of line with what
we're seeing so far already with, you know, the early indicators from software and other
industries. And maybe the third thing is a kind of naivete, perhaps, about how to go about policy
making. There are these decision-making frameworks such as expected utility. Oh, you know,
human extinction has a cost of basically negative infinity and, you know, what is the potential
benefit, et cetera? And if you do a little bit of arithmetic, you're going to conclude that if there
were some world authoritarian with the ability to pull a lever, the safe course of action is to go slowly
on AI. Okay. Well, you know, so far it kind of makes sense. We can dispute some of the probabilities,
as we have done. But what they often miss is that going from our world to a world where there is
some world authoritarian that can pull the lever and control the direction of AI, that already has an
incredible cost to democracy, civil liberties, so many other things they care about, you know, that we should
all care about. So I think those
are all some of the factors.
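For the record, here is a minimal sketch of the expected-utility arithmetic Arvind is gesturing at, with invented numbers, showing how an unbounded extinction cost swamps everything else in the calculation:

```python
# Naive expected-utility comparison of "proceed with AI" vs "pause".
# Every number is invented; only the structure matters.
P_DOOM = 1e-6        # someone's guessed probability of extinction
COST_DOOM = -1e15    # a stand-in for "basically negative infinity"
BENEFIT = 1e9        # guessed upside of rapid AI progress

eu_proceed = (1 - P_DOOM) * BENEFIT + P_DOOM * COST_DOOM
eu_pause = 0.0       # forgo both the upside and the risk

print(f"proceed: {eu_proceed:,.0f}   pause: {eu_pause:,.0f}")
# proceed comes out around -1,000,000: "pause" wins for ANY nonzero
# P_DOOM once COST_DOOM is allowed to grow without bound. What the
# comparison leaves out is the enormous cost of creating an authority
# powerful enough to enforce the pause in the first place.
```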
Right. We already don't live in that world.
And if we were to live in that world,
we'd already have a big problem, regardless of
what the AI was going to do.
Exactly.
It just seems so
irrational to me to have gone
down that road, though. I mean, the first time that I,
I've talked about this at the show before, the first time I
sort of encountered the AI existential risk argument
being taken seriously was when I was
following the charity evaluator GiveWell,
which is a group that was associated with the effective altruism movement.
This is about 10 years ago I was reading this before all the Sam Bankman-Fried stuff.
And they were just evaluating, hey, you know, this group that provides malaria bed nets
saves more lives than this group that provides clean water.
Here's a bunch of very rigorous research, right?
I was like, that's really cool.
I think that's a cool project.
And then I started looking in their materials.
And I was like, whoa, are they evaluating climate change?
No, they're not doing any work on climate change.
That's really interesting.
Oh, what are they working on over here?
Oh, they're working on AI existential risk mitigation.
And I was like, how did the same people who were doing this very rigorous evidence-based
research start worrying about thought experiment outcomes?
And I realized, oh, it's partially because their funding came from the big tech industry.
That like the people who founded this organization were basically funded by, you know,
Facebook dollars and, you know, tech billionaires who had become philanthropists,
who themselves were worried about this.
and they were sort of driven because of where they were socially into studying this.
But again, it shocked me that people who are so rigorous would then sort of when thinking about AI,
you know, go off on these flights of fancy.
And I guess AI is such a difficult area.
There are, there's so much that is unpredictable about it.
There's so much confusion about it.
And I talk to a lot of people
about it. You two are two of the only folks I can talk to who I feel like are sort of making clear
evidence-based, you know, scientific sense on what might happen. Is there a reason that like
overall America's sort of intellectual infrastructure is having trouble grappling with AI?
It seems to drive smart people crazy and I can't figure out why. And do you have any intuition
as to that? Everything you said so far makes sense. I'm just curious if you have any
thoughts. I mean, I guess one other reason, as you were talking about GiveWell and this focus on
rigor, one reason that comes to mind is that, you know, I think as a community of computer
scientists, we are pretty good with numbers. We feel comfortable when we are in the domain of
numbers. And I think we often forget that these numbers represent something true and unique
about the world. And it only makes sense to collapse this world into a set of numbers when there
is like this clear link between the world and what the numbers represent. That's certainly true,
perhaps for giving out malaria nets. That's true for other interventions that we could make in the
world. And what we've found with AI is that because we don't have this link between what will
happen in the real world and what AI will do to the world, that these numbers have become ends
in themselves. And I think that is what is leading a lot of people to first collapsing like this
rich diversity of what happens in the world to a number and then trusting this number too much. So even if
the rationale behind a lot of the numbers is, as I've said, the fact that, you know,
AI might decide to colonize space or whatever.
Once you've converted this world into a set of numbers, we tend to put too much weight
into the numbers themselves, rather than looking back at the link between what these numbers
represent.
I think I've seen it over and over again in computer science
as an intellectual endeavor: we focus on benchmark numbers, which is how well AI systems
perform at like a group of tasks, for example, software engineering tasks.
But once we've converted these tasks into a benchmark,
we then forget about what was involved in coming up with a benchmark.
We only look at the numbers going higher.
This is what all the AI companies label or brand themselves with when they release a new model.
And I think this focus on numerical outcomes as opposed to anything else that's happening
has both been for the better and for the worse of this community,
the better in the sense that we've been able to make rapid progress for things that can be quantified.
And it has made our intellectual culture worse in the sense that we've forgotten about everything that can't be quantified.
and what our numbers lose when we convert the real world into the set of numbers.
So, yes, that is maybe the best explanation of this that I've ever heard.
And it kind of explains a lot of what has gone wrong with intellectual life in America over the last couple years.
That, yeah, we use numbers to measure and describe the world,
but the number of numbers that we have access to is limited.
And we sort of forget, okay, I've
got some numbers. Oh, and I have a formula that shows the numbers will get bigger. Hey, what if that
goes on forever? What if the numbers get super big? What if it's like in a video game where I break
the game and my level gets so high I can defeat all the enemies, right? But we forget,
well, hold on a second. Numbers are our limited human way of describing the infinite variety
of the universe. And maybe there are limiting numbers we don't know about that haven't entered
into our calculations yet.
It's very easy once you've said,
okay, I have a three and I have a five and I have a ten
and I understand the relationship between these.
What if I extrapolate out?
I can draw a line that goes way up.
But like there's a lot of shit between you
and the other side of the line
that might be happening in reality
that you have not included in your calculations.
So it's a matter of like,
it's not just hubris.
It's like taking a reductive approach
to the real world
and not remembering that you did that in the first place.
It's not remembering the part where you reduced it down to a number.
And that maybe something else is going to complicate that is going to happen in the real world again in the future.
What a beautiful explanation.
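To put a toy example behind Adam's point about trend lines, here is a minimal sketch (all data invented) of how an exponential extrapolated from early data points runs away from a process that actually saturates:

```python
# Fit a naive exponential to the early part of an S-shaped (logistic)
# curve, then extrapolate. All data are invented for illustration.
import math

def logistic(t, cap=100.0, rate=0.8, mid=10.0):
    # A process with a hard ceiling (the "limiting numbers" Adam mentions).
    return cap / (1 + math.exp(-rate * (t - mid)))

# "Observed" early data: at this stage the curve still looks exponential.
early = [(t, logistic(t)) for t in range(1, 6)]
(t0, y0), (t1, y1) = early[0], early[-1]
growth = (y1 / y0) ** (1 / (t1 - t0))  # naive per-step growth factor

for t in (10, 15, 20):
    extrapolated = y0 * growth ** (t - t0)
    print(f"t={t:2d}  line-goes-up says {extrapolated:>9.1f}, reality says {logistic(t):5.1f}")

# The extrapolation climbs forever; the real curve hits a ceiling that
# the early data points never revealed.
```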
I want to make sure we talk about your essay AI as Normal Technology,
which you wrote after the last time we had you on the show.
And this has really taken off as a way to understand what's going on with AI
or it's gone sort of semi-viral in the community of people who talk about this.
Can you give me the basics of what that argument is?
Because I also think it's often misunderstood.
Yeah, definitely.
So here are some of the main points.
One, as Sayash already mentioned, it's not just a matter of predicting the future of AI.
We have a lot of agency in shaping that future.
And when people talk about agency, they're very often limiting themselves to the development
of these models and they throw up their hands.
Oh, these companies are so powerful.
What can we do?
How can we slow down the development of these models?
No, our position is it's not all about the models.
We take existing theories of how technology gets adopted and diffused into society,
and we specialize it to AI, and we elaborate upon those theories.
And there are four stages in our framework.
The first one is the models.
The second one is how those models get turned into useful products,
like Claude Code, which we talked about a little bit.
The third stage is early adopters starting to play with that technology.
And the fourth stage, and this is really
the longest-lasting, hardest stage, is organizations making adjustments to take advantage of this,
you know, new productivity; laws need to change in many cases to even allow this technology
to be adopted in various sectors; all of that messy stuff that needs to happen.
And our view is that each of these four stages has its own logic, has its own pace.
Just because capabilities and the models are advancing rapidly doesn't mean that, you know,
doctors are throwing away their workflows and their, you know, liability and their carefully
considered expertise and just throwing it all to ChatGPT. That's just not happening, despite some
of the claims one might hear otherwise, you know, in the media. And in these other stages, because we get
to control, for instance, the way in which people and companies who are deploying AI are going to
end up using it, we have lots of avenues to be able to push things in a better direction,
both in terms of safety.
So in the past, we've had lots of arms races with technology in various different sectors,
whether it's railroads, whether it's steamboats, et cetera.
But we have ways to push safety in a healthier direction, through regulation, by incentivizing
companies to stake their reputations on their safety record, that sort of thing.
That's one thing.
And secondly, when it comes to the labor effects, all of the other things we should worry about,
but also the productivity benefits,
those are also going to unfold over a period of a few decades.
They're not going to unfold over two years.
So there's a lot of work to do,
but we do think that we should approach it
with a spirit of cautious optimism
as opposed to panic,
thinking that the whole world is going to change in two years.
I'm sure I left some stuff out.
Maybe Sayash can fill me in.
Please.
Maybe one analogy that comes to mind
that sort of captures a lot of these arguments very well,
is that of self-driving cars.
So we've had prototypes of self-driving cars, like cars that could sort of turn around or accelerate or brake, for at least the last two decades.
DARPA organized this Grand Challenge back in 2005.
Lots of companies submitted prototypes.
One of those went on to become Waymo, which is the sort of pioneer in self-driving today.
But it wasn't until two decades of experimentation that we even got to the point where this technology could be useful in a small select handful of cities.
And that wasn't because these cars lacked the capabilities.
All of these cars could sort of turn around, accelerate, brake,
like go from point A to point B.
But rather, we needed to make these cars reliable enough
to operate autonomously.
We needed to figure out how to avoid edge cases.
We needed to roll them out first on a few miles,
then a few dozen miles, then on a few hundred miles and so on
before we got to the point where they became even slightly useful products in the real world.
And that is how we see a lot of AI taking shape.
over the next few years. Of course, we have models that are highly capable today already,
but to get to the point where we can deploy these models, especially into high-risk applications,
we'll need to do so much experimentation, we'll need to figure out what the edge cases are,
we'll need to figure out how to make these models safe enough to adopt. And, you know,
in self-driving cars, we've seen that if your product is not safe enough, you might even turn out
to be a market failure. We've seen this with Cruise, which was much less safe, which had a poorer
track record compared to Waymo, and is essentially out of business now.
So there are also these market incentives and regulations that will incentivize AI safety.
And I think we should continue to hold the line with policymakers and with these institutions
and keep demanding high levels of reliability and safety before they are deployed.
And I think that's a process that will unfold over like tens of years or decades rather
than just the next year or two.
And so what you're saying is that in that way, AI is similar to most technologies.
The technology is invented, and then it takes a number of decades to figure out how humans are going to use it: to use it in ways that are not deadly, that actually provide economic value, that humans actually enjoy, that fit into human life.
There's sort of a process of fitting in that has to happen.
And I understand that you make this argument because that's in contrast to the claims made by extreme AI boosters who say that, hey, this stuff is going to take over.
And in three years, everything is going to be different, which I continue to see constantly.
I mean, even just in the world of code, you see, oh, in two years, you know, all of the way that we write software is going to be completely utterly different.
No one is, you know, we're going to see mass layoffs, et cetera, et cetera.
you're saying that, no, this is, well, tell me how big the claim can be.
I mean, you mentioned railroads.
And that's an interesting one to me because like, railroads truly did transform the economy,
transformed the country.
They were a massively impactful technology.
You know, if you look at the history of 20th century and late 19th century America,
it really is the story of how railroads transformed the country.
And yet it took a while, right?
And regulation was involved,
and it was not like someone snapped their fingers
and suddenly railroads were running the country, right?
But there were also a lot of problems,
a lot of bad things that happened.
We had monopolization by large businesses.
We had a lot of death and destruction.
We had a lot of bad outcomes as well.
So is that what you're saying,
that, like, AI, maybe mid-case scenario,
is something like a
railroad? That's right. And there's, you know, an even more ambitious analogy, to the
Industrial Revolution, which we think is entirely plausible. We're not, we're not pushing back on
the long-term potential of AI. And that's something we try to make super clear in the essay. We do think
it's going to be transformative; exactly how transformative is to be figured out. It's really more about
the timescale. It's about the amount of agency that we have. And it's about, you know,
things that might go wrong in the meantime and how we can avoid that. And I think the Industrial
Revolution analogy is helpful because it helps us see a couple of things. One of the things that
happened in the first few decades of the Industrial Revolution was working conditions were horrendous
because of mass migration from towns into cities, overcrowded apartments, there were no labor
safety laws, so many people lost a limb or their lives, you know, 14-hour workdays,
child labor, so many horrible things.
The modern labor movement actually came out of that.
And what is going to be the analog for AI?
I think that needs to be figured out.
I don't think we have a great answer yet.
But on a more positive note,
I think it's also an example of how,
in some sense, everything could change
and yet largely stay the same.
Before the Industrial Revolution,
most labor was manual labor.
The idea that what we're doing here
could constitute work, that would have sounded ludicrous.
But because we've been able to mechanize so much manual labor,
the nature of what work means has shifted.
It's quite plausible that we'll have another similar shift with AI.
A lot of the cognitive work, the number crunching,
the day-to-day coding and software engineering,
perhaps even medical diagnosis, a lot of the work of lawyers,
will in the future be done by AI.
But nonetheless, that will be a world in which
people, professionals, still remain in control, because you need accountability, because
you need to deal with the unknown unknowns. Just like, just because we have, you know, cranes
and other construction equipment doesn't mean we let those machines operate autonomously.
We still need operators who are in charge and can take responsibility for the outcomes.
That is a similar future that we envision for a lot of cognitive occupations when it comes
to AI.
That's interesting. So, as opposed to a future in
which we need, you know, universal basic income or something like that because nobody is working
at all.
You postulate something a little bit more like the Industrial Revolution where the nature of work
changes, but we still need people to do lots of work, right?
We still need an economy full of people doing things because there's still stuff that humans
can do that technology can't do.
I mean, I guess I'll say that the Industrial Revolution still brought a lot of death
and destruction and a lot of bad outcomes.
And so in this way, AI might as well.
Yeah.
You think so?
Yeah, absolutely.
I mean, at least in the near term, that is what a lot of our work is focused on, is
avoiding the worst outcomes.
I think the good news is that we have so many more tools in our toolkit today than we did back when the Industrial Revolution came around. We have labor protections.
We know how we can hold companies accountable.
That's a big one.
Even if we don't always do the best job of actually holding them accountable. I think we know a lot more about just macroeconomics, what happens when
the share of money paid to capital versus labor changes and how we can keep that in check.
We have tools like the federal budget that we can use to really change the trajectory of where people invest. And a lot of our thinking recently has been on the question of how we can
use these tools appropriately to manage the transition. And I guess in some sense,
we're optimistic both because we think we have all of these tools, and if they're used well,
we can do a lot of good with them, but also because we don't have to make this decision
in like the next two months or next two years. I mean, there are these claims that whoever does
not have, let's say, enough capital holdings in the next three years or so will become
part of a permanent underclass of people without any labor power anymore. We completely reject
that notion. We think we have the time and the tools to do positive experiments, to see
how we can change these regulations,
how we can enforce new regulations
to make outcomes better for the public at large.
And, you know, frankly,
that's a lot of what takes up
our intellectual energy these days.
Okay, this is really interesting to me
because, you know,
you're pushing back on the most extreme claims
of AI boosters or the sort of maximal claim, right?
And you're arguing basically for kind of a middle ground.
Hey, this technology is going to be transformative
but on a sort of normal time frame
for transformative technologies
like railroads or the mechanized,
you know, production of goods or whatever.
And you say you have some optimism as a result.
I will say that I want to speak for,
I think, a lot of my audience,
which is probably more pessimistic about the technology in general
or just has a sort of negative emotional,
you know, viewpoint towards it, right?
A lot of folks who listen to this show, I think, are of a mind of like, this is, why are we building this stuff at all, right?
It all seems bad.
And I think a lot of that is based on the following. You guys are saying, hey, this is a normal technology.
We need to, you know, make sure that the right amount of the benefits are going towards labor rather than capital.
We need to make sure that the federal budget is properly written.
We need to, you know, constrain these companies to some degree. And I think a lot of people look around our society and go, well, hold on a
second. All of those things have been going in the wrong direction for the last 50 years. All of the
benefits of capitalism have been going towards capital, not towards labor. If you look at, you know,
the trend lines for, you know, wages of working people versus, you know, the top point one percent of
folks in this country. The federal government seems completely unable to do much of anything at all
other than wage war, like anything that requires the legislature is completely frozen and has been for decades.
And as far as our ability to rein in corporations who are making decisions that affect our democracy,
we've been completely unable to do that.
Again, apart from AI, right?
We've seen new railroad-sized monopolies form and control how our country works, you know, just based on the Internet as a technology.
So not to mention we haven't talked about environmental destruction, you know, the environmental
toll that the data centers take also seems to be something that we've been unable to regulate.
So I guess I'd ask like, you guys do seem to have some level of optimism.
Hey, all we have to do is regulate capitalism properly and AI will be fine.
Well, it doesn't seem like we're capable of doing that.
So don't you have concerns about the future of this technology, period?
because it, I think a lot of people's concern is that it's going to exacerbate hugely all of the
current problems of, you know, technology-driven capitalism that we've already been experiencing
over the last couple decades.
Like you said, there are all these problems. These are real problems. They've been happening
without AI. We're just not sure that putting the brakes on AI is going to get us out of these
problems or that we're doomed to repeat the path with AI that we've taken with other recent
technologies like the Internet and social media. I do think there is some learning that is going on.
Yes, it's true that Congress doesn't do much, but that doesn't reflect policy as a whole.
So, for instance, policymakers, who we consult with pretty often, did, in our view, look at
how late they were to address the harms of social media,
and for the most part,
they seem to be taking a different approach with AI.
In state legislatures,
which is where most of the action is going on,
more than 1,000 AI bills per year are being introduced.
A lot of them are passing.
State attorneys general are going after many companies
for many kinds of violations.
So those are all pockets of good news.
We can't be sure that everything is going to be
all roses and fairy tales, and that's not the point we're making.
But we are making the point that we have a lot of agency.
A lot of the alarmism around AI is coming from the claims that we're seeing from tech
leaders themselves about massive scale job losses.
And look, it's possible that this is part of some devious strategy that ultimately is in
the interest of the companies.
But right now, I think the line that tech leaders are taking is neither backed by evidence nor really in their own interest, because the AI backlash is really
powerful. And what it's leading to is a lot of protectionist regulation. For example, we're seeing
this in New York, where AI can't give people legal information or legal advice or that sort of thing.
And so what we're seeing now in our view is kind of a worst of both worlds. It is trying to
put the brakes on AI in a way that's actually not protecting what needs to be protected,
because the real threat is not massive job loss in two years.
And at the same time, it is actually preventing a lot of the benefits that could occur
because one of the things we're seeing in the legal space is that it's so expensive to hire a lawyer.
And in fact, if we had more reliable AI systems, they could help plug some of those gaps a little bit.
So the concern in our minds is not that policymakers would do nothing.
They are doing stuff.
They're doing a lot of stuff, but the concern is that policy is actually not informed by the best
evidence, and there are ways to do it much better. And so that's what we're trying to put our energy
towards. I'd love for you to make a bit of a positive case for me for why I should be
optimistic about AI. Or, not why I should be optimistic,
but tell me more about the benefits for society that you potentially see. Maybe it's because I'm getting
a little bit older, right? I'm in my 40s now. When I was younger, the internet was invented. I was
like, clearly this is a wonderful thing. I can talk to anybody from around the world. I can publish
whatever, right? I'm very excited about it. Now, I'll just say my emotional constitution. I look at
AI and I'm like, okay, great. Someone says, I can make a website so quickly now. And I'm kind of like,
yeah, all right. I mean, that's fine. You could make a website before. You could hire a web developer,
like big fucking deal, you know? To me,
I'm not seeing the ways that, you know, society is getting much better as a result of this
technology.
I see how people think they can make money off of it, right?
But I'm not like going, oh, there's like a big problem that I think could be solved.
And so what is, you know, when you guys say, hey, we shouldn't slow it down because we might
miss out on the benefits.
Just paint a little bit more of a picture for me of what those benefits could be because
I really believe that you guys have a clear view of how we should protect
people, right? So assuming that we're able to protect folks properly and put good policy in place,
what's the positive vision? Maybe I can take a stab at this. Please. And I'll do a very academic
thing of giving, like, a big-picture macroeconomic view, if you will, and then give a couple of
smaller examples of how I think it'll materialize. So if you look at people's attitudes towards
AI and whether they feel a general sense of optimism or pessimism, I think it's very much
dictated by where they live.
We are seeing in developing countries and countries that have high growth rates,
for example, in India and China,
people are a lot more optimistic about the impact of AI than they are here in the US.
Yes.
And I think part of that, of course, is just how we perceive technology.
I'm from India.
And so I can talk about how we've seen the country transform over the last few decades.
India has a GDP growth rate of about 6%,
which means that the country's economy doubles in size every 12 years.
And even my own lifetime,
I've seen the impacts that this general prosperity,
like just in terms of the macroeconomic factors,
has had on people's lives.
When I go back to India now,
I can't drive in my city anymore
because the optimal route for getting from point A to point B
has completely changed.
There are like 20 new highways that have come around.
It takes one-third the time to
get from place A to place B as it did when I was younger.
And I think there is this general spirit
that you can see the technology actually,
like impacting people's lives,
much more clearly. And so when we bring in AI or new technology into the picture, I think people are
much more likely to see it as sort of at least continuing the current state of economic growth.
And they see it as contributing to this broader prosperity, even if specific individual occupations
might feel threatened, for example, the IT services sector in India. If we come to the US,
I think the US is a much more mature country. It is well developed. For the last, I think, 200 years or so,
the country has had a growth rate of about 2%.
Now, this might seem like a small difference,
but what it means is that the doubling time of the economy
is now 35 years instead of 12 years.
So you need to wait 35 years to see the economy actually double in size.
And so the impact of technology is largely something that goes on in the background.
I felt it myself.
I've been here for the last five years.
And the route I take, like basically to work or any other place, has barely changed.
I still take I-95 and I drive down to D.C.
I still take the same route to my office every day.
And that really shows up in this sort of background sense of stagnation that many people feel.
And, you know, it really makes a qualitative difference as to how you approach technology.
And so if we come back to the specifics, I think in terms of the micro level,
I think the impact of AI might be very similar in, let's say, India and the US.
Both of these are countries where access to legal justice, for example, is something that's really lacking.
A lot of people, even in the U.S., lack access to basic legal help.
For example, if they're asked to vacate their property by a landlord,
they don't have the right language to respond to this request.
Don't have a lawyer.
And so many people do, in fact, end up vacating, even if it's an illegal request.
When you have sort of concerns around medicine, most people don't have access to a second opinion.
You can't just sort of send your documents to a doctor and get access to good medical advice.
And we are indeed getting to the point where, for these second-opinion-type tasks, AI is already good enough.
When our parents see their medical reports, I mean, I definitely send all of these reports to Claude or ChatGPT just to get a sense of whether the doctors are right.
And in some cases, we've had experiences where, you know, the doctors were wrong.
Someone prescribed my mom this thing called a beta blocker for high blood pressure, which is, like, a 30-year-old medicine.
And, you know, we've moved far beyond the point of needing to prescribe these drugs anymore. This is a very high-risk drug that you shouldn't prescribe for someone
with hypertension. And that's the sort of thing where we see people's lives improving in sort
of small day-to-day ways. I think there's also going to be bigger changes that will make
people a lot more productive. So sure, like today, someone can use AI to create their
own website. They could have hired a web developer to do it. But if all 30 million businesses in
the US wanted to hire a software engineer, it would just be too expensive. There's no way that
all of these companies would be able to get access to a good web developer.
What I think will happen is that AI will just make software engineers so much more productive
that instead of relying on, let's say, the hundred-odd companies that develop most of the
software used by all businesses in the US, every single business will be able to employ a software
engineer, just like they do a number of other occupations.
They have someone who does their finances or manages people.
They might have a software engineer because it's so cheap to do it, because software engineers
are so much more productive that a single software engineer can actually develop all of the tools
required for every single business. So that's the sort of like almost seismic shift we see taking
place. And notably this type of shift has happened before and has continuously happened.
For example, in the US alone, from I think the 1950s to 2025, 75% of the jobs that exist today
did not exist back in the 1950s. And because this change has happened slowly, it's not so visible.
But I think that's the change we envision with AI as well.
Thank you. That really illuminates so much, especially about the emotional, you know, valence that
people have towards AI. I think it matters. I think a lot about, you know, the sort of background
situation in which you live, you know, and economic growth being a big one. If you're living in
a state or a country or a city that is growing rapidly, you sort of expect more good things
to happen to you. I often imagine living in New York City when, like, the
Brooklyn Bridge was built and they start building skyscrapers.
Like new things are going up all the time.
Oh my God, the world is changing.
I have a new job I didn't have before.
I'm doing better than my parents did, et cetera, et cetera.
Or living in, you know, California where I live now during the 60s and 70s.
People are flooding in.
New people are moving there.
New neighborhoods are being built every day.
Oh, wow, there's a sense of possibility.
But I think in America over the past couple decades, since the 90s, people have experienced,
in many ways,
a receding quality of life.
People feel that something has been taken from them.
They're like, I'm not doing as well as my parents.
Life is more expensive.
What happened?
And so when AI is presented to them,
their instinct is,
this is going to take more from me.
This is for people at the top.
It's not for me.
It's for everybody else.
And that's because of something deeper
that's gone wrong in America
over the past couple of
decades than AI itself.
And in many ways, maybe that means it is a normal technology.
That's probably how I would have felt when the railroads were invented if,
you know, I was not doing well.
I'd be like, this is something that's going to make my situation even worse.
So I really like that point of view because it makes me think differently about,
it takes the emphasis off the technology itself and off of
the argument about it and re-centers it back on what has gone wrong in our economy and our society
and our democracy over the past couple decades. And that is the thing that we need to focus on
first, which is how I have always felt on some deep level that like, no, we're talking about
the technology too much. We need to refocus on people and on the human shape of our society.
It sounds like you guys agree with that on some level, right?
Yeah, for sure, definitely.
And let's not forget, if we are talking about the technology, that AI can be seen as two things.
One is the AI industry, it's all the data centers and what AI companies want it to be.
But there's also AI as a tool in your own hands.
And this is something that's different about generative AI and agentic AI compared to previous waves of AI,
like predictive AI, which was a tool that governments and companies would use on you to figure out, you know, should you be hired for this job? Should you be eligible for certain welfare benefits? What your insurance rates are going to be. Various things that happen in the background. That's still going on. That is problematic. But now it's this new thing where you have agency. You can decide what it is for you. And if I give, if we have a couple of minutes, I want to tell a story. I've never told this story before. But your podcast is special. Thanks for having us on.
Oh, thank you.
AI has been a big research tool for us, but for me, the most profound impact has been in my personal life.
More than two years ago, I started using these newfound AI coding abilities to make little learning games for my kids on their iPads.
There are so many of these learning games available on the app store, but they're absolutely horrendous.
I would not let my kid anywhere near them because they're meant to be addictive to draw the kid in and eventually
make money, right? So, I mean, I've, you know, I've looked at 20 to 30 of them, none of them were
remotely suitable. And of course, they learn from books. They mostly learn from books and from
parental interaction and from school and so forth. My kids are young, six and four. But I did find
that there is a little niche for technology and for AI. I don't want to claim that this is the
main way they learn, but it is an important way they learned because I was able to use it to do things
that they can't easily do in a book.
For instance, I built them a phonics app that would break down a word into sounds and
teach them to pronounce that sound, to help them learn phonics.
And this was so tremendously useful.
My son was reading at three.
And this was very surprising.
So I kept doing this.
At this point, I have three different learning apps.
Each one takes about an hour to build.
It would have taken several days without AI.
I would have never built it.
It would have been inconceivable.
And, you know, they spent half an hour to
an hour on it every day. And collectively, I think it's actually put them about a grade level ahead
of where they would have been without the technology. Again, the technology is not the main thing.
You need the right home environment, constant reinforcement of, you know, why it's important to
spend time learning and so forth. And a lot of the time when they're using the app, I'm there with
them. So it's not about just leaving them on their iPads. But if you have that right environment,
if you exercise your agency to figure out what you want AI to be for you, it can be something
really powerful. Yeah, and I like that story because the emphasis is on you as a father,
right, using the tool to do something for your children and, you know, because you had the
ability to, you're in an environment where, you know, you could take the time and do so, but the technology
gave you an ability you wouldn't have had otherwise. Exactly. I think the focus on agency
is really important because so often I think that the boosters are like, well, you just need to learn to
use it. You have to use it. And really, the more accurate thing we can say is,
you have the choice of whether or not you use it, how you use it. And then we as a society do have
agency over how we allow this technology into our society, where we, you know, give it abilities and
where we don't. A good example of this is, like, in my industry in Hollywood, you know, the Writers Guild
was the first union to win protections against AI in a major union contract.
But also, the writers ourselves are the ones who are doing the work still, right?
And to the extent that AI is used in the Hollywood writing process,
that is being determined by the literal people who are doing the job right now.
It's not something that is being foisted upon us.
We are all players in the economic system,
and we all have the ability to accept or reject,
or constrain or not constrain;
we just have to take it.
And, you know, there's plenty of barriers to that
that are put in the way by capitalism
and by the horrible structure of our democracy.
But we have just as much agency
there as we do in any other respect,
which is that, hey, sometimes we need a collective action
on a wide scale to set our society right
when it's tilted in the wrong direction.
So I really love the argument
that you make about normal technology
because it means we face the same problem
that we have always faced as a species
and as a society and a culture,
which is to create a democratic society
that provides everybody with what they need
and gives them a voice,
which is exactly as difficult as it's always been.
And so saying, oh, well,
we're going through the Industrial Revolution again,
that is both frightening and comforting.
Right?
Because, yeah, a lot of people were hurt
and a lot of good things came out of it.
And that's what brought us to where
we are today, and we face the same challenge we've always faced.
Which is, I'm still processing how I feel about that, but it is still in contrast to what
the boosters tell us, which is we have no choice and it's all going to happen whether we like
it or not.
And it's going to happen very quickly.
You guys are saying, no, we, humans are still in control and we still need to do what we've
always done.
And do I basically have it right?
I think you basically do.
You have it so right.
Thanks for putting it better than we did.
Oh, I did my best.
Look, guys, daylight savings time happened.
I'm a little bit zonked out.
I feel like I'm a little bit less articulate than normal.
But I feel like we've had a really good conversation.
So thank you both so much for being on the show.
Where can people find your work, especially if they want to check out these very influential
essays that you've written over the last couple of years?
Where can folks find them?
We have a website called AI as Normal Technology;
it's normaltech.ai.
And we also wrote this for the Knight First Amendment Institute at Columbia University.
So that's another website where you can find an academic version of the essay.
And we continue to update the normaltech.ai website with more of our writing.
Thank you so much, Arvind and Sayash, for being on the show.
You really do better than almost anybody else at demystifying the technology and explaining it in a way that is both evidence-based
and, I don't know, connected to the world as I think we live in it as people.
I can't thank you enough for coming on the show.
And I hope you'll come on again in another 18 months or so to update us.
That sounds amazing.
This has been such a fun conversation.
Thank you.
Thank you so much for having us.
Well, thank you once again to Arvind and Sayash for coming on the show.
I hope you enjoyed that conversation as much as I did.
If you want to support the show,
head to patreon.com slash Adam Conover.
Five bucks a month
gets you every episode of the show ad-free; for 50
bucks a month, I'll read your name in the credits. This week, I want to thank
Brendan Peterman, Ultrasar, Chris Rezac, quotidiofile, John McAvey, and Quinn M. Enoch's.
If you'd like me to read your name at the end of the show and put it in the credits of
every single one of my video monologues, once again, patreon.com
slash Adam Conover. Of course, if you want to come see me on the road, Hartford, Connecticut,
La Jolla, California, Sacramento, California, April 18th, the Den Theater in Chicago,
or in Kansas City, Missouri, head to adamconover.net for all those tickets and tour dates.
I want to thank my producers, Sam Radman and Tony Wilson.
Everybody here at HeadGum for making the show possible.
Thank you so much for listening, and we're going to see you next time on Factually.
That was a HeadGum podcast.
