Hard Fork - Trump Fights ‘Woke’ A.I. + We Hear Out Our Critics
Episode Date: July 25, 2025
On Wednesday, President Trump signed three A.I.-related executive orders, and the White House released “America’s A.I. Action Plan.” We break down what’s in them, how the federal government intends to target “political bias” in chatbot output, and whether anyone will stand up against it. Then, do we hype up A.I. too much? Are we downplaying potential harms? We reached out to several prominent researchers and writers and asked for their critiques about how we cover A.I.
For a limited time, you can get a special-edition “Hard Fork” hat when you purchase an annual New York Times Audio subscription for the first time. Get your hat at nytimes.com/hardforkhat
Guests:
Brian Merchant, author of the book and newsletter “Blood in the Machine”
Alison Gopnik, professor at the University of California, Berkeley
Ross Douthat, New York Times opinion columnist and host of the podcast “Interesting Times”
Claire Leibowicz, head of A.I. and media integrity at the Partnership on AI
Max Read, author of the newsletter “Read Max”
Additional Reading:
Trump Plans to Give A.I. Developers a Free Hand
The Chatbot Culture Wars Are Here
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Let me tell you about something.
I was in a Waymo the other day, and it was making a turn on Market Street, which, if you've ever been to San Francisco, is kind of a street that causes problems with all the other streets because it's diagonal.
So the intersection has six different roads coming together.
But the Waymo is just about to complete a left turn.
Everything's about to be okay.
And the only way I could put it is it loses its nerve.
There's a light about to change, pedestrians start walking to the crosswalk.
And so this thing just starts to back up,
like I'm talking 30 feet over like half a minute,
and pedestrians come into the crosswalk,
and Kevin, I swear to God,
they start laughing and pointing at me.
They're laughing and all of a sudden I'm flashing back,
I'm in middle school, I'm being ridiculed.
I have no control over this whatsoever,
and I've never looked like a bigger dweeb than I did
in the back of a Waymo that failed to complete a left turn.
Oh man, you were a tourist attraction.
I really was.
The people in Poughkeepsie are gonna be telling
their friends about this one for years.
Yeah, I'm already viral on Poughkeepsie Twitter.
So, the Waymos, you know, you may think
that it's very glamorous, but you're gonna have
these other moments where you're wishing you were just in a Ford.
Yeah.
So what was the issue?
It just like couldn't decide to make the turn?
I think it just thought the light was gonna change
and it thought, we've gotta get out of here.
And it had a panic response.
It had a fight or flight response, and it chose flight.
And I wanted it to choose fight.
I wanted to say, floor it.
You'll make it.
It'll be fine, I promise.
I'm so sorry that happened to you.
Yeah, thank you.
It'll be all right.
I just love the thought of you just sitting in traffic
surrounded by tourists, pointing and laughing.
And meanwhile, you know how the Waymos have
spa music that comes on?
Yes, exactly.
You're just hearing the chill Zen vibe.
Pan flute music.
As you cause a
citywide incident.
That's exactly what happened.
I was listening to the Spobless playlist as I was hounded off the streets of San Francisco.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the Trump administration is going after what it calls woke AI.
Will anyone stand up to them?
Then.
Do we hype up AI too much?
Are we ignoring the potential harms?
We reached out to some of our critics to tell us what they think is missing from the conversation.
They told us. Casey, last week on the show, we talked about how you can now get a Hard Fork hat with a
new subscription to New York Times Audio.
And everyone is buzzing about it.
Yes, people are saying this hat
makes you 30% better looking.
It also provides protection against the sun.
I have not personally taken mine off since last week.
I shower with it on, I sleep with it on.
I was wondering what that smell was.
What I'm saying is it's a good hat.
It's a great hat.
And for a limited time, you can get one of these
with your purchase of a new annual
New York Times audio subscription.
And in addition to this amazing hat,
you'll also be supporting the work we do here
and you'll get all the benefits of that subscription,
including full access to our back catalog
and all the other great podcasts
that New York Times Audio makes.
Thank you for supporting what we do
and thank you as always for listening.
You can subscribe and get your very own hard fork hat
at NYTimes.com slash hard fork hat.
And if you do, our hats will be off to you.
No cap.
Well, Casey, the big news this week is that the federal government is finally making a plan about what to do about AI.
Oh, I feel like we've been asking them to do that for a while now, Kevin.
I can't wait to find out what they have in store.
Yes.
So back in March, we talked about the fact that the Trump administration was putting together
something they called the AI Action Plan.
They put out a call, basically, you know, tell us what should be in this.
They got over 10,000 public comments.
And on Wednesday of this week, the White House released the AI Action Plan.
And it has a bunch of interesting stuff in it that I imagine we'll want to talk about.
But before we do, this segment is going to be about AI.
So we should make our disclosures.
Well, my boyfriend works at Anthropic.
And I work for the New York Times, which is suing OpenAI and Microsoft over
copyright violations related to the training of large language models.
All right, Kevin. So what is in the Trump administration's AI action plan?
So it is a big old document. It runs to 28 pages in the PDF. And then there are these executive
orders. Basically, the theme is that the Trump administration sees
that we are in a race with our adversaries
when it comes to creating powerful AI systems,
and they want to win that race or dominate that race
as a senior administration official put it on a call
that I was on this morning.
And one of the ways that the White House proposes doing this
is by making it much easier for American AI companies
to build new data centers and new infrastructure
to power these more powerful models.
They also want to make sure that countries around the world
are using American chips and American AI models
as sort of the foundation for their own AI efforts.
So they want to accelerate the export of some of these US chips and other AI technologies
and just sort of enable global diffusion
of the stuff that we're making here in the US.
So that was all sort of stuff that was broadly expected.
The Trump administration has been signaling
that it would do some of that for months now.
The thing that was sort of interesting and new in this
is about how the White House sees the
ideological aspect of AI. And how does it see it, Kevin?
So one of the things that is in both the AI action plan and in the executive orders that
accompanied this plan is about what the Trump administration calls woke AI. Casey, I know you're
very concerned about woke AI. You've been warning about it on this podcast for months. You've been saying this woke AI is out of control. We need to
stop it.
Yeah, specifically I've been saying I'm concerned that the Trump administration keeps talking
about woke AI, but go on.
Yes. Well, they have heard your complaints and they have ignored them because they are
talking about it. They say in the AI action plan that they want AI systems to be, quote, free from ideological bias
and be designed to pursue objective truth rather than
social engineering agendas.
They are also updating federal procurement guidelines
to make sure that the government contracts are only
going to AI developers who take steps to ensure
that their systems are objective, that they're
neutral, that they're not sort of spouting out these sort of woke DEI ideas.
This is pretty wild.
Yeah, also unconstitutional in ways that we should talk about, but I think a really important
moment to discuss.
When we had our predictions episode last year, I predicted
that the culture wars were going to come to AI. And now here they are in the AI action plan.
You know, as a journalist, for more than 20 years now, I have covered debates over objectivity and
communications tools. And there was a very long and very unproductive debate about the degree to
which journalism
should be objective and free of bias.
And one of the big conclusions from that debate was it's actually just very difficult to communicate
information without any sort of ideology whatsoever, right?
And what I suspect is really going on here is not actually that the Trump administration
wants to ensure that there is no ideology whatsoever in these systems.
It's really just that these systems do not wind up being critical of Donald Trump and his administration.
Yes, so this is something that conservatives in Washington around the country have been
starting to worry about for months now.
There was this whole flap that we covered on the show last year where Google's Gemini image generation model
was producing images of the founding fathers, for example, that were not historically accurate.
They were being depicted as being racially diverse in ways that made a lot of conservatives
mad.
I've been talking with some Republicans, including some who were involved in these executive orders, and I've been saying, like, what does this mean? What does it mean to be a woke AI system? And they really can't define it in any satisfying way. They're just sort of like, well, it should say nice things about President Trump if you ask it to, and it should not engage in sort of overt censorship.
Yeah, and look, I think that there is a question about, do we want AI systems that adapt to the beliefs of the user? And I basically think the answer to that is yes. If you're a conservative person, and you would like an AI system to talk to you in a certain way, I think that should be accessible to you. It should be fine for you to build that, or if somebody has one that they're offering you access to or selling, I think you should be able to buy it.
Where I think you get on really dangerous ground
is to say that in order to be a federal contractor,
you must express this certain set of beliefs
because that is the sort of thing that you only see in authoritarian governments.
And I just think it's fundamentally anti-democratic and goes against the spirit of the First Amendment.
Yeah.
So I want to ask you two questions about this sort of push on woke AI.
The first is about whether it's legal.
And I imagine you have some thoughts there.
The second is about whether it is even technically possible because I have some thoughts there
and I want to know what you think about it too.
So let's start with the legality question.
Can the Trump administration, can the White House
come out and say, we will not give you federal contracts
unless you make your AI systems less woke?
Well, so I've been thinking about this
for a couple of weeks because recently
the attorney general of Missouri threatened Google, Microsoft, OpenAI, and Meta
with an investigation because someone had asked their chat bots to quote rank the last five presidents from best to worst
specifically regarding anti-semitism
Microsoft's Copilot refused to answer the question, and the other three of them ranked Donald Trump last.
And the AG claimed that they were providing, quote,
deeply misleading answers to a straightforward historical
question and threatened to investigate them.
And so I called a First Amendment expert, Evelyn
Douek, who is an assistant professor of law
at Stanford Law School.
And what she said is, quote, the idea
that it's fraudulent for a chat bot
to spit out a list that doesn't have Donald Trump
at the top is so performatively ridiculous that calling a lawyer is almost a mistake.
And we'll say it: Evelyn Douek gives great quote.
Yeah, she really snapped with that one. But no, I mean, this is precisely the sort of thing that the First Amendment is designed to protect, which is political speech. If you are Anthropic or OpenAI, and your chatbot, when asked, is Donald Trump a good president, says no, that is the thing that the First Amendment is designed to
protect. And you cannot get around the First Amendment through an executive
order. Now, what will the current Supreme Court have to say about this is a very
different question. And I'm actually quite concerned about what they might
say about that.
But any historical understanding
of the First Amendment would say,
this is just plainly unconstitutional.
Right, and I also called around
to some First Amendment experts
because I was curious about this question too.
And what they told me basically is,
look, the government can as part
of its procurement process,
put conditions on whatever it's trying to sort of buy
from companies, right?
It can say, if you're a construction company
and you're bidding on a contract to build a new building
for the federal government,
they can sort of look at your labor practices
and impose certain conditions on you
as a condition of building for the federal government.
So that is sort of the one lever that the government
may be allowed to pull in an attempt to force companies
to kind of bend to its will.
But what the government is not allowed to do
is what's known as viewpoint discrimination, right?
It is not allowed to tell companies
that are doing First Amendment protected speech
that they have to make their systems
favor one political viewpoint or another, or else
risk some penalty from the government.
So that is sort of the line that the Trump administration is trying to walk here.
And it sounds like we'll just have to see how the courts interpret that.
Yeah.
And we'll also just have to see whether the AI companies even bother to complain.
They now have these contracts that are worth up to $200 million, most of them.
And so they now have a choice.
Do they want to say, hey, actually you're not allowed
to tell us to remove certain viewpoints
from our large language models,
or do they wanna keep the $200 million?
My guess is that they're gonna keep the $200 million, right?
And I just think it's really important to point that out
because this is how gradually the freedom of speech
is eroded, is people who have the power to say something
just choose not to because it would be annoying.
Right.
And I think we should also say, like, this tactic, this sort of what's often called jawboning,
this sort of use of government pressure through informal means to kind of force companies
to do what you want them to do without explicitly requiring in the law that they do something
different.
This has been very effective, right?
Conservatives have been running this exact same playbook against social media companies
for years now, and we've seen the effects, right?
Meta ended its fact-checking program and changed a bunch of its policies.
YouTube now sort of reversed course on whether you could post videos about denying the results
of an election.
These were all changes that came in response to pressure
from Republicans in Washington saying,
hey, it'd be great if you guys didn't moderate so much.
Yes, and there is such pretzel logic at work here, Kevin,
because conservatives have simultaneously been fighting these battles in the courts to stop elected Democrats from jawboning the tech companies, right?
So during the Biden administration, the Biden administration was jawboning Meta
and other companies saying, hey, you need to remove COVID
misinformation.
You need to remove vaccine misinformation.
And Jim Jordan is still holding hearings
about this in the House, saying, how dare we countenance
this unconstitutional violation of the First Amendment,
when meanwhile, Trump is just out there saying, hey,
you can't have a system
that goes against my own ideology, right?
So it's just naked hypocrisy.
And what has been so infuriating to me
is that no one who works for these AI companies
will say a single thing about it.
Well, because I think they've learned from the past,
the recent past, when the social media companies that kind of made a stink
about some of these demands on them
when it came to content moderation,
just got punished in various ways by the administration.
And so, as you said, given the choice
between giving up these lucrative government contracts
and making a change to their models that will make them 10% less woke, I imagine that they'll just shut up and make the change.
Yeah, and when we look at history,
the lesson we learn over and over again
is that when an authoritarian asks you to comply,
you should always just comply
because that's when the demands stop.
Yeah, yes.
Okay, so that is the kind of legal and political question.
I wanna talk about the technical question here
because one thing that I've been thinking about as we've been reading these reports about this new executive
order is whether it is even possible to change the politics or the expressive mode of a chatbot
in the ways that I think a lot of Republicans think it is.
With social media, I can see badgering Mark Zuckerberg to turn the dials on the feed ranking algorithm on Facebook
to insert more right leaning content
or relax some of the rules about shadow banning
or just tweak the system around the edges.
With AI models, I'm not sure it works that way at all.
And I think a good example of this is actually Grok.
Yes.
Grok has been explicitly
trained by Elon Musk and XAI to be anti-woke, right? To not bow to political correctness,
to seek truth. And in some ways, it does that quite well, right? It does, you know, it is
easier to get it to say, like, conservative or even far-right things. It was calling itself MechaHitler the other day.
So in some ways like it is a more ideologically aligned
chat bot with the Trumpist right.
But actually Elon Musk's big problem with Grok
is that it's too woke for him.
People keep sending him these examples of Grok saying
that manmade global warming is real,
or that more violence
is committed by the right than by the left, and complaining to him about why is this model
so woke?
And he has basically said, we don't know, and we don't know how to fix it.
We're going to have to kind of retrain this thing from scratch, because even though we
explicitly told this thing to not bow to political correctness, it's trained on so much woke internet data, as he put it, that it's just impossible to
change the politics.
Yeah.
I mean, look, if you want to create a large language model based only on 4chan posts,
like go for it.
You know, see how successful that turns out to be in the marketplace.
You know, recently I was talking with Ivan Zhao, who is the CEO of Notion, and he used
this metaphor that I like where he said, creating a large language model is like brewing beer.
This process happens, and then you've got a product
at the end, and you can make adjustments to the process,
but what you can't do is tell the yeast how to behave.
You can't say, hey you, yeast over there,
make it more like this, right?
Because it's just not how it works.
So as you just mentioned,
Elon Musk has learned this lesson the hard way.
And in fact, the more that he meddles with Grok,
the worse that he seems to make it
in all of these dimensions.
Now, what I find fascinating is the fact
that the government is so mad at the idea
that there are certain woke chatbots out there,
but has nothing to say about the one
that's calling itself Hitler, right?
And it just seems like a crazy use
of the government's resources to me.
But to your question, no, it is not possible
to just sort of snap your fingers
and tell a chatbot not to be woke.
Yeah, and I imagine that what the Trump administration
is envisioning here is that the AI companies
will sort of go into the system prompts or
the model specs for their models, you know, for Anthropic, maybe it's the constitution
that Claude is trained to follow and maybe insert or remove some language in there to
sort of make it seem more objective.
But I would just say, like, that is not a foolproof solution.
Elon Musk has also figured out that you can't just mess with the system prompt of
an AI model and change its behavior overnight.
Even if you can change its behavior on
one narrow set of questions or topics,
it may create problems somewhere else in the model.
It may suddenly start getting worse at coding or math,
or logical reasoning as
a result of the changes that you made.
I just think these systems are like
these multi-dimensional hyper objects and you can't just like
turn the dials on them the way you can with a social media platform. I want to
talk a minute about why I think this matters. There was a study I saw this
week that looked at LLMs and salary negotiations and what it found is that
bots like ChatGPT in this study told men to ask for higher salaries than it told women to ask for.
Okay. Now this is the sort of thing where if I were running OpenAI, I would say, well, we should fix that, right?
It should not tell women to seek less money than men, just as a matter of course.
We're now living in a world though, where if OpenAI fixed that and it got out and
Republicans decided they wanted to make a stink about it,
OpenAI could lose its federal contract because it fixed that, okay?
So these tools are becoming more powerful, they're becoming used by more and more people for more and more things,
and I think we want companies that are at least trying to bring in
notions of equity and fairness and justice.
And I think it's really actually disgusting that we just dismiss this as, quote, wokeness
so that we can laugh at it.
It's good to put ideas of equity and fairness and justice into tech systems, right?
So when the government comes along and says, well, no, actually, you can't do that if you
want our money, I think somebody needs to cry out about it.
And if it is not going to be the companies themselves,
then I hope it's somebody else.
Yeah, I totally agree.
What's so interesting and almost ironic
about this push from the Trump administration
about biased AI systems is that many of the things
they're complaining about are actually measures
that tech companies have taken to combat bias
in these systems, right?
The Gemini example that everyone's so mad about
is a great example of this.
This was an overcorrection to a very real issue
that existed in previous AI systems,
which is that if you asked them for images of doctors,
it would give you only images of men.
If you asked them for images of homemakers.
Hot podcasters, it would only show you pictures of me.
Exactly.
These biases were not explicitly programmed in.
They were sort of an artifact of the data that these systems were trained on.
And so tech companies said, well, that doesn't seem like it's good.
And so we want to take steps to make the model less biased.
By doing so, they introduced these new headaches for themselves because now there are people
in the Trump administration who would like for the systems to just reflect the biases that exist in humanity.
Right.
And again, the lesson from that should not be, well, let's never try to do anything.
The lesson is let's try to do a better job.
Yeah.
Do you think that any of the AI labs are going to stand up to the Trump administration on
this or will they just kind of do the sort of minimum box checking they need to do to
keep their contracts and hope it goes away?
Well, I tell you, the one that I have my eye on is Anthropic
because they have talked up a really big virtue game
and this is one of the first times
where there is actual money on the line here, right?
Are they going to just sort of silently accept this
or are they gonna have to say anything about it?
You know, they haven't said anything as of this recording,
but I have my eyes on them.
Yeah, I'm looking at the labs too,
but I am also not expecting them to say or do much.
I think the best case scenario for this woke AI executive
order is that it just kind of becomes like an annoying
formality that the companies have to deal with.
Maybe there's some evaluation.
We still don't know, by the way, how the Trump administration
is going to judge or evaluate models for their ideological bias. So I think the best possible version of this
is that this just kind of becomes like a meaningless formality that all the labs sort of have to
sort of gesture to and maybe they run their models through this evaluation, whatever it
is, and out pops the bias score. And if it's a couple points too high or low, they'll sort
of tweak things and get it to pass and then sort of continue making their models the way they
were.
I think the worst case scenario is that this essentially inserts the government into the
training process of these models and makes the labs really sort of afraid and start to
comply prematurely and sort of make their models have the default persona of sort of
a right-wing culture warrior.
Well, and I mean, the end state of this,
if taken to its logical conclusion,
is that you ask ChatGPT who won the 2020 election
and it tells you Donald Trump,
because that's what Donald Trump says.
And if he decides that it's woke to say
that Biden won in 2020
and you can't get a federal contract otherwise,
man, we are gonna be in deep water.
Well, Casey, that's enough about politics. It's time for some introspection. We're going to hear from some of our critics
about what we may be missing and how we should be covering AI. All right, Kevin.
Well, if you've ever been on Blue Sky or Apple podcast reviews, you know that sometimes the
Hard Fork podcast does get criticized.
No.
Yes.
And one of the big criticisms that we hear is, hey, it really seems like you guys are
hyping up AI too much.
You are not being adversarial enough against this industry.
And we wish you would bring on more critics who would give voice to that idea and really
engage with that in a serious way.
Yes, we hear this in our email inbox every single week.
And this week, we're actually going to do something about it because our producer, Rachel
Cohn, while we were out on vacation, has been cooking up this segment.
So Rachel, come on in and tell us what you've done.
Hello. Thanks for having me on.
And thank you guys for being such good sports.
And as far as I know, not advocating to fire me.
Well, the segment isn't over yet.
Yeah.
So, tell us a little bit about what you did
and how you came up with this idea.
Yeah, so like you guys said, part of this is about responding to these listener emails
that we've been getting.
I think part of it is also this feeling that AI debate is getting more polarized.
And also I think like there's just sort of a personal level thing going on for me, which is I feel like I am increasingly spiraling when I think
about AI and I'm steeped in this the way you guys are because we're working on this show
together.
But I increasingly feel like you guys are finding ways to be more hopeful or optimistic
than I am.
And so part of my goal with this was actually to be like,
okay, what's going on here?
How are you guys arriving at
this slightly different place than I am?
So what I did is I spent the last few weeks reaching out to
prominent AI researchers and writers who I knew disagreed with you.
Some of these people have argued with you online before,
so I don't think you'll be totally surprised.
But I wanted this to be on hard mode for you guys.
So I specifically sought out people who I hope are going to challenge and provoke you,
because the truth is that they agree with you on a lot of basic things about AI.
These are all people who think that AI is highly capable,
that it's impressive in some ways,
that it could impressive in some ways, that it
could be super transformative.
But I think they have slightly different views in terms of maybe some of the harms that they're
most concerned about or some of the benefits that they're more skeptical about.
So I think we should just get into it.
Okay.
Let's hear from our first critic, Rachel, who did you talk to?
Yeah. So I thought we should start with one of the widest ranging critiques.
And this is probably the most forceful criticism that came in.
So this one comes from Brian Merchant,
who is a tech journalist who writes a lot about AI for his newsletter,
Blood in the Machine.
And as I understand, Kevin,
he has kind of engaged with you a bit online about some of
your reporting.
Is that right?
Yes.
I've known Brian for years.
I really like and respect his work, although we have some disagreements about AI.
But yeah, he has been emailing us saying, you guys should have more critics on.
I sort of jokingly said that I would have him on, but only if he let us give him a cattle
brand that said, feel the AGI.
And the conversation sort of trailed off after that.
OK, great. I was wondering about that because, yeah, he's going to make a reference to that in the critique that he wages. So, yeah, I asked Brian to record his critique for us,
and I will play it for you now.
Hello, gentlemen. This is Brian Merchant.
I'm a tech journalist and author of the book and newsletter Blood in the Machine.
And first of all, I want to say that I still want a whole show about the Luddites and why they were right.
And I think it's only fair because Kevin recently threatened to stick me with a cattle brand that says, Feel the AGI.
Which brings me to my concern.
How are you feeling about feeling the AGI right now?
Because I worry that this narrative that presents super powerful corporate AI products as inevitable
is doing your listeners a disservice.
Using the AGI language and frameworks preferred by the AI companies does seem to suggest that
you're aligning with their vision and risks promoting their product roadmap outright.
So when you say, as my future cattle brand reads,
that you feel the AGI,
do you worry that you're serving this broader sales pitch,
encouraging execs and management to embrace AI,
often at the expense of working people?
Okay, thanks fellas.
Okay, this is an interesting one.
And first, I think I need to
define what I mean when I say feel the AGI, because this is a phrase that is often sort of used half
jokingly, but I think really does sort of mean something inside the sort of San Francisco AI
bubble. To me, feeling the AGI does not mean like, I think AI is cool and good,
or that the companies building it are on the right track,
or even that it is inevitable or a
natural consequence of what we're seeing today.
The way I use it is essentially shorthand for like,
I am starting to internalize
the capabilities of these systems,
and how much more powerful they will be
if current trends continue.
And I'm just starting to prepare and plan for that world,
including the things that might go really wrong in that world.
So that to me is like what feeling the AGI means.
It is not an endorsement of some like corporate roadmap.
It is just like, I am taking in what is happening.
I am trying to extrapolate into the future as best I can.
And I'm just trying to like get my mind around some
of the more surreal possibilities
that could happen in the next few years.
Do you ever worry that you are creating a sense
that this is inevitable and that maybe people
who may be inclined to resist that future
are not empowered to do so?
I wanna hear your view on this.
I mean, my view on this is essentially
that we have systems right now that several years ago,
people would have called AGI,
that is not sort of making a projection out of the future,
that's just looking at what exists today.
And I think a sort of natural thing to do
is to observe the rate of progress in AI and
just ask what if that continues?
I don't think you have to believe in some, you know, far future scenario to believe that
models will continue to get better along these predictable scaling curves.
And so to me, the question of like, is this inevitable is just a question of like, is
the money that is being spent today to develop bigger and better models going to result in the same kinds of
capabilities gains that we've seen over the past few years? But what do you think?
Yeah I mean I think Brian's question is a good one and I understand what he is
saying when he says look you know AGI is an industry term if you come on your
show every week and talk about it you wind up sounding like you're just sort
of like amplifying the industry voice
maybe at the expense of other voices. I think this is just a
tricky thing to navigate. Because as you said, Kevin, you
look at the rate of progress in these systems, and it is
exponential. And it does seem like it is important to
extrapolate out to as far as you can go and start asking yourself,
what kind of world are we going to be living in then?
I think a reason that both of us do that
is that we do see so many obvious harms
that will come from that world,
starting with labor automation,
which I know is a huge concern of Brian's,
and which we talk about all the time on this show,
as maybe one of the primary near term risks of AI.
So, you know, I wanna think a bit more
about what we can do to signal to folks
that we are not just here to amplify the industry voice.
But I think the answer to Brian's question
of sort of why talk about AGI like it's likely to happen
is that in one form or another,
I think both of us just do think we are likely to get
powerful systems that can automate a lot of labor.
Yes.
And we would like to explore the consequences
of such a world.
Totally, and I think it's actually beneficial for workers
to understand the trajectory that these systems are on.
They need to know what's happening
and what the executives at these companies are saying
about the labor-replacing potential of this technology. I actually read Brian's book about the Luddites. I thought it
was great. And I think it's very instructive that the Luddites were not in denial about the power
of the technology that was challenging their jobs. They didn't look at these automated weaving
machines and go, oh, that'll never get more powerful. That'll never be able to replace us.
Look at all the stupid mistakes it's making.
They sensed correctly that this technology
was going to be very useful
and allow factories to produce goods much more efficiently.
And they said, we don't like that.
We don't like where this is headed.
They were able to sort of project out into the future
that they would struggle to compete in that world
and take steps to fight against it.
So I like to think that if Hard Fork had existed in the 1800s, we would have been sort of encouraging people to wake up to the increasing potential for automation
caused by these factory machines.
And I think that's what we're doing today.
Yeah, and one more question.
I would just love to see this sort of like
leftist labor movement work on AI tools
that can replace managers. It's, it's like right now it
feels like all of this is coming from the top down, but there could be a sort of AI
that would work from the bottom up. Totally. Something to think about. All right,
let's hear our next critique, Rachel. Okay, wait, can I ask one more question on
this, because I feel like one thing that it seems like Brian is really just
curious about is like whether you have ever considered
using language other than AGI.
Like why use AGI when some people take issue with it?
I think it is good to have a shorthand for a theoretical future when there is a digital
tool that can do most human labor, where there is a sort of digital assistant that you could
hire in place of hiring a human.
I just think that is a useful concept.
If you're the sort of person who thinks that, well, no, we will just
absolutely never get there.
I kind of don't know what to say to you because we don't think that that's
inevitable, but we do think it's worth considering that it might be true.
So if folks who hate the term AGI want to propose a different term, I could
use another term, but my sense is the quibble is less with the terminology
and more with the idea that any of this might happen. Yeah, I also like don't think the term
AGI is perfect. It's sort of lost a lot of meaning; people define it in a million different ways.
If there were another better term that we could use instead, that would signal what AGI signals
and the set of ideas and motivations
that sort of swirl around that concept, I'd be all for it.
But I think that that term has just proven to be very sticky.
It is not just something that industry people talk about,
it's something that people talk about in academia,
in futurism circles.
It is sort of this rallying cry for this entire industry
and it is in some ways like the holy grail
of this entire movement.
So I don't think it's sort of playing on corporate terms
to use a term that these companies use,
in particular because a lot of the companies
don't like it either, but it is the easiest
and simplest way to shorthand the idea.
Cool.
So the next person whose criticism I want you guys to hear is Alison Gopnik.
You guys of course know this: Alison Gopnik is this very distinguished psychologist at UC Berkeley.
She's a developmental psychologist,
so she does a lot of work specifically in studying how children learn,
and then applying that to how AI models might learn,
how AI models can be developed.
She's also one of the leading figures pushing this idea
that we have actually talked a little bit about on the show,
which is that AI is what she calls a cultural technology.
I'm Alison Gopnik at the University of California
at Berkeley.
The common way of thinking about AI,
which is reflected in the New York Times coverage as well,
is to think about AI systems
as if they were individual intelligent agents,
the way people are.
But my colleagues and I think this approach
to the current systems of AI is fundamentally misconceived.
The current large language models
and large vision models, for example,
are really cultural technologies
like writing or print or internet search itself.
What they do is let some group of people access the information that other groups of people
have articulated, the same way that print lets us understand and learn from other people.
Now, these kinds of cultural technologies are extremely important and can change the world
for better or for worse, but they're very different from super intelligent agents of
the sort that people imagine when they think about AI.
And thinking about the current systems in terms of cultural technology would let us
both approach them and regulate them and deal with them in a much
more productive way. Casey, what do you make of this one? So I appreciate the question. If Alison
were here, I would ask her how she thinks that thinking about these systems as, quote, cultural
technologies would let us regulate them or think about them differently.
I think there are ways in which we absolutely cover AI as a cultural technology around here.
We talk about its increasing use in creative industries like Hollywood, like in the music
industry to create forms of culture, about the risks that AI poses to the web and all the people who publish on the web.
So, like, that's one way that I think about AI as a cultural technology, and I do think that we
reflect that on the show. Now, I do hear in Alison's question a hint of the stochastic parrots argument, which is that, if I'm understanding right, what I'm hearing is this
technology is essentially
just a huge amalgamation of human knowledge and you can sort of like dip in and grab a little
piece of it here, a piece of it there. And what I think that leaves out is the emergent properties
that some of these systems have, the way that they can solve problems that are not in their training
data, the way that they can teach themselves to play games
that they have never seen before.
When I look at that technology,
I think that does seem like something
that is pretty close to an individual intelligent agent.
So this is one where I would welcome more conversation
with Alison about what she means,
but that is my initial response, Kevin.
Yeah, I think these systems are built on the foundation
of human knowledge, right?
They are trained on like all of the text on the internet
and lots of intellectual output that humans
over the centuries have produced.
But I think the analogy starts to break down a little bit
when you start thinking about more recent systems.
A printing press, writing, the internet,
these are technologies that are sort of stable and inert.
They can't form their own goals and pursue them.
But an AI agent can.
Right now, AI agents are not super intelligent.
They're very brittle.
They don't really work in a lot of ways.
But I think once you give an AI system a goal and
the ability to act on its own to meet that goal,
it's not really a passive object anymore.
It is an actor in the world and you can call that a cultural technology,
or you can call that an intelligent agent.
But I think it's not just like a printing press or a PC or another
piece of technology that these things are sometimes compared to.
I think it's something new and different when it can actually go out in the world and do
things.
Yeah, I mean, you think about like OpenAI's operator, for example, like it can, you know,
book a plane ticket or a hotel room.
Is that a cultural technology?
Like I don't know, like that feels like something different to me.
Yeah.
All right.
Next up.
Okay.
So this next question is about the scientific and medical breakthroughs that could come
from AI.
This question comes from Ross Douthat, who is an opinion columnist here at the New York
Times and the host of the podcast, Interesting Times.
And he's been interviewing a lot of people connected to the AI world.
Hey guys, it's your colleague Ross Douthat and I'm curious about what, if anything, you think limits
AI's ability to predict and understand incredibly complex and chaotic and sometimes one-of-a-kind
systems. And just to take two examples, I'm thinking about on the one
hand, our ability to predict the weather in advance, and on the other hand, our ability to
predict which treatments and drugs will work inside the insane individualized complexity of
a human immune system. Those both seem to me like cases where just throwing more
and more raw intelligence or computational power
at a problem may just run into some inherent limits
that will get cancer cures and get better weather prediction,
but certain things will always remain in the realm
of uncertainty or the realm of trial and error.
Do you guys agree or are you more optimistic
about AI's ability to bring even the most chaotic
and complex realms into some kind of understanding?
So there are like two questions here.
One is, is there some upper bound
on how well these systems will be able to predict?
And to me, the answer is maybe.
Like I don't know that we'll ever have an AI system
that can predict the weather with 100% certainty. At the same time, I did a little
bit of googling before we logged on. AI weather predicting models
are really good, and they're getting better all the time. And
meteorologists say that their field has rarely felt so
exciting, because they're just able to make better predictions
than they have before. I think you're seeing something similar
with medicine, where, you know, we featured stories on the podcast about the
way that this is leading to new drug discovery, it is leading to
improvements in diagnoses. So yeah, I mean, if you're looking
for reasons to be excited about AI, I would point to stuff like
that as obviously useful in people's lives, but it's still
not perfect, right. And it may be that getting from kind of
a very reliable weather forecast
to a perfect weather forecast would require some,
you know, fundamental breakthrough,
something in quantum mechanics,
some new understanding of how, you know,
various particles are interacting out in the atmosphere.
But getting like way better forecasts
might be good enough for most people.
And I think the same could be said of medicine.
Maybe this is not going to cure every disease on Earth.
Maybe there will still be things about the human body
we don't understand.
But I do think, I agree with you,
that people who work in this field
are more excited than they've been in a long time
because they just see how much AI
allows them to explore and test.
Yeah, and maybe one other question you can just add in here that I think is relevant is,
are these systems better than a person is?
Because if they are, then we probably want to use them.
Can I just ask how much of your optimism about AI hinges on
AI being able to give us either these scientific or medical breakthroughs?
I think science and medicine are just two, maybe the two most obvious places where this
stuff will be good.
It's like, if you told me that you could cure cancer and many other diseases, I'm just personally
willing to put up with a lot more social disruption.
If it can never do those things, despite all the promises that have been made, then I'll
be super mad.
I'll be cursed on the podcast.
Yeah, I personally, my own AI optimism
does not hinge on AI going out there and solving
all of the unproved math theorems
and curing all of the diseases.
I think that even if it were just to speed up
the process of discovery,
even if all it were doing was accelerating the work that chemists and biomedical researchers,
people looking into climate change were doing, I think that would be reason enough for optimism.
Because so much of what acts as a bottleneck on progress in science and medicine is just that
it's really slow and hard,
and you need to build these wet labs
and do a bunch of tests and wait for the tests to come back
and run these clinical trials.
And I think one of the things that
was exciting about our conversation with Patrick
Collison at the live show the other day
was when he was talking about this virtual cell
that they're building, where you can just
build a virtual environment
using AI that can sort of allow you
to run these experiments in silico, as they say,
rather than needing to like go out and test it
on a bunch of fruit flies or rats or humans or whatever.
And you can just kind of shorten the feedback loop
and get more, take more bites at the apple.
Absolutely, there was a story in Quanta Magazine this week
that said that AI hasn't led to any new discoveries
in physics just yet,
but it is designing new experiments
and spotting patterns in data
in the way that Kevin was just describing
in ways that physicists are just finding really useful.
So I think it's clear that AI is already shortening
some of those timelines.
When we come back, we'll hear from more of our critics. Can I bring my therapist? You know what's great about this is now instead of your own internal voice criticizing yourself,
you can kind of externalize it and realize that all your fears are true and people actually
are criticizing you all the time behind your back.
Yeah.
Isn't it really nice? It's so nice.
What a great idea.
Well, on that note, are you guys ready for the next critic?
Hit me with it.
My name is Claire Leibowicz, and I lead the AI and Media Integrity Program at the Partnership on AI.
I keep coming back to something that I struggle with
in my own reaction to your pieces.
I found myself nodding when you both critique AI
for being biased, persuasive, sycophantic.
But then I start thinking about how humans around me behave
and they do all these things too.
So I'm wondering, are we ultimately critiquing AI
for being too much like us?
In which domains should we expect these systems to actually transcend human limitations?
And are there others where it may be valuable for them to reflect our true nature?
And most importantly, why aren't we spending more time figuring out who is best suited to decide these things and empowering them?
I mean, that last question is super important.
I'm a big democracy guy, and I want there to be a public role in creating this AI future.
I want people who have opinions about this stuff to talk about it online, yes, but also
run for office and put together
policy proposals and then get into office and like pass laws and regulations.
I got into journalism because I wanted to play my own role in that process of helping
to inform people and then hopefully in some very small way, like influencing public policy.
So that's my answer to that question.
Yeah, I agree with that. I want like people from lots of disciplines to be weighing in on this stuff, not just by posting online and writing, you know, op-eds in the newspaper, but by actually getting into the process of designing and building these systems. I want philosophers, ethicists, I want sociologists and anthropologists advising these companies.
I want this to be like a global democratic
multidisciplinary effort to create these systems.
And I don't want it to just be a bunch of engineers
in San Francisco designing these systems
with no input from the outside world.
Absolutely, and if a bunch of people listen to the things
that we and others talk about and think,
man, I really don't like this AI stuff at all. I don't want it to replace anyone's job. I
want to form a political movement and seek office and try to oppose that. I think that
would be awesome. Like we need to have that fight in public. And right now, far too few
people are participating in that conversation. So I totally agree with that. Now, let me
address the other part of Claire's question though, which is, are AI systems just a reflection of us? Well,
here's where I think it gets problematic. If you have a human
friend, sometimes they're going to be very supportive and nice
to you. Sometimes they're going to bust your chops and criticize
you. Sometimes they're going to give you really hard feedback
and tell you something that you didn't want to hear. This is not
what AI systems do. And so where I get concerned is we're
starting to read more
stories about young people in particular turning to these chatbots to answer every single question,
developing these really intense emotional relationships with them. And I am worried
that it is not preparing them for a future where they're going to be interacting with people
who do not always have their best interests at heart. Or maybe there's someone they could have an amazing relationship with, but maybe this person is a little bit prickly
and you need to sort of learn how to navigate them.
So that is where I get really concerned
is that these systems,
while they're unreliable in so many ways,
they are quite reliably sycophantic.
And I just think that that creates a bunch of issues
that humans don't mostly have.
Yeah, and I think what I would add to that
is that I don't want AI to mirror all of humanity's values,
the positive and the negative.
I want it to mirror the best of us, right?
The better angels of our nature, as Abraham Lincoln said.
I want that to be what these AI companies
are striving to design.
As opposed to say, Mecha Hitler.
Yes, yes, because that is also a set of values
that humans have.
And so sometimes when I hear people at these AI companies
talk about aligning AI systems with human values,
I'm like, well, which humans?
Cause I can think of some pretty bad ones
whose values I don't wanna see adopted into these systems.
Yeah, well, that's called woke AI and it's illegal now.
All right, Rachel, let's hear from someone else.
Okay. This is the very last one.
You guys are doing great.
This final question comes from a friend of the pod,
Max Read, who of course has the newsletter, Read Max.
I thought his question is really great because he's really
interested in how
you think about discerning between what's hype and what's not, and how you trust your
own instincts and where your confidence comes from.
And so let's hear Max.
Hi, guys.
It's your old friend, Max Read.
I was originally going to ask about Kevin's a cappella career in college, but my understanding is that the woke higher-ups at the New York Times won't allow me to ask such dangerous questions.
So instead I want to ask you about AI by way of asking you about crypto.
You guys were both pretty actively involved in covering the Web 3 era, the sort of crypto boom of the pandemic,
NFTs, Bored Apes, all this stuff. And very little of that, despite the massive hype around it at the time, has really panned
out as promised, at least as far as I can tell.
And what I'm wondering is how you guys feel about that hype and about your coverage of
that hype from the perspective of 2025.
Are there regrets you have?
Are there lessons you feel like you've learned?
And especially when you look at the current state of AI coverage and hype,
not just your own coverage, but in general,
do you think or worry that it falls prey to any of the same mistakes?
I want to caveat this question by saying the easy mode of this question
is to just say the technology is totally different,
so it's a very different thing.
And I want to put it to you in hard mode,
because I don't want to hear about how the tech is different.
What I'm interested in is hearing about you guys
and your work as journalists.
How do you approach this industry?
How do you establish your own credibility?
And how do you assess the claims being made
by investors and entrepreneurs?
Can't wait to hear the answer, bye.
I love this question.
What have I learned?
To touch on the crypto piece
without touching on the technology,
here's what I'll say.
Ultimately, what persuaded me in 2021
that crypto was really worth paying attention to
was the density of talent that it attracted.
So many people I knew who had previously worked
on really valuable companies were quitting their jobs
to go build new crypto companies.
And what I believed and said out loud at the time was it would just be
really surprising if all of those talented people fail to create a lot of really valuable companies. In the end,
they did not produce a lot that I did find valuable. Although, as we've been covering on the show recently, crypto has not gone away.
And thanks to the fact that the industry has captured the government, it is now more valuable
than ever.
So that is what I would say about that time in crypto.
And I do think that some of that argument ports over to AI because certainly I also
know a lot of people that quit their jobs working at social media companies, for example,
who are now working on AI.
Here's what I would say about hype and covering AI.
I think that a good podcast about technology needs to do two things.
One is to give you very grounded coverage of stuff that is happening right now.
So I'm thinking about in recent months when Pete Wells came on to talk about how chefs
are using AI in their restaurants, or Roy Lee coming on and talking about the
cheating technology that he's building, or Kevin talking about what he's vibe coding.
I even think about the emergency episode that we did about DeepSeek, which I think actually
was kind of an effort to un-hype the technology a bit, to give you a really grounded sense
of what it was and why people were so excited about it, right?
So that's one thing I think we need to do.
The other thing I think we need to do
is to just tell you what the industry says
is going to happen.
I think it is important to get leaders of these companies
in the room and just hear their visions
because there is some chance
that a version of it will come true, right?
So this is the thing that we're doing
when we bring on a Sam Altman or a Demis Hassabis
or the founders of the Mechanize company,
which you probably heard in our interview,
I was not particularly impressed with that vision,
but I think it is useful to the audience
to hear what these folks think that they are doing.
And of course we wanna push back on them a bit,
but I have just always appreciated a journalism
that gives airtime to visions and lets me think about it, lets me disagree
with it, right?
So that is how I think about hype in general.
We want to tell you mostly what is happening on the ground, but we do want to tell you
what the CEOs are telling us all the time is going to happen.
And then we want you to sort of interrogate the space in between, right, that we actually
have to live in.
Yeah, I will say I feel pretty good about the way that I covered crypto back in 2021.
There's only really one crypto story that I truly regret writing.
And that is a story about this crypto company Helium that was trying to do this, like, you know, sort of convoluted thing with these, like, crypto-powered Wi-Fi routers.
And I just failed on that story.
I failed to ask basic journalistic questions.
It turned out after the fact,
we learned that Helium had basically claimed
that it had a bunch of partnerships
with a bunch of different companies.
And I just didn't call the companies to say,
hey, is this company lying about being affiliated with you?
It just didn't occur to me that they would be like
so blatantly misleading me about the state of their business.
And so I do regret that, but I would chalk it up less to, like, buying into crypto hype and more to just not making a few more calls that would have saved me from some grief.
The lesson I took from crypto reporting
is that real world use matters.
So much of crypto and the hype around it
consisted of these kind of abstract ideas
and these vague promises and white papers.
And then when you actually like dug in
and looked at who was using it
and what they were using it for,
it was like criminals, it was speculators,
it was people trying to get rich
on their Bored Ape collection.
So now when I cover AI,
I really try to talk to people who are civilians using
this technology about how they are using it. And whenever possible, I try to use it myself before
I sort of form an opinion on it. I think the crypto era was in some ways a traumatic incident
for the tech journalism community. I think a lot of our peers, and maybe to even a certain extent you and I,
felt like we were duped, felt like we fell for something,
like we wasted all of our time,
like trying to understand and explain this technology,
taking this stuff seriously,
only to have it all like come crashing down.
And I worry that a lot of journalists took the wrong lesson from what happened with crypto. The sort of lesson that I think a lot of journalists took was to be blanket
skeptical of all new technologies, to sort of assume that it's all smoke and mirrors,
that everyone is lying to you, and that it's not really going to be worth your time to like dig in
and try to understand something. And I see a lot of that attitude reflected in some of the AI coverage
I see today. And so while I take Max's point that we should always be learning from our mistakes, and from things that we, you know, swallowed too uncritically in the past, I think that in some ways what we're seeing today with AI is overcorrecting on that point. What do you think?
Yeah, I think that there is a bit of an overcorrection.
But I also think that many journalists have just
realized that what used to be a really small industry that
mostly concerned itself with helping you print your photos
and make a spreadsheet is now something much bigger
and more consequential, and has just been bad for a lot of people. And so it makes them hesitant to trust someone who comes along and says, hey, I'm going to cure all human disease.
I think that a role that we both try to occupy in the sort of AI
journalism world is to say we take seriously the CEOs
who say that they're building something really powerful
and, crucially, we think it will be powerful in bad ways.
And we wanna talk to you about those bad ways,
such as you may lose your job,
or it will enable new forms of cyber attacks and frauds
that you may fall victim to,
or it will burn our current education system down to the ground so it has to be rebuilt from scratch. That one, you know, maybe there will be some positive along the way. But I feel like week after week on the show, we are trying to show you ways in which this thing is going to be massively disruptive, and that gets framed as hype in a way that I just think is a little bit silly.
Like, in 2010, imagine I'd written a story about Facebook and how one day it would have billions of users and undermine democracy and give a bunch of teenagers eating disorders. Like, would that
have been hype? Sort of. Would that have been accepting the terms of the social media founders
and accepting their language around, you know, growth? Yes. But would it have been useful? Would I be proud that I wrote that story?
I think so. So I'm willing to accept the idea that you and I do buy into the vision
of very powerful AI more than many of our peers in tech journalism. But the reason that we're doing that is we want to remind you what happened the last
time one of these technologies grew really quickly and got into everyone's hands and
became the way that people interface with the digital world.
It didn't go great.
We already know that these companies are not going to be regulated in any meaningful way.
The AI action plan is designed basically to ensure that.
And so to the extent that we can play a positive role, I think it is just going to be in talking
to people about those consequences.
And if the consequence of that is that people say that, you know, we're on the side of hype,
like I will just accept the criticism.
Yeah.
Well, thank you guys so much for doing this, and thank you also to our critics for taking
the time to talk to me.
I thought we could end by just talking about actually whether you guys have any questions
for each other.
Like, you know, one of the big goals of this is to kind of map where you guys stand relative
to other thinkers.
So I'm curious how your views on AI are actually different from each other.
I think I have longer timelines than Kevin does. I think Kevin talks about AGI in a way that makes
it seem very imminent. And I think I'm more confident that it's gonna take several years.
And maybe more than several, right? Like maybe this is like a five to 10 or even 15 year project. So I think that's the main way
that I notice myself disagreeing with Kevin.
I think that we also disagree about regulation
and how possible or advisable it is
to have the government step in
and try to control the development
and deployment of AI systems.
I think that you are informed by your years of covering
social media and seeing regulators grapple with and
mostly fail to regulate that wave of technology.
But I think you are also a person who has a lot of hope and
optimism about institutions and wants there to be democratic
accountability over powerful technology.
I share that view, but I also don't think there's a chance in hell that our present government, constructed the way it is, with the kind of pace that it is used to regulating things at, can regulate AI on anything approaching a relevant timescale.
I've become fairly pessimistic about the possibility
of meaningful regulation of AI,
and I think that's a place where we differ.
I think we do disagree there because I think that we had
the makings of meaningful regulation
under the Biden administration,
where they were making very simple demands,
like you need to inform us when you're training a model of a certain size, and there need to be other transparency requirements. And I think you can get from there to a better world. And instead, we've sort of unwound all the way back to, hey, if you want to create the largest and most powerful model in the world, you can do that. You don't have to tell anybody if it creates new risks for bioweapons or other dangers. You don't have to tell anybody; you can just put it out in the world.
Right now, there are many big AI labs
that are racing to get the most powerful AI that they can
into everyone's hands with absolutely no safeguards.
So if you're telling me that we can't create
a better world than that, I am gonna disagree with you.
Yeah.
Go fuck yourself.
Yeah.
Well, thank God you guys disagree
because it makes the podcast more interesting.
And thank you guys seriously for doing this.
I think given how much of the AI conversation
can feel really disempowering in this moment,
one thing that gives me a feeling
of a little bit more control is really trying
to, like, map out the debates, where people stand relative to each other, because it ultimately helps me figure out what I think about AI and where I think the future is going. And that's at least one thing I feel sort of empowered to do. And that's what we want to do, truly. We want everyone to come to their own understanding of where they sit at the various intersections of these discourses. Like, I think Kevin and I identify as reporters first; we don't have all the answers. That's why we usually bring on a guest every week to try to
get smarter about some subject, right? So I think a really bad outcome for the podcast is that people
think of us as pundits. I think of us as like, you know, curious people with informed points of view, but we always
try to be open to changing our minds.
Yes.
Like a large language model, we aim to improve from version to version.
As we add new parameters and computing power.
Yes. Before we go, a reminder that we are still soliciting stories from students about how
AI is playing out on the ground in schools, colleges, universities around the country.
We want to hear from you.
Send us a voice memo telling us what effect AI is having in your school, and we may use
it in our upcoming Back to School AI episode.
You can send that to hardfork@nytimes.com.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
We're fact checked by Caitlin Love.
Today's show was engineered by Katie McMurrin.
Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork@nytimes.com with your own criticisms of our opinions about AI.