Hard Fork - The Tech Behind Signalgate + Dwarkesh Patel's "Scaling Era" + Is A.I. Making Our Listeners Dumb?
Episode Date: March 28, 2025

This week, we dig into the group chat that's rocking the Trump administration and talk about why turning to Signal to plan military operations probably isn't a great idea. Then, we're joined by the podcaster Dwarkesh Patel to discuss his new book "The Scaling Era," and whether he's still optimistic about the broad benefits of A.I. And finally, a couple weeks ago we asked whether A.I. was making you dumber. Now we hear your takes.

Guest: Dwarkesh Patel, tech podcaster and author of "The Scaling Era: An Oral History of A.I., 2019-2025"

Additional Reading:
The Trump Administration Accidentally Texted Me Its War Plans
Signal Chat Leak Angers U.S. Military Pilots
Is A.I. Making Us Dumb?

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Listen to this.
This week I checked my credit card bill, normally a pretty boring process in my life, and I
see a number that astonishes me in its size and gravity.
And so the first thing I think is, how much DoorDash is it possible to eat in one month?
Have I hit some new level of depravity?
But then I go through the statement and I find a charge that is from the heating
and plumbing company that I used to use when I lived in the home of Kara Swisher. Kara
Swisher, of course, the iconic technology journalist, friend, and mentor, originator
of the very podcast feed that we're on today, Kevin.
And former landlord of Casey Newton.
Former landlord of me. And when I investigated, it turned out
that Kara Swisher had charged my credit card for $18,000.
Ha ha ha ha ha.
For what?
What costs $18,000?
I don't know what is going on, but it costs $18,000 to fix.
And until I made a few phone calls yesterday,
that was going to be my problem.
So here's what I want to say to the people of America,
you need to watch these landlords.
You might think that you're out from underneath their thumb,
but they will still come for you
and they will put $18,000 on your credit card
if you do not watch them.
Now this is slightly terrifying to me,
the idea that Kara has access to your credit card
in some way, shape or form.
Well, and I should say, basically it was on file with the heating and plumbing company.
So I'm not sure that I could actually blame Kara for this,
but I did have to talk to her about it.
Oh, she's crafty. I think she knew what she was doing.
She's been waiting to get back at us like this for a long time.
And mission accomplished, Kara.
-♪ The New York Times, The New York Times, The New York Times.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the group chat that's rocking the government. We'll tell you why the government
turned to Signal for planning military operations and why it's probably not a great idea.
Then, podcaster Dwarkesh Patel stops by to discuss his new book and tell us why he still
believes AI will benefit
all of us in the end.
And finally, we asked you if AI was making you dumber.
It's time to share what you all told us.
I feel smarter already.
Well, Casey, the big story of the week is Signalgate.
Yes, what I would say, Kevin, is the group chats are popping off at the highest levels of government.
Yes. And if you have been hiding under a rock or on a silent meditation retreat or something for
the last few days, let's just quickly catch you up on what has been going on. So on Monday,
the Atlantic, and specifically Jeffrey Goldberg,
the editor in chief of the Atlantic,
published an article titled,
"'The Trump Administration Accidentally Texted Me
Its War Plans.'"
In this article, Goldberg details his experience
of being added, seemingly inadvertently,
to a signal group chat with 18 of the US's
most senior national security leaders,
including Secretary of State Marco Rubio,
Tulsi Gabbard, the Director of National Intelligence,
Pete Hegseth, the Secretary of Defense,
and even Vice President JD Vance.
The chat was called "Houthi PC small group,"
PC presumably standing for Principals Committee
and not personal computer.
And I would say this story lit the internet on fire.
Absolutely.
You know, we have secure communications channels
that we use in this country, Kevin,
to sort of organize and plan for military operations.
They were not used in this case.
That is kind of a big deal in its own right.
But to accidentally add one of the more prominent journalists
in all of America to this group chat as you're planning it
is truly unprecedented in the history of this country.
Yeah, and unprecedented in my life too.
Like I never get invited to any secret classified
group chats, but I also feel like there is an etiquette
and a procedure around the mid-sized group chat.
Yes.
So I'm sure you've had the experience of being added to a group chat.
In my case, it's usually like planning a birthday party or something.
And there's always like a number or two on this group chat that you don't have stored in your phone, right?
That's right.
The unfamiliar area code pops up along with the named accounts of everyone who you do know who's in this group chat.
Absolutely.
And the first thing that I do when that happens to me
is I try to figure out who the unnamed people
in the group chat are.
Yes.
And until you figure that out,
you can't be giving your A material to the group chat.
This is so true.
I saw someone say on social media this week
that gay group chats have so much better
operational security than the national security advisor does.
And this is the exact reason.
If you're gonna be in a group with seven or eight people
and there's one number that you don't recognize,
you're gonna be very tight lipped
until you find out who this interloper is.
Exactly, and maybe someone will even say,
hey, who's, you know, 347?
There's a protocol for this is what I'm saying.
Yes, and as you're pointing out,
it's a protocol that most people take extremely seriously.
Yes.
Even when they are talking about things
like planning birthday parties.
Yes.
And not military strikes.
Exactly.
Yeah.
So before we get into the tech piece,
let's just sort of say what has been happening since then.
So this story comes out on Monday in the Atlantic.
Everyone freaks out about this.
The government officials involved in this group chat
are sort of asked to respond to it.
There's actually a hearing in Congress
where several of the members of this group chat
are questioned about how a reporter got access
to these sensitive conversations.
And basically the posture of the Trump officials
implicated in this has been to deny
that this was a secret at all.
There have been various officials saying nothing classified was discussed in here.
This wasn't an unapproved use of a private messaging app.
Basically nothing to see here, folks.
Yes, and on Wednesday, the Atlantic actually published the full text message exchanges
so that people can just go read these things for themselves and see just how detailed the
plans shared were. Yes.
And let's just say it does look like
some of this information was, in fact, classified.
It included details about the specific timing
of various airstrikes that were being ordered in Yemen
against the Houthis, which are a sort of rebel terrorist
militia.
This was not a party planning group chat.
Yeah.
Here's a good test for you.
When you read these chats,
imagine you're a Houthi in Yemen.
Would this information be useful to you
to avoid being struck by a missile?
I think it would be.
To me, that's the test here, Kevin.
Totally.
So let's dive into the tech of it all
because I think there is actually an important
and interesting tech story kind of beyond the headlines here.
So Casey, what is Signal and how
does it work?
Yeah, so Signal, as you well know as a frequent user, is an
open-source, end-to-end encrypted messaging service
that has been with us since July 2014. It has been growing in
popularity over the past several years. A lot of people like the fact that,
unlike something like an iMessage or a WhatsApp,
this is something that is built by a nonprofit, actually.
It's funded by a nonprofit organization,
and it is fully open source.
It's built on an open source protocol
so anyone can look and see how it is built.
They can poke holes in it, try to make it more secure.
And as the world has evolved, more and more people
have found reasons to have both end-to-end encrypted chats
and to have disappearing chats.
And so Signal has been sort of part of this move away
from permanent chats stored forever
to more ephemeral, more private communications.
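[Editor's note: to make the end-to-end idea concrete, here is a minimal sketch in Python using the PyNaCl library. This is not Signal's actual protocol; Signal layers a double ratchet with forward secrecy on top of similar primitives. The sketch only illustrates the core property: private keys stay on the endpoints, so a relay server sees nothing but ciphertext.]

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# NOT the Signal Protocol -- just the core property it guarantees: only the
# two endpoints hold the keys needed to decrypt, so any server relaying the
# message sees only ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"planning a birthday party, not a strike")

# The relay server stores and forwards `ciphertext` but cannot read it.
# Bob decrypts on his device with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"planning a birthday party, not a strike"
```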
Yeah, and I think we should add that among the people who
think about cybersecurity, Signal
is seen as kind of the gold standard of encrypted
communications apps.
It is not perfect.
No communications platform is ever perfectly secure
because it is used by humans on devices that
are not perfectly secure.
But it is widely regarded as the most secure place to
have private conversations.
Yeah, I mean, if you want to know why that is, we can go into some
level of detail here. Signal makes it a priority to collect as little metadata as possible. And so,
for example, if the government went to them and said, hey, we have Kevin's Signal number,
tell us all the contacts that Kevin has, they don't actually know that.
They don't store that. They also do not store the chats
themselves, right? Those are on your devices. So if the
government says, hey, give us all of Kevin's chats, they
don't have those. And there are some pretty good encryption and
privacy practices in some of the other apps that I think a
lot of our listeners use on a daily basis. WhatsApp has pretty good protection, iMessage has pretty good protection, but there are
a bunch of asterisks around that.
And so if security is super, super important to you, then I think many of us would actually
recommend Signal as the best place to do your communicating.
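[Editor's note: on the metadata point, Signal has said publicly, in responses to subpoenas, that the only account data it can produce is the account's registration date and the date of its last connection. A hypothetical sketch of the difference, with invented field names for illustration:]

```python
# Hypothetical sketch of the metadata difference (field names invented).
# A typical messaging server can be compelled to hand over whatever it
# stores: contacts, message content, connection logs.
typical_server_record = {
    "phone_number": "+15555550123",
    "contacts": ["+15555550456", "+15555550789"],  # social graph
    "messages": ["..."],                           # content stored server-side
    "ip_logs": ["203.0.113.7"],
}

# Signal has said in court filings that it can produce only this much:
signal_server_record = {
    "phone_number": "+15555550123",
    "registered": "2025-01-02",        # account creation date
    "last_connection": "2025-03-24",   # date of last connection
}
```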
Yeah.
And you and I both use Signal.
Most reporters, I know, use Signal to have sensitive conversations with sources.
I know that Signal has been used by government officials
in both Democratic and Republican administrations
for years now.
So Casey, I guess my first question is like,
why is this a big deal that these high ranking
government officials were using Signal,
if it is sort of the gold standard of security?
Sure, so I would put it in maybe two sentences, Kevin, that sums this whole thing up.
Signal is a secure app, but using Signal alone does not make your messages secure.
So what do I mean by that? Well, despite the fact that Signal is secure, your device is vulnerable,
particularly if it's your personal device,
if it is your iPhone that you bought from the Apple store.
There is a huge industry of hackers out there
developing what are called zero-day exploits.
And a zero-day exploit is essentially an undiscovered hack.
They are available for sale on the black market. They often cost millions of dollars and
criminals and more often state governments will purchase these attacks because they say, hey,
it is so important to me to get into Kevin's phone. I have to know what he's planning for Hard Fork this week.
So I'm gonna spend three million dollars.
I'm gonna find a way to get onto his personal device.
And if I have done that, even if you're using
Signal, it doesn't matter because I'm on your device now
I can read all of your messages, right? So this is the concern.
So wait, what are American military officials
supposed to do instead?
Well, we have specially designated
channels for them to use. We have networks that are not the
public internet, right? We have messaging tools that are
not commercially available. And we have set up protocols requiring
them to use those tools to avoid the scenario that I just
described.
Yeah, so let's go into that a little bit. Because as you
mentioned, there are sort of designated communications
platforms and channels that high-ranking government officials,
including those with access to classified information,
are supposed to use, right?
There are these things called SCIFs,
the Sensitive Compartmented Information Facilities.
Those are like the physical rooms that you can go into
to receive like classified briefings.
Usually you have to like keep your phone out of those rooms
for security.
Yeah, I keep all of my feelings in a sensitive, compartmentalized information facility.
But you're working on that.
I'm working on that, I'm working on it.
But if you're not like physically in the same place
as the people that you're trying to meet with,
there are these secure communication channels.
Casey, what are those channels?
Well, there are just specialized services for this.
So this is like what a lot of the tech giants will work on.
Microsoft has something called Azure Government,
which is built specifically to handle classified data.
And this is like sort of rarefied air, right?
Not that many big platforms actually go to the trouble
of making this software.
It's a pretty small addressable market.
So you gotta have a really like solid product
and really good sales force to like make this
worth your while, but the stuff exists. And the government has bought these services over the years and installed them.
And this is what the military is supposed to use.
Yeah.
So I did some research on this because I was basically trying to figure out like, are these
high ranking national security and government officials using Signal because it is the kind
of easiest and most intuitive thing for them to use?
Are they doing it because they don't want to use the stuff that the government has set
up for its own employees to communicate?
Why were they doing this?
Because one thing that stuck out to me in the transcripts of these group chats is that
nobody in the chats seemed surprised at all that this was happening on Signal.
No one, when this group was formed and these 18 people were added to it, said anything like, hey, why are
we using Signal for this? Why aren't we using Microsoft Teams or whatever the sort of
officially approved thing is?
What I found out when I started doing this research is that there is something of a
patchwork of different applications that have been cleared for use by various agencies of
government. And one reason that these high ranking government officials
may have been using Signal instead of these other apps
is because some of these apps are not designed to work
across the agencies of government, right?
The DOD has its own communication protocols.
Maybe the State Department
has its own communication protocols.
Maybe it's not trivially easy to kind of start up
a conversation with a bunch of people from various agencies on a single government-owned and controlled tool.
Yeah.
And that should not surprise us because something that is always true of secure communications
is that it is inconvenient and annoying.
This is what makes it secure, is that you have gone to great lengths to conceal what
you were doing. I read some reporting in the Washington Post this week that for the most part,
when they're doing their most sensitive communications,
those communications are supposed to be done in person, right?
Like that is the default.
And if you cannot do it in person, then you're supposed to use these secure communication channels.
Again, not the public internet.
So that is the protocol that was not followed here.
Right.
And I think one other possible explanation for why these high ranking officials were
using Signal is that Signal allows you to create disappearing messages, right?
A core feature of the Signal product is that you can set, in any group chat,
that all these messages are going to delete themselves
after an hour or a day or a week.
In this case, they seem to have been set
to delete after four weeks.
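[Editor's note: mechanically, disappearing messages are just a per-chat timer. Each message records when it becomes deletable, and the clients purge anything past its expiry; deletion happens on the devices, not on a server. A minimal sketch, not Signal's actual implementation:]

```python
# Minimal sketch of disappearing-message logic (not Signal's actual code).
import time

FOUR_WEEKS = 4 * 7 * 24 * 3600  # the timer reportedly set in this chat, in seconds

class Chat:
    def __init__(self, disappear_after: int = FOUR_WEEKS):
        self.disappear_after = disappear_after
        self.messages = []  # list of (expires_at, text) tuples

    def send(self, text: str) -> None:
        # Every message carries its own expiry, set from the chat's timer.
        self.messages.append((time.time() + self.disappear_after, text))

    def purge_expired(self) -> None:
        # Each client runs this locally; there is no server-side copy to purge.
        now = time.time()
        self.messages = [(t, m) for (t, m) in self.messages if t > now]
```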
Now, there are good reasons why you might want to do that
if you're a national security official,
you don't want this stuff to hang around forever,
but we should also say that that is also
an apparent violation of the rules
for government communication,
because there are records acts
that require the preservation of government communications.
And so one reason that the government and various agencies have their own communications
channels is because those channels can be preserved to comply with these laws about
federal record keeping.
Yes, there is a Federal Records Act and a Presidential Records Act.
And the idea behind those laws, Kevin, is that, well, you know,
if the government is planning a massive war campaign that
will kill a bunch of people, we should have a record of that.
We do, you know, in a democracy, you
want there to be a preservation of some
of the logic behind these attacks
that the government is making.
So yes, it seems like they clearly
have just decided they're not going to follow those.
Yeah, and so I think the place where I land on this is that this is, I would say, obviously a dumb,
probably unforgivably dumb mistake on the part of a high ranking national security official.
My favorite sort of like cover-up attempt on this was the national security advisor,
Michael Waltz, was asked how this happened,
because he was the person, according
to these screenshots of this chat, who added Jeffrey
Goldberg from the Atlantic to this chat.
And he basically gave the statement that was like,
we're all trying to figure out what happened here.
We saw the screenshot, Michael.
You added him.
And I think there are obvious questions this raises about, you know, whether he had mistaken
him for someone else named Jeffrey Goldberg, maybe a national security official of some
kind.
Oh, I bet the words Jeffrey Goldberg never even appeared on Michael Waltz's screen.
Okay, this is like the realm of pure speculation, but let me just tell you, as somebody who
is routinely contacted by people anonymously on Signal, usually their full name is not
in the message request.
It's like, you have a new message request from JG, so just those initials. And so I will look through my Signal chats and I'll
be trying to ask that one person about the one thing, what was their Signal name? And I'm looking
through a soup of initials. So like, I actually understand why that happened, which is yet one
more reason why you might not want to use Signal to do your war planning. Yes, exactly. I think the most obvious sort of Occam's razor explanation
for why all these high-ranking officials are on Signal
is that it's just a better and easier
and more intuitive product than anything the government
is supposed to be using for this stuff.
It's more convenient.
Yes, and I find this totally plausible,
having spoken with people who have been involved
with government technology in the past.
Like, it is just not the place where cutting edge software is developed
and deployed. There was famously this sort of struggle between President Obama and some of his security
advisors when he wanted to use a BlackBerry in the Oval Office, and there was sort of like
no precedent for how to do that securely. And so he fought them until they sort of made
him a special BlackBerry that he could use.
Like this is a time-honored struggle between politicians
who want to use the stuff that they used
when they were civilians while in political office
and are told again and again, like, you can't do that.
You have to use this clunkier, older, worse thing instead.
Well, I'm detecting a lot of sympathy in your voice
for the Trump administration here,
which is somewhat surprising to me. Because while I can stipulate that sure, they must go through
an annoying process in order to plan a war, I'm somebody who thinks, well, it probably should be
really annoying and inconvenient. You probably should actually have to go physically attend a
meeting to do all of this stuff. And if we are going to decide that war planning
is something that the Secretary of Defense
can do during commercials for March Madness,
just pecking away on his iPhone,
we're going to get in a lot of trouble.
Imagine you're an adversary of America right now,
and you've just found out that the entire administration
is just chatting away on their personal devices.
Do you not think that they have gone straight to the black market
and said, what's a zero day exploit that we can use to get on
Pete Hegseth's phone? Of course they have for sure. And so what
I'm not saying here is that this is excusable behavior. What I am
saying is that I think people, including government officials
will gravitate toward something that offers them the right mix
of convenience and security. And I would like for this to be an incident that kind of spurs
the development of much better and more secure ways for the government to communicate with
itself. Like, it should not be the case that if a bunch of high ranking officials want
to start a group chat with each other, they have to go to this private sector app rather
than something that the government itself owns and controls and that can be verifiably
secure.
So yes, I think this was extremely dumb.
It is also, by the way, something that I'm sure was happening in democratic administrations
too.
Like this is not a partisan issue here.
Well, what exactly do you think was happening?
Like yes, the Democrats were using Signal and yes, they were using disappearing messages.
It's not clear to me that they were planning military strikes.
I don't know.
I have no information either way on that.
What I do know is that, like, I have gotten messages on Signal from officials in both
parties.
I have gotten emails from the personal Gmail accounts of administration officials in both parties.
This is, I think, an open secret in Washington that the government's own tech stack is not
good and that a lot of people, for reasons of convenience or privacy or what have you,
have chosen to use these less secure private sector things instead.
I think I should make a serious point here, which is that it is in the national
interest of the United States to have a smaller gap between the leading commercial technology
products and the products that the government is allowed to use. Right now in this country, if you
are a smart and talented person who wants to go into government, one of the costs of that move is that you effectively
have to go from using the best stuff that anyone
with an iPhone or an Android phone can use
to using this more outdated, clunkier,
less intuitive set of tools.
I do not think that should be the case.
I think that the stuff that the public sector
is using for communication,
including very sensitive things,
should be as intuitive and easy to use and convenient
as the stuff that the general public uses.
Yes, it should have additional layers of privacy.
Yes, you should have to do some kind of procurement process.
But a recurring theme on this podcast,
whenever we talk about government and tech,
is that it is just way too slow and hard
to get standard tools approved
for use in government. So if there's one silver lining of the Signalgate fiasco, I
hope it is that our government takes access to good technology products more
seriously and starts building things and maintaining things that are actually
competitive with the state of the art. I'm going to take the other side of this
one, Kevin. I think if you look at the way that the government was able to
protect their secrets in
previous administrations prior to the spread of Signal,
they were actually able to prevent
high-ranking officials from accidentally adding
journalists to conversations that they shouldn't have been in.
There is no evidence to me that because of
the aging infrastructure of the communication systems of government
We were unable to achieve some sort of military objective
So, you know even as somebody who generally likes technology
I think some of these, you know, tech oligarchs have this extremely
know-it-all attitude of, our tech is better than your tech, yours sucks. And they sort of bluster in and they say, you know,
all of your aging legacy systems,
And then you wake up after Signalgate and you're like,
oh, that's why there was a system.
That's why there was a protocol.
It turns out it was actually protecting something.
Right, like this is the Silicon Valley story
over and over again, is we are going to come in
and try to build everything from first principles.
We're gonna be completely ahistorical.
We're not going to learn one lesson anyone else has ever learned before because we think
we're smarter than you.
And Signalgate shows us that actually no.
Sometimes people have actually learned things and there is wisdom to be gleaned from the
ages, Kevin.
And maybe that should have been done here.
Well, Casey, the Defense Department may be in its failing era, but AI is in its scaling
era.
We'll talk to the author of "The Scaling Era," Dwarkesh Patel, when we come back.
Well, Casey, there are a number of people within the clubby and insular world of AI
who are so well known that they go by a single name.
That's true.
Madonna, Cher, and who else?
Well, there's Dario, Sam, Ilya, various other people.
And then there's Dwarkesh.
Yes.
Who is not working at an AI
company. He is an independent journalist, podcaster, blogger, and intellectual. He hosts the
Dwarkesh Podcast, which has had a number of former Hard Fork guests on it. And
he is, I would say, one of the best-known media figures in the world of AI. Yeah,
absolutely. You know, Dwarkesh seemingly came out of nowhere
a few years back and quickly became well-respected
for his highly technical, deeply researched interviews
with some of the leading figures, not just in AI,
but also in history and other disciplines.
He is a relentlessly curious person,
but I think one of the reasons why he is so interesting
to us is on the subject of AI,
he really has just developed an incredible roster of guests
and a great understanding of the material.
Yes, and now as of this week, he has a new book out,
which is called The Scaling Era,
An Oral History of AI, 2019 to 2025.
And it is mostly excerpts and transcripts from his podcast
and the interviews that he's done with luminaries in AI.
But through it, he kind of assembles the history
of what's been happening for the past six or so years
in AI development, talking to some of the scientists
and engineers who are building it,
the CEOs who are making decisions about it,
and the people who are reckoning with what it all means.
Indeed. So we have a lot to ask Dwarkesh about,
and we're excited to get him into the studio today
and hang out.
All right, let's bring in Dwarkesh Patel.
Let's do it.
Let's do it.
Dwarkesh Patel, welcome to Hard Fork.
Thanks for having me.
I want to start with the Dwarkesh origin story.
You are 24 years old.
Yep. Correct.
You graduated from UT Austin.
You majored in computer science.
I'm sure a lot of your classmates
and people with your interest in tech and AI
chose the more traditional path of going to a tech company,
starting to work on this stuff directly.
Presumably that was a path that was available to you.
Why did you decide to start a podcast instead?
So it was never my intention for this to become my career.
I was doing this podcast basically in my free time.
I was interested in these economists and historians,
and it was just cool that I could cold email them and get them to come on
my podcast and then pepper them with questions for a few hours.
Then when I graduated,
I didn't really know what I wanted to do next.
So the podcast was almost a gap year experience of,
let me do this, it'll help me figure out what kind of
startup I want to launch or where I can get hired.
Then the podcast just went well enough that dot, dot, dot.
I'm like, this could actually be a career.
This is a more fun startup than whatever,
code monkey, third settings screen in Android kind of job. So
basically just kept it up and it's grown ever, ever since and
it's been a fun time.
Yeah, I mean, I'm curious how you describe what you do. Do you
consider yourself a journalist?
I guess so. I don't know if there's a good word. I mean,
there's like journalists, there's content creator,
there's blogger, podcaster, sure, journalist, yes.
Humanitarian.
I ask because I started listening to your podcast
a while ago, back when it was called The Lunar Society.
And the thing that I noticed right away
was that
you were not doing a ton of explanation and translation.
Like I often think of our job as journalists
as one primarily of translation,
of taking things that insiders and experts are talking about
and like making them legible to a broader
and less specialized audience.
But your podcast was so interesting to me
because you weren't really doing that.
You were kind of not afraid to stay
in the sort of wonky insider zone.
You were having conversations
with these very technical experts
in their native language,
even if it got pretty insidery and wonky at times.
Was there a theory behind that choice?
No, honestly, it never occurred to me
because nobody was listening in the beginning, right?
So, I think it was a bad use of my guests' time
to have said yes in the first place.
But now that they've said yes, like,
let's just have fun with this, right?
Like, who is listening to this?
It's me, it's for me.
And then what I realized is that people appreciated that style of...
Because with a lot of these people, they've done so many interviews.
You've heard their, you know,
what is your book about kind of thing before.
The intuition I always go for is,
pretend like you're at dinner with this person.
And if you're at dinner with them,
you just ask them about your main cruxes.
Like, here's why, here's, you know, what's going on here.
Here's why I disagree with you.
You tease them about like their big ideas or something.
But initially it was just an accident.
I think in mainstream media, we are
terrified that you might read something we write
or listen to something we do and not
understand a word of it, because there's always
an assumption that that is the moment
that you will stop reading.
I think what you have discovered with your podcast
is that that's actually a moment that causes people to lean in
and say, I didn't get all of that,
but I'm getting enough of it that I'm curious,
what is this thing that is going to happen next?
Right.
And everyone's got Google, right? Or ChatGPT.
So if you don't understand something,
you can always look it up in a way that may not
have been possible with talk radio back in the day.
Yeah, that's right.
So you've got this new book out, The Scaling Era, basically
a sort of oral history of the past six or so years of AI
development.
Tell us about the book.
So I have been doing these interviews with
the key people thinking about AI over the last two years.
CEOs like Mark Zuckerberg and
Demis Hassabis and Dario Amodei.
Researchers at a deeply technical level,
economists who are thinking about what will
the deployment of these technologies be like,
philosophers who are talking about
these essential questions about AI ethics and how
will we align systems that are millions of times more
powerful or at least more plentiful.
These are some of the most gnarly,
difficult questions that humanity has ever faced.
What is the true nature of intelligence?
What will happen when we have millions of
intelligent machines that are running around in the world?
Is the idea of superhuman intelligence even a coherent concept?
Like, what exactly does that mean?
And what exactly will it take to get there, obviously?
So, it was such a cool experience to just see all of that organized in this way.
We would have annotations and definitions and just beautiful graphs.
My co-author Gavin Leech and our editor Rebecca Hiscott
and the whole team just did a wonderful job making this really beautiful artifact, so that's the book.
I also really liked the way that the book sort of slows down
and explains some of these basic concepts,
footnotes, the relevant research.
Like you really do, it is more accessible than I would say
the average episode of the Dwarkesh Podcast
in the sense that you can really start from,
like I would feel comfortable giving this to someone
as a gift who doesn't know a ton about AI
and sort of saying like, this is sort of a good primer
to what's been happening for the past few years
in this world.
And it won't treat you like an idiot.
Like a lot of these other AI books are just about this,
oh, big picture, how will society be changed?
And it's like, no, to understand AI, you need to know
like what is actually happening with the models,
what is actually happening with the hardware,
what is actually happening in terms of like actual investments and capex and whatever.
And we'll get into that.
But also because of this enhancement with the notes
and definitions and annotations, we still, you know,
it's written for, like, a smart college roommate
in a different field.
Yeah.
One question that you asked at least a couple people
in your book some version of was basically,
what's their best guess at why scaling works?
Why pouring more compute and more data into
these models tends to yield something like intelligence?
I'm curious what your answer for that is.
What's your current best guess of why scaling works?
I honestly don't think there's a good answer anybody has.
The best one I've heard is this idea that intelligence is
just this hodgepodge of different kinds of circuits and programs.
This is so hand wavy and I acknowledge this is hand wavy,
but you got to come up with some answer.
That fundamentally what intelligence is,
is this pattern matching thing,
this ability to see how different ideas connect and so forth.
As you make this bucket bigger,
you can start off with noticing,
does this look like a cat or not?
Then you get to higher and higher levels of abstraction,
like what is the structure of time and
the so-called ether and the speed of light and so forth.
Again, so hand wavy,
but I think it ultimately will just be this hodgepodge.
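[Editor's note: whatever the explanation turns out to be, the regularity itself is well documented. As one reference point, not something cited in the episode, DeepMind's "Chinchilla" paper (Hoffmann et al., 2022) fit language-model training loss as a simple function of parameter count N and training tokens D. A sketch using the paper's published coefficients:]

```python
# Sketch of the Chinchilla scaling-law fit (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# N = parameters, D = training tokens. Coefficients are the paper's
# published fits; this illustrates the empirical curve, nothing more.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss keeps falling smoothly as both axes are scaled up 100x:
print(chinchilla_loss(1e9, 2e10))   # ~2.58
print(chinchilla_loss(1e11, 2e12))  # ~1.91
```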
It does strike me that this just sort of feels similar to the way that human beings work.
Like you're born into the world and you just essentially get blasted with data for many,
many years until you have some kind of symbolic understanding of everything and then you go
from there.
So that's how I think about it.
Yeah, I mean, there seems to be this sort of philosophical divide among the AGI believers
and the AGI skeptics over the question of
whether there is something other than just materialism in intelligence, whether it is
just like, intelligence is just a function of having the right number of neurons and
synapses firing at the right times and sort of pattern matching and doing next token prediction.
I'm thinking of this famous Sam Altman tweet where he posted, I am a stochastic parrot and so are you. Basically, he was rebutting
this sort of common attack on large language models, which was that they were just stochastic
parrots. They're just learning to sort of regurgitate their training data and predict the
next token. And among a lot of the sort of AGI true believers that I know, there is this feeling
that we are just essentially doing
what these language models are doing in predicting
the next tokens or synthesizing things that we've
heard from other places and regurgitating them.
That's a hard pill for a lot of people to swallow, including me.
I'm not quite a full materialist.
Are you? Do you believe that there's something about
intelligence that is not just
raw processing power and data and pattern matching?
I don't.
I mean, it's hard for me to think about what that would be.
There's obviously religious ideas about,
there's maybe a soul or something like that,
but separate from that,
something we could like sort of have a debate about
or analyze.
Yeah, and actually I'm curious about like,
what kind of thing could it be?
Ethics?
I don't know, like that sounds very fuzzy
and non-scientific, but like,
I do think there is something essential about intelligence
and being situationally intelligent
that requires like something outside
of your immediate experience,
like knowing what is right and what is wrong?
Well, I think one reason why this question might be a bit challenging
is that there are still many areas where the AI we have today
is just less than human in its quality level, right?
Like, these machines don't really have common sense.
Their memories are not great.
They don't seem to be great at acquiring new skills, right?
If it's not in the training data,
sometimes it's hard for them to get there.
And so it does raise the question,
well, is the kind of intelligence we have
kind of categorically different
than whatever this other kind of intelligence is
that we're inventing?
Yeah. That's right.
On the ethics thing,
I think it's notable that if you talk to GPT-4, it has a sense of ethics.
If you talk to Claude, it has a sense of ethics. It will tell you. You can ask it, like, what do you think about animal ethics?
What do you think about this kind of moral dilemma? Like, it has this.
I mean, I'm not sure what you mean by a sense of ethics.
In fact, the worry is that it might have too strong a sense of ethics. Right.
And by that, I'm referring to the worry that maybe its ethics becomes, like, I want more paper clips. Or, I mean, sorry, on a more serious note.
But those ethics are given to it in part by the process of training and fine tuning the
model or making it obey some constitution, like...
Where do you think you get your ethics?
Who trained you, Bruce?
Yeah, I mean...
I mean, it is notable that most people in a given society
share the same basic worldview,
that like you and I agree on 99% of things,
and we would probably agree on like 50% of things
with somebody in the year 1500.
And the reason we agree on so much
has to do with our training distribution,
which is this real, you know, the society we live in.
Yeah.
So yeah, I mean, maybe this argument
that there is something more to intelligence than
just brute force computation is somewhat romantic.
That's what they call it, cope.
Yes.
I was trying to figure out a more sophisticated way of saying cope.
But, like, do you think that is cope?
Do you think that the people who are sort of skeptical of the possibility of AGI because
they believe that computers lack something essential that humans have is just a response to
not being able to cope with the possibility that computers could replace them.
I think there's two different questions.
One is, is it cope to say that we won't get AGI in the next two years or three years or whatever short timelines that some people in San Francisco,
some of our friends seem to have? I don't think that's cope. I think there's actually a lot of
reasonable arguments one can make about why it will take a longer period of time.
Maybe it'll be five years, 10 years. Maybe this ability to, as you were saying, keep this coherence
and engage with a task over the course of a month just requires a different kind of skill
than these models currently have. I don't think that's cope. I think the idea that we'll never get there is cope because there's always this argument
of the god of the gaps, of the intelligence of the gaps.
The thing it can't do is a thing that is fundamentally human.
One notable thing, Aristotle had this idea that what makes us human is fundamentally
our ability to reason.
And reasoning is the first thing these models have learned to do.
Like they're like not that useful at most things except for raw reasoning.
Whereas the things we think of just as pure reptile brain of having this understanding
of the physical world as we're moving about it or something, that is the thing that these
models struggle with. So we'll have to think about what is the archetypical human skill set as these models advance.
That's fascinating. That never actually occurred to me.
I think it speaks a lot to why people find them so powerful in
this therapist mentor coach role,
is that those figures that we bring into
our lives are often just there to help us reason through something.
These models are increasingly very good at it.
Yeah.
Yeah.
In your conversations with all these AI researchers and industry leaders,
are there any blind spots that you feel they have consistently or places where they are
not paying enough attention to the consequences of developing AI?
I think they do not,
with a few notable exceptions, they don't have a concrete sense of what things
going well looks like and what stands in the way.
If you just ask them what the year 2040 looks like, they'll say things like, oh, we'll cure
cancer, we'll cure these diseases.
But what is our relationship to billions
of advanced intelligences?
How do we do redistribution such that the over,
I mean, it's not your or my fault
that we'll be out of a job, right?
There's no in principle reason why everybody
couldn't be better off, but there shouldn't be
the zero something where we should make sure
the AIs don't take over.
And we should also make sure we don't treat them terribly. Hmm. Something else that's been on my mind recently
that you're sort of getting at is, or that maybe you are getting at with your question, Kevin,
is how seriously do the big tech companies take the prospect of AGI arriving?
Because on one hand, they'll tell you, we're the leading frontier labs,
we're publishing some of the best research, we're making some of the best products, and yet it seems like none of them are really reckoning with any of the
questions that you just raised.
It sort of makes sense, even saying some of the stuff that you just said right now, which
seems quite reasonable to me, would sound weird if Satya Nadella were talking about
it on an earnings call, right?
And yet at the same time, I just want...
Quarter four was just strong with 4,000 happy AIs,
growing 10% year over year.
Right. But like, on some level, it's weird to me.
You know, somebody recently was talking to me about Google
and was sort of saying, if you look at what Google is shipping right now,
it doesn't seem like they think that very powerful intelligence
is going to arrive anytime soon. What they're
taking seriously is the prospect that ChatGPT will replace Google and search. And that maybe
if you actually did take AGI seriously, you would have a very different approach to what you were
doing. So as somebody who has like talked to the CEOs of these companies, I'm curious, how do you
rate how seriously they're actually taking AGI? I think almost none of them are AGI-pilled.
Like they might say the word AGI,
but if you just ask them,
what does it mean to have a world with
like actually automated intelligence?
There's a couple of immediate implications.
So right now, these companies are competing with
each other for market share in chat.
If you had a fully autonomous worker,
even a remote worker,
that's worth tens of trillions of dollars.
That's worth way more than a chatbot, right?
So you'd be much more interested in
deploying that kind of capability.
I don't know if API is the right way,
maybe it's like a virtual machine or something.
I'd just be much more interested in developing the UI,
the guardrails, whatever, to make that work,
than trying to get more people to use my chat app.
Then I also think compute will just be this huge bottleneck if you really believe
that what compute buys you is a human-level worker. Human intelligence is worth a lot, right?
Like, just look at GDP per capita, it's like $70,000 or something.
So I would just be interested in getting as much compute as possible to have it ready to deploy
once the AIs are powerful enough. One of the things I really enjoyed about your book is getting a sense,
not just of what the people you've interviewed think about AI and AGI and scaling,
but what you believe.
I have to say I was surprised at the end of the book you said that you believe AI is
more likely than not to be net beneficial for humanity.
I was surprised because a lot of the people you talk to have quite high P-Dooms.
They're quite worried about the way AI is going.
That seems not to have spread to you.
You seem to be much more optimistic than some of your guests.
Is that just a quirk of your personality or why are you more optimistic than the people
you interview?
If you have a P-Doom of 10% or 20%,
that is, first of all, unacceptable.
The idea that everything you care about,
everybody you care about,
could in some way be extinguished, disempowered, so forth.
That is just an incredibly high number.
Just like, let's say nuclear weapons is like a doom scenario.
If you're like, should I go to war with this country,
and there's a 20% chance that there's no humans around afterward,
you should not take that bet.
But it's harder to maybe...
express the kinds of improvements which are...
This will sound very utopian,
but we do have peak experiences in our life.
We know that, or we have people we really care about,
but we know how beautiful life can be,
how much connection there can be,
how much joy we can get out of,
whether it's learning or curiosity
or other kinds of things.
And there can just be many more people,
us, digital, whatever, who can experience it.
And there's another way to think about this,
because it's fundamentally impossible
to know what the future holds.
But one intuition here is, imagine I gave you the choice,
I'll send you back to the year 1500.
Tell me the amount of money I would have to give you,
but you can only use that money in the year 1500,
such that it would be worth it for you
to go back to the year 1500.
I think it's quite plausible the answer is,
there's no amount of money I'd rather have in the year 1500
than just be alive right now
with my normal standard of living.
And I think, I hope,
we'll have a similar relationship
with the future.
What is your post-AGI plan?
Do you think that you will be podcasting?
Will you still hang out with us?
It's funny, because we have our post-AGI careers already.
Even after the AGI comes, they might automate everybody else
in this office, but you and I will just
get in front of a camera and...
That there will still be value in sort of like having
a personality, being able to talk, explain,
being somebody that people relate to on a human level.
That's right, I think so.
I am curious though, because a thing that I know about you
from our brief interactions and just reading things
that have been written about you,
is that you believe in learning broadly. You have been described as a person who's on a quest
to learn everything.
Sounds exhausting.
Casey's on a quest to learn nothing.
I'm on a quest to learn what I need to learn.
Just in time manufacturing.
I think a lot of people right now, especially students
and younger people, are questioning the value
of accumulating knowledge.
We all have these, like, pocket oracles now
that we can consult on basically anything.
And sometimes, I think, I was at a school last week
talking with some college students.
And one of them basically said they felt like they were
a little bit like the taxi drivers in London
who still had to memorize all the streets
even after Google Maps was invented.
And that was sort of obsolete.
They felt like they were just doing it
for the sake of doing it.
I'm curious what for you the value
of broad knowledge accumulation is
in an age of powerful AI.
The thing I would say to somebody
who is incredibly dismayed, who's asking,
why am I going to college,
why is any of this worth it, is this:
if you believe AGI, ASI is gonna be here in two years,
that's fine.
I don't think that's particularly likely.
And if it is, what are you gonna do about it anyways?
Right, so you might as well focus on the other worlds.
And in the other worlds, what's gonna happen before
the fully automated robot that's automating the entire economy
is that these models will be able to help you at certain kinds
of tasks, but they will fundamentally just give you
more leverage on the world.
My friend Sholto Douglas put it this way.
Just imagine you're going to have 100x the amount of leverage
on the future.
And the kinds of things that you will be in a good position to do is,
if you have deep understanding of a particular industry,
the relevant problems in it,
and it's hard to give advice in the abstract like this,
because I don't know about these industries,
so you'll have to figure it out.
But this is probably the time to be the most ambitious,
to have the most amount of agency,
to actually, these models currently aren't really good
at actually doing things in the real world,
or even the digital world.
If you can do that and use these as leverage,
this is probably the most exciting time to be around.
Here's my answer for that.
You don't want to be in a world
where you just have to ask ChatGPT everything.
Do you know what I mean?
Like there's a lot of effort involved
in just sitting down, writing the prompt,
reading the report that comes out of it, internalizing it,
synthesizing, relating it, like, you'd be better off actually
just getting an education and then checking in with the chatbot
for the things that the chatbot is good at, at least for, you
know, I don't know, the next few years.
Yeah, I don't know, I believe that and I want to believe that
that the thing I've spent my life doing is not going to be
obsolete. Trying to be smarter and learn things.
My sort of guiding principle on this is like, learning is fun.
And if you can just do it for your own enjoyment,
like, I don't think learning the streets of London is that fun,
but I think learning broadly about the world is fun.
And so you should do it if it's exciting and fun to you.
Absolutely. I think that's totally correct.
I also...
If I'm actually talking to a younger version of myself...
Who would be six years old, to be clear?
Who's the young man we're talking to today?
Hey, little buddy.
Yeah.
Um...
I... Just advice on careers in general is so bad.
And for...
Especially with how much the world's gonna be changing, it's going to get even worse. And so, I mean, what kind of
reasonable person would have told me four years ago, man, this computer science stuff, just stop that,
focus more time on the podcast, right? So yeah, it's going to change a lot, I think.
But see, that's not helpful.
Like what are you going to do with this idea that like all advice is wrong?
It would be an even worse position.
Just this idea that like, yeah,
be a little bit skeptical of advice in general.
Really trust your own intuition, your own interests.
Don't be delusional about things obviously,
but yeah, explore.
Try to get a better handle on the world and do more things
and run more experiments than just,
this is the thing that's gonna be high leverage in AI
and that's where I'm gonna do this
based on this first principles argument.
Yeah, I think run more experiments
is just really great underused advice.
Is that why you built a meth lab in your house?
Yeah, it's going great for me.
Buy me that hot tub.
Okay, this was great. Thank you so much, Dwarkesh.
Thanks, Dwarkesh.
This is fun.
Thanks for having me on, guys.
Well, Kevin, when we come back,
we asked listeners whether they thought AI
might be affecting their critical thinking skills.
It's time to reveal what they all told us. Well, Casey, a couple of weeks ago, we talked about a study that had come out from
researchers at Carnegie Mellon and Microsoft
about AI and its effects on critical thinking.
That's right. We wanted to know how our listeners
felt about how AI was affecting their critical thinking.
So we asked people to send in their e-mails and voicemails.
Yeah. We got so many responses to this. I mean, almost 100 responses from our listeners
that reflected kind of the more qualitative side of this of how people actually feel like AI is
impacting their ability to think and think deeply. Yeah, and look, there may be a bit of a selection
effect in here. I think if you think AI is bad and destroying your brain and don't touch the stuff, you
probably are not sending us a voicemail.
But at the same time, I do think that these responses show kind of the range of experiences
that people are having.
And so yeah, we should dive in and find out what our listeners are feeling.
Okay, so first up, we're going to hear from some listeners who felt strongly that AI was not making them
dumber or worse at critical thinking,
who believe that it is enhancing their ability
to engage critically with new material and new subjects.
So let's play one from a perspective
that we haven't really engaged with a lot on this show
so far, which is people of the cloth.
My name is Nathan Bourne and I'm an Episcopal priest. A big part of my work is putting things
in conversation with one another. I'm constantly finding stories, news articles, chapters of
books, little bits of story that people have shared with me and interpreting them alongside
scripture. I've long struggled to find a good system to keep track of all those little bits
I've found. Over the last year, I've turned to AI to help. I've used the Readwise app to better store, index,
and query pieces that I've saved.
I've also used Claude to help me find material
that I would never encounter otherwise.
These tools have expanded my ability to find
and access relevant material that's helped me
think more deeply about what I'll preach
and in less time than I used to spend sifting
through Google results and the recesses of my own hazy memory.
Wow, I love this one. This one was particularly fascinating to me because I've spent some time
working on religion related projects. I wrote a book about going to Christian College
many years ago, and I've spent a lot of time in church services over the years. And so much of what the church services that
I've been in have done is try to, like, find a modern spin or a modern take or some modern insights
on this very old book, the Bible.
And I can imagine AI being very useful for that.
Oh, yeah, absolutely.
I mean, this feels like a case where Nathan is almost setting aside the question of AI
and critical thinking and just focusing
on ways that AI make his researching and writing that he has to do every week much easier,
right?
Like these are just very good, solid uses of the technology as it exists, and they're
still leaving plenty of room to bring his own human perspective to the work, which I
really appreciate.
And, you know, of course, always love to hear about a man of the cloth sort of clasping
his hands together and saying, Claude, help me.
All right, let's hear the next one.
This is from a software engineer named Jessica Mock, who told us about how she's taking a
restrained approach to asking AI for help with coding. When I was being trained, my mentor told me
that I should avoid using autocomplete.
And he said that was because I needed to train my brain
to actually learn the coding.
And I took that to heart.
I do that now with AI.
I do use Copilot, but I use it for floating theories, asking about things
that I don't know. But if it's something that I know how to do, I put it in myself and then
I ask Copilot for a code review. And I found that to be pretty effective. My favorite use
of Copilot though is what does this error mean when I'm debugging?
I love asking that because you get more context into what's happening and then I start to
understand what's actually going on.
Is it making me dumber?
I don't think so.
I think it's making me learn a lot.
I'm jumping into languages that I was never trained in and I'm trying things that I normally would have shied away from.
So I think it really depends on how you use it.
So I love this one.
If you talk to software engineers about how they solve problems,
a lot of what they'll do is just ask a senior software engineer.
And that creates a lot of roadblocks for people, because that senior software engineer might be busy doing something else, or you might just feel a little bit shy about asking them 15 questions a day.
What Jessica is describing is a way where she just kind of
doesn't have to do that anymore.
She can just ask the tool, which is infinitely patient,
has a really broad range of knowledge,
and along the way she feels like she is leveling up
from a more junior developer to a senior one.
That's pretty cool.
Yeah, I like this one.
I think it also speaks to something that I have found
during my vibe coding experiments with AI
is that it does actually make me wanna learn how to code.
Like even though it is probably unnecessary
for me to learn how to code, to build stuff,
and will become increasingly unnecessary,
there is sort of just this like intellectual kick
in the pants where it's like, you know,
if you just like applied yourself for a few weeks, you could probably learn a little bit of Python and start to understand
some of what the AI is actually doing here.
Absolutely. You know what makes me reliably want to finish a video game? It's getting
a little bit good at a video game, right? If I'm starting out and I can't like sort
of figure out how to tie my shoes, I'll throw it away. But that moment where you're like,
oh, I get this a little bit, it unlocks this whole world of curiosity. And it sounds like AI is maybe, you know, giving Jessica that experience.
Jessica's message also highlights something really important, which is we actually know who the worst writers in the world are, and they are the people that wrote the error messages, right? How many times have you just seen a pop-up that says, well, you hit error 642, try again. Wait, what is error 642? And it turns out all that information was on the internet, and AI has now made it accessible to us and helps us understand. So if nothing else, AI has been good for that.
Yeah. All right. This next one comes to us from a listener named Gary. He's from St. Paul, Minnesota, which is one of the Twin Cities, Kevin, along with Minneapolis. And it points to the importance of considering different learning challenges or disabilities
when considering this question of AI's impact
on critical thinking.
Let's hear Gary.
I'm a 62-year-old marketing guy who does a lot of writing.
I'm always trying to get new ideas,
keep track of random thoughts.
And I also have ADHD, so I get a ton of ideas, but, um, I was
looking at a ton of distractions, to be honest.
And so what I've found with AI is I get to have a thought partner, if you will,
who, um, can help me just download all of these different ideas that I've got.
Um, and, you know, if I need to follow a thread, I can follow a
thread by asking more questions.
At the end of one of these brainstorming sessions, I can say, just recap everything that we came up with, give it to me in a list.
And all of a sudden, my productivity just gets massively improved, because I don't have to go back and sort through all of these different notes, all of these different things I've jotted down all over, and can sort through what's real and what isn't real.
So it has been super helpful to me in that way.
Kevin, what do you make of this one?
Yeah, I like this one because I think that one of
the things that AI is really good for is people with
not just like challenges or disabilities with learning,
but just different learning styles.
One of the most impressive early uses of
ChatGPT that I remember hearing about was the use in
the classroom to tailor a lesson to a visual learner,
or an auditory learner,
or just someone who processes
information through metaphors and comparisons.
It is so good at doing that kind of work of making something
accessible and personalized
to the exact way that someone wants to learn something.
Yeah. I imagine that Gary may be doing this already,
but the sort of use cases that he's describing seem like they would be great
for somebody who wants to use one of these voice mode technologies.
Yes.
I'm somebody who's most comfortable on a keyboard, but there are so many people
that just love to record notes to self, and there are now a number of AI tools
that can help you
organize those and turn them into really useful documents.
And so if you're the sort of person that just wants to
let your mind wander and talk into your phone for a few minutes,
and then give to the AI the job of making it all make sense,
we have that now, and that is kind of crazy and cool.
Yeah.
All right. Let's do one more in this camp of
people who don't think that AI is
making them dumber or worse at critical thinking.
My name is Anna and I live in a suburb of Chicago.
I wanted to share a recent experience I had with
AI and how it made me think harder about solving a problem.
I'm self-employed and don't have the benefit of a team to help me if I get stuck on something. I was using an app called Airtable, which is a
database product. I consider myself an advanced user, but not an expert. I was trying to set up
something relatively complex, couldn't figure it out, and couldn't find an answer in Airtable forums. Finally, I asked ChatGPT.
I explained what I was trying to do in a lot of detail and asked ChatGPT to tell me how I should configure Airtable to get what I was looking for.
ChatGPT gave me step-by-step instructions, but they were incorrect.
I prompted ChatGPT again and said, Airtable doesn't work that way.
And ChatGPT replied, you're right.
Here are some additional steps you should take.
The resulting instructions were also incorrect,
but they were enough to give me an idea
and my idea worked.
In this example, the back and forth with ChatGPT
was enough to help me stretch the skills
I already had into a new use case.
I love this one, because I think what made AI helpful to Anna in this case is not that she used it and it immediately gave her good information. It's that she knew enough about it to recognize when she wasn't getting good information from the AI, and so to do her own deeper dive based on her experience.
My worry is that people who aren't Anna,
who aren't sort of deeply thinking about these things
will just kind of blindly go with whatever the AI tells them,
and then if it doesn't work, they'll just kind of give up.
I think it really is a credit to her
that she kept going and kept figuring out
what is the real solution to this problem.
It is a risk, but let me just say, and this is just kind of a free tip for your life,
if you are someone who struggles with using software,
I increasingly believe that one of the best uses of chatbots is just asking them to explain to you how to use software.
I recently got a PC laptop, and everything is different than what I've been used to for the past 20 years of using a computer. But my PC has a little Copilot button on it, and I can press it and say, how do I connect an Xbox controller to this thing? And it told me in 10 seconds. Saved me a lot of Googling. So anyway, Anna, you're onto something here.
It said, get a life.
It did say that. I was offended. Shame on you, Copilot. All right. Now let's hear from some listeners, Kevin,
who are more skeptical about the way AI might be affecting
their own cognitive abilities,
or maybe their students' ability to get their work done.
For this next one, I want to talk about an email we got
from a professor, Andrew Fanno,
who conducted an experiment in a class he teaches
for MBA students at Northwestern.
Northwestern, of course, my alma mater, go Wildcats.
And that is why we selected this one.
And Andrew sent us a sort of longer story
about a class that he was teaching.
And the important thing to know about this class
is that he had divided the students into two groups.
One group could use computers, which meant also using large language models, and the other group could not.
And then he had them present their findings. And when the computer group presented,
he told us that they had sort of much more creative ideas, more outside the box, and that those solutions
involved listing many of the items that the LLMs had proposed for them.
And one of the reasons that Andrew thought that was interesting was that many of the ideas that they presented
were ones that had actually been considered and rejected by the people who were not using the computers
because they found those ideas to be sort of too outlandish.
And so the observation that Andrew made about all of this was that the computer-using group saw these AI-generated ideas as something that they could present without them reflecting negatively on themselves, because they weren't their ideas; these were the computer's ideas. And so it was like the LLMs were giving them permission to suggest things that might otherwise seem embarrassing or ridiculous. So what do you make of that?
That's interesting. I mean, I usually think of AI as being kind of a flattener of creative ideas, because it is just sort of trying to give you, like, the most, you know, predictable outputs. But I like this angle where it's, like, actually, you know, giving you the permission to be a little weird, because if someone hates the idea, you can just say, oh, that was the AI.
Yeah, don't blame me. Blame this corpus of data that was harvested from the internet.
Which is why I plan,
if anyone objects to any segments
that we do on the show today or in the future,
I do plan on blaming ChatGPT.
That was the ChatGPT's idea.
Yeah, interesting.
If it's a good segment, I did it.
If not, it was Claude.
All right, let's move to another listener message.
This one's from a listener named Katya,
who's from Switzerland.
She told us about how looming deadline pressure caused her to
maybe over-defer to AI outputs.
She wrote, quote, last semester,
I basically did an experiment on this myself.
I was working on a thesis during
my master's studies and decided to use some help.
My choice fell on Cursor,
which is one of these AI coding products.
She writes,
initially, I intended to use it for small tasks only, just to be a bit faster, but then the deadline was getting closer, panic was setting in, and I started using it more and more. The
speed was intoxicating. I went from checking every line of code to running rounds of automatic
bug fixing without understanding what the problems were or what was being done.
So I actually think this is the most important email that we've gotten so far, because it
highlights a dynamic that I think a lot of people are going to start feeling over the
next couple of years, which is my bosses have woken up to the fact that AI exists. They're
gradually raising their expectations for how much I can get done.
If I am not using the AI tools
that all my coworkers are now using,
I will be behind my coworkers
and I will be putting my career at risk, right?
And so I think we're gonna see more and more people
do exactly what Katya did here
and just use these tools like Cursor.
And while, you know, to a certain level, I think that's okay, we've always used productivity tools to make ourselves
more productive at work. There is a moment where you actually just stop understanding
what is happening. And that is a recipe for human disempowerment, right? At that point,
you're just sort of barely supervising a machine, and the machine is now doing most of your job.
So, this is kind of like a small story
that I think contains a dark warning
about what the future might look like.
Yeah, I think that kind of mental outsourcing
does worry me, the sort of autopilot of human cognition.
An analogy I've been thinking about recently
in trying to
distinguish between tasks that we should outsource to AI and
tasks that we probably shouldn't
is forklifting versus weightlifting.
Okay. Tell me about this.
So there are two reasons that you
might want to lift heavy things.
One of them is to get them from point A to point B for some purpose.
Maybe you work in a warehouse.
Obviously, you should use a forklift for that.
There's no salutary benefit to carrying heavy things
across a warehouse by yourself.
And that's very slow, it's very inefficient,
and the point of what you're doing
is to try to get the thing from point A to point B.
Use a forklift for that.
Weightlifting is about self-improvement.
Weightlifting is, yes, you could use a machine
to lift this heavy object,
but it's not going to make you stronger in any way.
The point of weightlifting is to improve yourself
and your own capabilities.
So I think when you're in a situation
where you have the opportunity or the choice of using AI
to help you do some task,
I think you should ask yourself
whether that task
is more like forklifting or more like weightlifting
and choose accordingly.
I think it is a really good analogy,
and people should draw from that.
I want to offer one last thought of my own, Kevin,
which is that while I think it is important
to continue this conversation of, how is AI affecting my critical thinking? I think in this last anecdote, we see this other fear being raised, which is, what if the issue isn't, do I still have my critical thinking skills? What if the actual question is, do I have time to do critical thinking?
Because I think that one effect of these AI systems
is that everybody
is going to feel like they have less time, the expectations on them have gone up at work,
they're expected to get more done because people know that they have access to these
productivity tools. And so you might say, you know what, I actually really want to take
some time on this and I don't want to turn to the LLM and I want to bring my own human
perspective to this. And you're going to see all your co-workers not doing that, and it is just going to drag you into doing less and less of that critical thinking over time. So while I think, you know, is AI making me dumber? is a really, like, interesting and funny question that we should keep asking, I think, am I going to have the time that I need to do critical thinking? might actually be the more important question.
Yeah, that's a really good point. All right, well that's enough critical thinking for this week.
I'm gonna go be extremely ignorant for the next few days
if that's okay with you, Kevin.
That's fine by me. Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited this week by Matt Collette.
We're fact checked by Ina Alvarado.
Today's show is engineered by Alyssa Moxley.
Original music by Marion Lozano, Diane Wong, and Dan Powell. Our executive producer is
Jen Poyant. Our audience editor is Nell Gallogly. Video production by Chris Schott, Sawyer Roque,
and Pat Gunther. You can watch this whole episode on YouTube at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com,
or if you're planning a military operation,
just add us directly to your Signal chats.