Hard Fork - Meta on Trial + Is A.I. a ‘Normal’ Technology? + HatGPT
Episode Date: April 18, 2025. This week Meta is on trial, in a landmark case over whether it illegally snuffed out competition when it acquired Instagram and WhatsApp. We discuss some of the most surprising revelations from old email messages made public as evidence in the case, and explain why we think the F.T.C.'s argument has gotten weaker in the years since the lawsuit was filed. Then we hear from Princeton computer scientist Arvind Narayanan on why he believes it will take decades, not years, for A.I. to transform society in the ways the big A.I. labs predict. And finally, what do dolphins, Katy Perry and A1 steak sauce have in common? They're all important characters in our latest round of HatGPT. Tickets to Hard Fork Live are on sale now! See us June 24 at SFJAZZ. Guest: Arvind Narayanan, director of the Center for Information Technology Policy at Princeton and co-author of "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference." Additional Reading: What if Mark Zuckerberg Had Not Bought Instagram and WhatsApp?; AI as Normal Technology; One Giant Stunt for Womankind. We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Okay, Casey, I'm having a little mystery this week.
We love a mystery on the show.
Because I was traveling last week,
and I was at the Newark, New Jersey airport,
and I realized as I was leaving the airport
that I had left my Apple Watch at the airport.
So at some point, you just looked down at your wrist,
and the Apple Watch is gone.
Yes. Okay.
And then, of course, I went to the Find My app
to try to locate my Apple Watch, and it had moved.
So in the span of about an hour,
I went from the Newark airport to a vape shop
in Kearny, New Jersey.
That's not good.
No.
That means it probably was not accidental
that this thing just disappeared off your wrist.
Yeah, maybe not.
And I put it into lost mode too.
Have you ever done lost mode?
You live your life in lost mode.
I've.
Yes.
Yes, your life is one big lost mode.
But this is a thing that you can do
where if someone picks up your Apple Watch,
it'll like say, here's my phone number, please call me.
Yeah.
So the thief, whoever it is, has my number on the watch.
Have you thought about just changing the message
to something really mean?
Just be like, nice vapes you have there.
What are you, 13?
Cool vapes, bro.
Yeah, then give me back my watch.
Say I'm watching you vape.
That's what I would change it to.
I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, Meta goes to trial in an antitrust
case. Here's what we've learned from the testimony so far. Then, "AI Snake Oil" author Arvind Narayanan
joins us to make his case that AI
has been massively overhyped.
And finally, it's time for another round of HatGPT.
Casey, we have a very exciting announcement to start the show this week. Yes, we do, Kevin.
Tell the people.
We are doing a live show.
Hard Fork Live is coming to San Francisco on Tuesday, June 24th.
Kevin, since almost the start of doing this podcast,
we have been clamoring to get out there,
to get listeners in a room with us and do what we do live,
but maybe with some fun twists and turns,
and we are finally ready to share that with the world.
Yes, this is going to be a very fun night at SF Jazz.
We have all kinds of surprises in store,
including some very special guests.
So come on out and hang out with us,
spend the night with us.
Yes, and here's what I will say.
If you wanna have your bachelorette party
at Hard Fork Live, we'll take pictures with you.
Think about it.
Now listen, I know that you're already sold,
so how do you get tickets?
Well, it's simple, gang.
Go to nytimes.com slash events slash Hard Fork Live,
and you can find the tickets right there,
or you can just click the link in the show notes.
Yes, there will be Hard Fork merch at Hard Fork Live.
There may be dancing, there may be pyrotechnics.
Here's the point, if you don't go,
you're not gonna know what happened.
Exactly.
And I think you are gonna wanna know.
Yeah, it's gonna be great.
June 24th SF Jazz tickets just went on sale.
Snap them up, cause they're not gonna last forever.
Yes, buy your tickets today at nytimes.com
slash events slash hard fork live.
Now that's enough of that, Kevin, let's get to the news.
Yeah, so the big story this week that we wanna start with
is that Meta is finally going on trial.
This is an antitrust case that was brought
by the Federal Trade Commission years ago,
and this week it started the actual trial
in the U.S. District Court in Washington, D.C.
This is a big case.
It is one of the largest antitrust cases
brought over the last decade against a major tech company.
And it has potentially big consequences.
One of the remedies that the government
is trying to push for here is that
Meta would be forced to divest Instagram
and WhatsApp, basically undoing those acquisitions
that Facebook made many years ago.
Yeah, and while that might not be a super likely outcome
here, Kevin, I do think it speaks to the existential
stakes of this trial for Meta.
They absolutely have to win, otherwise it will look
like a completely different company.
Yes, so it's a little too soon to try to handicap how the case is going. It's only been a couple
of days. Mark Zuckerberg and other Meta executives are still testifying. Basically, we've just
gotten through opening statements and a little bit of testimony. But Casey, can we just review
the history of this case and how we got to this point?
Yeah. So this is a case that was filed all the way back in December 2020, during the first Trump administration.
The charge was that Meta had acted anti-competitively
in building a monopoly in the space
that the FTC calls personal social networking.
And that one of the ways that they had illegally maintained
it was by snapping up all their competitors
and preventing
them from growing into big independent companies. Most famously, of course, with Instagram, which
it bought in 2012, and with WhatsApp, which it bought two years later. So that was the original
charge. And the judge, Kevin, throws it out. And what's the reason for throwing it out? Well, the FTC had alleged that Facebook
had a monopoly in this market.
But when the judge read the complaint,
he was like, you kind of didn't really offer
much evidence for that.
You know, there was kind of a stray stat here or there.
But he felt like the FTC had been kind of lazy
in bringing the case.
So he said, I'm throwing this out.
If you want to refile it, you can.
But I'm not letting this thing go forward until then.
And then they refiled it.
They did after President Biden took office.
And the FTC got some new leadership.
So Lina Khan became the new head of the agency.
And the administration did decide to continue the case.
And they went through and they added in a bunch of kind of new stats and facts and figures,
trying to illustrate this idea that there really is something called a personal social networking
market and that Facebook, which would soon change its name to Meta, had a monopoly over it.
Right. So this is clearly a case that spans some kind of partisan gap. But
can you just remind us like what the basic crux
of the argument here?
What is the government's case that Meta has built
and maintained an illegal monopoly
by acquiring Instagram and WhatsApp?
Well, the crux of the case is that
according to the government,
there is a market for something called
personal social networking,
which consists of apps that are primarily intended
to help you keep up with friends and family.
And besides Facebook,
there are only four other apps in this market, Kevin.
What are those apps?
Those apps are Instagram, WhatsApp, Snapchat, and MeWe.
What is MeWe?
MeWe is an app that most people
have absolutely never heard of.
It is a little kind of Facebook alternative, basically.
It is not particularly popular.
MeWe might say that that is because Meta has been acting anti-competitively
and prevented them from growing.
And I suspect Meta would say that MeWe has not been growing
because it is just not that great of an app.
Yeah, it's a pee-wee.
It's a bit of a pee-wee app, is MeWe.
Yeah.
So I understand why market definitions are important
in cases like these, essentially,
because my understanding is, in order
to argue that someone has a monopoly over a market,
you first have to define what the market is.
And if you're the FTC and you're trying to make the case
that Meta has an illegal monopoly,
it can't just be
everything, every internet service, because clearly they would not have a monopoly over that. But as for this one specific area, which we are defining as personal social networking,
you really don't have much competition. Yes, that is the argument that they are making.
And frankly, Kevin, I just don't think it is that strong of an argument. And so as the first couple days of this trial have unfolded, it has been interesting to
see the government trying to sketch out that case, but in my opinion, struggling to do
so.
Well, let's get into the analysis in a little bit.
But first, let's just talk about what has been happening with the trial so far.
So my colleagues at the Times, Cecilia Kang, Mike Isaac, and David McCabe have been
covering this, including actually going to the trial. And it seems like so far, both
sides are just sort of laying out their opening arguments. The FTC is trying to make the case
that this is an illegal monopoly. They have all these emails and communications from various
meta executives going back many years talking about the competitors that they are trying to
sort of neutralize by either acquiring them or copying their features and trying to sort of
use that to make the case that this is a company that has had a very clear anti-competitive strategy
for many years. What has Meta been saying in their defense?
Meta has been saying essentially
have a monopoly over is fake and has been sort of invented solely for the purposes of
this trial.
On the stand, Mark Zuckerberg has been explaining how Meta's family of products has evolved
to include new features as the market has evolved. And really, Kevin, this
is just a story about TikTok. And it speaks to why the fact
that the federal government is generally so slow to bring
antitrust actions winds up hurting it in a case like this.
Because the thing is, I would argue, from roughly 2016 to 2021 or so,
Meta does kind of have a monopoly over what we think of as social networks, right? Snapchat gets
neutralized. Twitter gets neutralized. YouTube is kind of playing a different game. When it comes
to sending messages to friends, Meta really does kind of have the market locked up. But then along comes
TikTok, this app out of China, which has a completely different view of what a
social network could be. And the most important thing they decide is, we don't
actually care who your friends and family are. We're just going to show you
the coolest stuff we can find on our network. We're gonna personalize it to you,
and you can enjoy it,
and you don't even have to follow anything
if you don't want to.
We're just gonna show it.
And this winds up transforming the industry,
and Meta and everyone else has been chasing it ever since.
And so if you're the federal government, this is a problem,
because you're trying to solve
an essentially like 2016 era problem in 2025 when the world looks very different.
Yeah, I mean, the way I sometimes think about it is that Meta had a monopoly, but that
they failed to maintain the monopoly.
And the way that they failed to maintain the monopoly was that they didn't buy TikTok when
it was a much smaller but fast growing and popular app.
And actually it's stranger than that
because a big way that TikTok grew
was by buying a bunch of ads on Facebook and Facebook apps.
They essentially used Facebook to bootstrap
a new social network that ended up becoming
one of Facebook's biggest competitors.
So I think if you take the long view here,
it's not that Meta has this long-standing
monopoly that it still maintains today.
It's that they had one and they let it slip away
by failing to recognize the threat that TikTok posed.
And Zuckerberg has talked about this a bit
on the stand this week.
And what he's essentially said is,
look, TikTok just looked very different
than what we were used to competing against, because it was not really about
your friends and family. And so we did miss it for that reason.
But if you fast forward to 2025, and you open up TikTok, what is
TikTok trying to get you to do? Add all of your friends, send
them messages, right? All of these apps eventually wind up
turning into versions of each other. But again, it's creating
this problem for the
government, because how do you successfully make the argument that Meta still has maintained this
monopoly? Or are you somehow able to convince a judge that Meta should be penalized for maintaining
the monopoly that it had back when it did?
up so far in the trial. You know, every time there's a big antitrust lawsuit between a tech company and the government,
we get all these emails and these internal deliberations between executives talking about
various strategy things.
I always find that to be the most interesting and revelatory piece of any of these trials.
Absolutely.
How these people talk to each other, what kinds of things they're worried about, how
they're planning years in the future.
And one of the things that came up in this trial already
is that Mark Zuckerberg at one point argued
that they might actually want to split Instagram off
into a separate company.
This was back in 2018.
He considered spinning off Instagram
from the core Facebook app, basically reasoning
like the government might try to force us to do this anyway.
But also, he worried about something called the strategy tax
of continuing to own Instagram.
Can you explain what he meant by that?
Yeah, so this was really fascinating to me as well.
In 2018, Instagram was really bedeviling Mark Zuckerberg.
Instagram had been bought six years prior, it was still run by its founders, Kevin Systrom and Mike Krieger, and it had been afforded a level of independence that was really unusual for most acquisitions at big tech companies.
And he just becomes convinced that Instagram's growth and its cultural relevance is coming at the expense of Facebook, that Instagram seems younger, hipper, sexier, and Facebook is
starting to feel older and more fuddy-duddy.
And so he starts figuring out these ways of you know, like, if you share your Instagram
photo to Facebook, we're going to get rid of the little thing that says like shared
from Instagram on it, right.
So some of this has been reported before. Sarah Frier wrote a great book about this called
No Filter.
But this was truly new.
We did not know until this week that in 2018 Zuckerberg almost just pulled the ripcord
and said, let's get rid of this thing.
Yeah.
And his argument was interesting.
It wasn't just that he thought that Instagram was sort of cannibalizing Facebook's popularity.
It's that he appeared to think that it would be more valuable as an independent company. He talks in these exchanges about how
sometimes the companies that are spun off of big giants tend to be more valuable after the spin-offs
and how spinning off Instagram might actually enable it to become more valuable. So the
government is obviously trying to use this to say, look, this is such a good idea, breaking up Instagram and Facebook
that even Mark Zuckerberg thought it was a good idea
back in 2018.
Now I assume he would say something different today.
Yeah, and this gets at another antitrust argument
that has been made over the past decade or so
that has less grounding in legal precedent,
but is still favored by some, including Lina Khan.
And the basic idea is just,
there is such a thing as a company that is too big.
And one role that antitrust law can play
is by taking things that are very big
and making them a little bit smaller.
And a main reason to do that is exactly what you just said,
that the pieces that you break up will be more valuable
in the long run than if you sort of clump
everything together.
And by the way, I think you can make a good argument
that Instagram would be one of those things.
We have seen reporting that Instagram now makes up
around half of Meta's overall revenue.
So first of all, you can imagine how devastating
it would be to Meta if they just lost that
in one fell swoop.
But on the other hand, it does absolutely show
that this network could survive and thrive on its own, right?
And it'll be interesting to see if the government makes that case.
So one other historical tidbit that has come out so far in this trial that I thought was fascinating
and wanted to talk about with you was this exchange between Mark Zuckerberg and some other Facebook executives back in
2022 where Mark Zuckerberg pitched the idea
of deleting everyone's Facebook friends
and basically having everyone start over
with a fresh graph, a fresh slate of Facebook friends.
What was this idea?
Did it ever get close to fruition?
And why is it coming up in the context
of the antitrust trial?
So this was an idea that was floated in 2022,
according to some reporting that Alex Heath did for The Verge.
And the idea was that people weren't using Facebook
as much as they used to.
I would imagine particularly in the United States.
And so Mark Zuckerberg floats the idea,
why don't we just delete everyone's friends list
and make them start over?
And the idea was that this would make it feel cooler and more interesting if like you weren't
just hearing from a bunch of people you added 12 years ago and never talked to again?
Well, that's the thing is that I think one reason why Facebook started to feel stale
was that you had built this network of people that was just sort of like everyone you'd
ever made eye contact with.
And so it was a pretty boring thing to browse
because you didn't actually care
about a lot of the people you were seeing there.
So what if you just had to start over and say,
like, actually, like, I only care about these 20 people.
Yeah, I actually think this is a great idea.
Me too.
But why didn't it happen?
Well, so there's a moment in a Business Insider story
about this where the head of Facebook, Tom Alison,
apparently replied that he wasn't sure the idea was, quote,
viable given my understanding of how vital
the friend use case is, which like,
this is a man who is trying to say as delicately as possible
to his boss, one of the world's richest men,
you are out of your freaking mind.
Tom Alison is like, my understanding of Facebook
is that it's important that your friends list exists
because that's actually the entire point of Facebook.
But obviously you're the boss,
but I just kind of want to point that out.
Yes.
And so the idea doesn't get followed up on.
It reminds me of those like passages in books
about Elon Musk where he just like goes
into the Tesla factory
He's like, what if it went underwater like a submarine? And all the engineers have to be like, sir,
that's not possible given the laws of physics.
This is sort of like, there's these famous stories about, you know,
if you were an Amazon employee and you open up your email and there's some terrible story from a customer,
and it's been forwarded to you by Jeff Bezos
with just a single question mark. All of a sudden, that's all you do for the next month is figuring this out.
So this is one of those stories
But what's so funny, Kevin, is, well, on one hand we can agree this would probably be destructive to Facebook's business;
it does seem like a great idea that they should absolutely do. Totally. Yeah. All right, so
That is some of the spicy stuff that has come up at the trial so far.
What are you gonna be looking for
as this trial goes forward to figure out how it's gonna go?
Well, number one, your colleague Mike Isaac
reported this week that in one of the emails,
the former chief operating officer of Facebook,
Sheryl Sandberg, asked Mark Zuckerberg
how to play Settlers of Catan.
I need to know if she ever learned how
and if she got any good at it.
That seems a little beside the point, but.
Well, that's thing one.
Thing two though, Kevin, is can the government
actually make its case?
Look, I never liked to be on the side
of sounding like I'm carrying water
for a trillion dollar corporation,
but I also believe in governments making good,
solid arguments based on the facts.
And again, while I think that there was a great case
that Meta acted super anti-competitively
over the past decade,
I mean, that's just like settled fact as far as I'm concerned.
I think it's a lot harder to say they have a monopoly in a world where everyone is chasing after TikTok at a million miles an hour, right?
So can the government somehow convince us that no, no, no, TikTok is a very different thing, and that were it not for Meta's continued anti-competitive behavior,
MeWe would have a billion users?
If not, well, I don't know how the government's going to be able to prove its case.
Yeah. I mean, one thing that I'm looking at is the political dimension here,
because we know that Mark Zuckerberg has spent the last several months furiously
sucking up to Donald Trump and people in the Trump administration
trying to cast himself as a great ally and friend to the administration.
And we also now know that part of the reason
that he was doing that is to try to make
this antitrust case go away.
And in fact, late last month, according to some reporting
that came out recently, Mark Zuckerberg actually called
the FTC and offered to settle this case for $450 million,
which was a small fraction of the $30 billion the FTC was asking for.
And that he, according to this report, sounded confident that the Trump administration would
sort of back him up and that that was one reason that he was willing to make this low
ball offer.
Yeah, this is like as close as you could have come if you're Mark Zuckerberg to just giving
the FTC the finger, right? Like $450 million in the context of this trial is nothing
and he knew it was nothing.
And this was, I believe, him signaling to them,
your case sucks and I'm about to win.
Yeah, and there's some really interesting
backroom politics happening here
that I don't pretend to know the ins and outs of,
but maybe you know more about them.
Essentially, my impression from the outside
is that all of this flattery and kissing up
to the Trump administration was actually seeming to work.
Like the administration's posture toward Meta
was softening somewhat.
And then when some sort of hardcore MAGA folks
got wind of this, they sort of stepped in
and according to some reports at least,
had some conversations with the president that resulted in him stiffening his spine a bit toward
Meta. So explain what's going on here. Sure. Well, so there was a report in Semafor by Ben
Smith, a former Hard Fork guest, that Andrew Ferguson, who is now the chair of the FTC,
and Gail Slater, who is an assistant attorney general in charge of antitrust enforcement at the Justice Department, went to meet with
the president to try to say, hey, you have to please let this case go forward.
And they were apparently successful in that.
Prior to that, though, Kevin, as you note, Meta had transferred $26 million to the president,
at least, right: a million dollars for the inauguration, and $25 million to settle a lawsuit over the fact that they suspended him after January 6th. And that seemed to be working.
You know, the president and JD Vance started to criticize all of the European
fines and fees that were being levied against Facebook for various infractions.
It was really starting to become kind of a plank of his trade war that, hey,
we're not gonna let you fine our companies anymore, no matter what they did, right? So
all of this was music to Mark Zuckerberg's ears. And I suspect one reason why he might
have thought, you know, I bet I can get this antitrust case thrown out.
Yeah. And it turns out he couldn't. And now he's on trial and he's having to testify and
go through all this evidence. And, you know, my feeling on this is like, I think the FTC's
case is somewhat weak here for all the reasons
that you laid out.
But I think that it's good that Meta has
to have its day in court, that it can't just sort of buy
its way out of this antitrust action,
and that it will actually have to prove
that it did not have an illegal monopoly,
or at least to cast some reasonable doubt on that.
Absolutely.
And I think that no matter what happens in this case, Kevin, it has actually
had a really positive outcome for the market for consumer
apps in general.
What do you mean?
So if you look at the past five years or so,
look at some of the apps that have come
to prominence since then.
Look at a Telegram.
Look at a Substack.
Look at a Clubhouse, even, back during the heyday of that app. In the world of 2016,
I'm very confident that Facebook would have been trying to buy all of those apps, right?
But they couldn't anymore. They just knew that all of those would be a non-starter
And so what has happened? We have started to see other apps flourish.
There actually is oxygen in the market. Now, companies can come in and compete
and know that they're not about to immediately get swept off the chessboard via a huge offer from a Meta or a Google or one of the
other giants. Now, this has some problems for those companies,
right? If you raise billions of dollars in venture capital,
eventually those investors want to see a return. But if you're
just a consumer who wants to see competition in the market and not every
app you use on your phone owned by one of four companies, you've actually had a pretty
good go of it over the past few years.
Yeah, I think that's a good point.
I think aside from the merits or lack of merits of this particular case, what it makes me
just realize is that bringing antitrust enforcements against the big tech companies is just so challenging
because the underlying marketplace and the ecosystem
just changes so rapidly.
So the facts from 2016 or 2017 might not actually hold up
by the time your antitrust case gets to trial years later.
Things may just look very different.
And as you've laid out, I think this is a big problem for the FTC here.
The market for social networks, or even what Meta is, is very different now than it was
even a couple of years ago.
So do you think this has any implications for tech regulation writ large?
Well, on one hand, yes, I do think it means that when the FTC wants to bring an antitrust
action, they need to do it much more quickly than they did here.
But on the other hand, Kevin, every case is different. You know, I was thinking this week
as I was writing, are there any implications here for let's say the Google antitrust cases
that have gone on? And as I stopped and reflected, I thought, you know what, I still think that
Google does actually have a monopoly in search and search advertising. And I think the government
is right to go in there and try to break that up. So every big company is different. But in this one particular case of Meta, Kevin,
I do think that the tech world has sort of moved on in a way
that the FTC has not.
Yeah, it's almost like the thing that sort of made the case
irrelevant or at least not as urgent as it might have felt
a few years ago is not that Meta changed its ways.
It's just that social media as a category
became much less
delineated and much less relevant. And let me just say, I continue to be
surprised at how little a stir it made a couple years ago when Meta announced
that they were basically going to be moving on from the friends and family
model. That all of a sudden your feed was going to have a bunch of
recommendations from creators and celebrities and stuff that you had never
chosen to follow but that an algorithm thinks that you might like, right? Facebook really
did leave friends and family behind in a big way, at least as a priority. And the world just kind
of shrugged because the world had already moved on to TikTok. Totally. Well, more to say there,
but we will keep watching this trial. Casey, are you planning to show up at the courthouse?
I'm hoping to get called as a witness.
I wouldn't call you to the stand.
You're too unpredictable.
I'm gonna put the whole system on trial.
When we come back, a skeptical look at AI progress
from Princeton's Arvind Narayanan.
Casey, last week on the show,
we had a conversation with Daniel Kokotajlo from
the AI Futures Project about his new manifesto, AI 2027.
We got a lot of feedback on it.
Yeah. In fact, one of my friends messaged me and said,
''You know, that was a real bummer.''
Yes. Much to our surprise,
there was a new manifesto on the block this week.
This one was much more skeptical of the fast-takeoff scenario that Daniel and his co-authors
suggested.
It was written by two computer scientists at Princeton, and it is called AI as normal
technology.
Yeah.
And this really arrived at the right time for us, I think, Kevin, because for weeks,
if not months now, listeners have been writing in saying,
hey, we love hearing you guys talk about AI,
but we would really appreciate a slightly more skeptical
take on all of this.
Somebody who has not bought all the way into the idea
that society is going to be completely transformed by 2028.
And so when you and I read this piece from Arvind and Sayash,
we thought this might be the thing
that our listeners have been looking for.
Yes.
So this piece was written by Arvind Narayanan,
who's a professor of computer science at Princeton
and his co-author, Sayash Kapoor.
And in this piece, Arvind and Sayash
really lay out what they call an alternative vision of AI,
basically one that treats AI not as some looming
superintelligence that's going
to go rogue and take over for humanity, but as a type of technology like any other, like
electricity, like the internet, like the PC, things that take a period of years or even
decades to fully diffuse throughout society.
Yeah, they go through step by step.
What are the conditions inside organizations that prevent technology from spreading at
a faster pace?
Why is that same dynamic likely to unfold here?
And what does it mean that AI might not arrive in a super intelligent form for decades instead
of just a few months?
Totally.
And this is a very different view than we hear from the big AI labs in Silicon Valley.
A lot of the people we've talked to on this show believe in something more like a fast takeoff,
where you do start to get these recursively self-improving AI agents that can just sort
of build better and better AI systems. Arvind and Sayash really say, hold on a minute,
that's not how any of this works. Kevin, something that I really appreciate about this work is that Arvind and
Sayash are not the sort of skeptics who say that AI is all hype, that it isn't powerful,
that you can't do cool things with it today.
They also don't think that its capabilities are going to stop improving anytime soon.
These are not people who are in the sort of deep learning is hitting a wall camp.
They think it's going to get more powerful.
They just think that the implications of that
are much different than what has been suggested.
So to me, it seems like a much smarter,
more nuanced kind of AI skepticism
than the sort that I sometimes see online.
Yeah.
So to make his case that AI is a normal technology
and not some crazy superintelligence in the making,
let's bring in Arvind Narayanan.
Arvind Narayanan, welcome to Hard Fork.
Thank you.
It's great to be here.
Great to chat with you after so many years of reading your writing.
Well, let's start with the central thesis of this new piece
that you and Sayash Kapoor wrote together.
There's a lot in it.
It's very long, and we'll take time
to unpack some of the different claims that you all make.
But one of the core arguments you make
is that AI progress or the sort of fast takeoff scenario
that some folks, including former guests of this show, have envisioned,
is not going to happen because it's going to be bottlenecked
by this slower process of diffusion.
Basically, even if the labs are out there inventing
these AI models that can do
all kinds of amazing and useful things,
people and institutions are slower to change,
and so we won't really see
much dramatic transformation
in the coming years.
But to me, AI diffusion actually seems very fast
by historical standards.
ChatGPT is not even three years old.
It has something like 500 million users.
Something like 40% of US adults use generative AI, which
didn't really exist even a few years ago.
That just seems much faster to me
than the proliferation of earlier technologies
that you've written about.
So how do you square the growing popularity
and widespread usage of these apps
with the claim that it just is going
to take a long time for this stuff
to diffuse throughout society?
So I'm going to make a crazy sounding claim, but hear me out.
Our view is that it actually doesn't
seem like technology adoption is getting faster.
So we're well aware of that claim, about 40% of US adults
using generative AI.
We discussed that paper in our essay.
And I have no qualms with their methodology or numbers
or whatever.
But the way it's been interpreted
is it's only looking at the number of users without looking
at the distinction between someone who is heavily using it and relying on it for work
versus someone who used ChatGPT once a week to generate a limerick or something like that.
So the paper, to the author's credit, does get into this notion of intensity of use.
And when they look at that, it's something on the order of one hour per workweek, and
it translates to a fraction of a percentage point in productivity.
And that is actually not faster than PC adoption, for instance, going back 40 years.
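To make that back-of-envelope concrete, here is a minimal sketch of the implied arithmetic. The 40 percent adoption figure and the roughly one assisted hour per 40-hour workweek come from the conversation; the 25 percent time savings during assisted hours is an illustrative assumption, not a number from the conversation or the paper it cites.

```python
# Narayanan's point: broad adoption, low intensity of use, small aggregate effect.
adoption_share = 0.40            # share of US adults using generative AI (from the conversation)
assisted_hours_per_week = 1.0    # AI-assisted work hours per week (from the paper discussed)
workweek_hours = 40.0
assumed_speedup = 0.25           # assumed time saved during assisted hours (illustrative, not from the source)

aggregate_gain = adoption_share * (assisted_hours_per_week / workweek_hours) * assumed_speedup
print(f"Implied aggregate productivity gain: {aggregate_gain:.2%}")
# -> roughly 0.25%, i.e. "a fraction of a percentage point"
```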
I know you are not specifically responding with this piece to any other piece that's
come out, but I just can't help thinking about the conversation that we had last week with
Daniel Kokotajlo of the AI Futures Project,
who has just spent the past year putting together
this scenario of what he thinks the world
will look like over the next few years.
Part of his thesis is that we'll start to have
these autonomous coding agents that will automate
the work of AI research and development and will
essentially speed up
the iteration loop for
creating more and more powerful AI systems.
I'm curious what you make of
that thesis and where you think it breaks down.
Is it that you don't think that the systems will ever
become that good and capable of
that recursive self-improvement,
or is it that you think that will happen,
but it just won't matter much because coding is only one type of job and one type of occupation, we've got
all these other ones?
Like, where is the hole in that scenario?
Yeah, a lot of part two of our paper is devoted to this issue.
And there is an interesting linguistic choice, I think, that you made.
You alternatively referred to them as highly capable and highly powerful AI systems.
For us, those two are not equivalent. They are actually very different from each other.
We don't dispute, we completely agree with Dan, that improving AI capabilities is already rapid and could be further accelerated with the use of AI itself for AI development.
For us, that does not mean that these AI systems will become more powerful.
Power is not just a property of the AI system itself, it's a property both of the AI system
and the environment in which it is deployed. And that environment is something that we control,
and we think we can choose to make it so that we're not rapidly handing over increasing amounts
of control and autonomy to these AI systems,
and therefore not make them more powerful.
Now, there is an obvious counter-argument
that these things will make you so much more efficient
that people will have no choice but to do so.
We disagree.
We have a lot of analysis in the paper
for why it's actually just not going
to make business sense in most cases
to deploy AI systems in uncontrolled fashions
compared to the benefits that it will bring.
It might be worth pausing a bit and saying
a bit more of the argument you make for why that is.
What are some of the sort of natural breaks
that you see happening in organizations
that prevent technology from spreading faster
than it does today?
Sure.
This is where we think we can learn a lot
from past technologies.
When we look at the history of automobiles, for instance,
for the first several decades,
vehicle safety was not even considered a responsibility of manufacturers.
It was entirely on the user.
And then there was a mindset shift, right?
And once safety began to be seen as a responsibility of manufacturers,
it no longer made business sense for them
to develop cars with very poor safety engineering
because whenever those cars caused accidents,
there would be a negative PR consequence for the car company.
So this mindset shift realigned incentives
so that safety becomes part of what the manufacturer here
is competing on.
And that kind of mindset shift, I think, is important for AI.
And that is something we can credit the AI safety movement
with.
Safety is so indelibly associated with AI
in most people's minds.
So that's a good thing.
So that's the first thing.
Second, once you're in the situation
where the negative safety consequences are easily
attributable to the party who is responsible for it, you can have regulation that sets
a standard that's going to be much more feasible than a scenario where something bad happens
and there's no way to attribute it to who is the responsible party for it.
So those are things we should be working on, right?
How to make responsibility clearer,
who is responsible for what.
But those are things we can do.
And if we get those things right,
I don't think it's the case that companies will be forced to
deploy AI in unsupervised, uncontrolled manner.
Yeah, I mean, I see your point there.
And I think there is, you know,
you make a really instructive example in
your paper about the difference between Waymo and Cruise,
two self-driving car companies,
one of which has a very strong safety record Waymo,
the other of which had a high-profile incident in
San Francisco and was forced to pull
its robo-taxis out of the city and essentially shut down,
which was Cruz.
And you sort of extend that to the logic of AI more generally,
where you say that the companies that have the safe products
will sort of out-compete the companies with the unsafe
products in the market.
And I would love to believe that there is sort of a self-correcting
mechanism in the market that will filter out
all the unsafe products.
But what I observe is that sometimes, as the technology is getting more capable, the safety
standards are actually moving in the other direction.
So just a few days ago, the FT reported that OpenAI is now giving its safety testers less
time and fewer resources than it used to before releasing their models, in part because the
pressure to get these models out the door and stay ahead of their competition
has become so intense.
So I guess given the safety standards
that we're seeing now in the industry,
what makes you confident that the market
will sort of take care of these safety concerns
before these unsafe products are put into people's hands?
I'm not confident, and that's not something
we say in the paper.
We don't use the term self-correcting.
We don't think markets will self-correct.
It will take constant work, I think, from society.
And obviously, journalism plays a big role here,
and regulators.
We don't try to minimize the role for regulation either.
And yes, what we're seeing with some of the safety windows
decreasing is a problem.
Definitely with you on that.
But I think that is something that we
have the agency to change.
The fact that safety testing happens at all
before models are released, that is something very different
with AI than with past technologies.
That is something we accomplished together,
the AI safety community and everyone else
who had a stake in this, to make this the expected
practice for companies. And yes, it's true that recently things have been
trending in a negative direction. It's important to change that. But one last
thing I want to say is that while I agree with Kevin's concerns, it's not
maybe quite as concerning to me as some people would see it because for us, a lot
of the safety concerns come from the deployment phase as opposed to the
development phase. And so while yes, there is a big responsibility for model
developers, a lot of the responsibility has to be shared by deployers so that we
have defense in depth, we have multiple parties who are responsible for
ensuring good safety outcomes. And I think right now, the balance of
attention is too much on the developers too little on the deployers. And I think
that should change.
Let's talk about another aspect of safety, which is the idea of alignment, right? This idea in AI
development that we should build systems that adhere to human values, and that if we don't do
that, there is some potential that eventually they will go rogue and wreak havoc. You are very
skeptical about the current approach
to model alignment.
Why is that?
Sure.
So here's kind of the causal chain,
that AI systems will become more and more capable.
And recall that gap between capability and power,
as a result of being more and more capable,
they will become more and more powerful.
And that distinction has been elided in a lot
of the alignment literature.
And once you have these super powerful systems, we have to ensure that they're aligned with
human values, otherwise, you know, they're going to be in control of whole economies
or critical infrastructure or whatever.
And if they're not aligned, they can go rogue and they can have catastrophic consequences
for humanity.
Our point is that if you even get to the stage where alignment becomes super important,
you've already lost. So in a sense, we want a stricter safety standard in a way than a lot of
the alignment folks do. We don't think one should get to the superpower stage. And if you get to
that stage, then tinkering with these technical aspects of AI systems is a fool's errand. It's
just not going to work. Where we need to put the brakes is between those increases
in capabilities and saying, oh, AI
is doing better than humans now.
We don't need human supervision.
We're going to put AI in charge of all these things.
And that is something where we do
think we can exercise agency.
Again, that's a prediction.
We can't be 100% confident of that.
We outline in detail in the paper
why we think we can do that, but certainly it
remains to be seen.
I just want to make sure I understand the claim.
Because right now, the leading AI labs
are all trying to give their models more agency, more
autonomy, to allow them to do longer sequences of tasks
without requiring a human to intervene.
Their goal, many of them, is to build
these sort of
fully autonomous drop-in remote workers
that you could hire at your company
and tell them to go do something
and then come back a month later
and it's done or a week later.
Are you saying that that is technologically impossible
or implausible, or are you just saying that it's a bad idea
and we should stop these companies
from giving
their models more autonomy without human intervention?
So it's a bit of both.
We're not saying it's technologically impossible,
but we think the timelines are going
to be much, much longer than the AI developers are claiming.
To be clear, I agree with you, Kevin.
You wrote recently in your "Feeling the AGI" piece
that within perhaps a couple of years,
AI companies are going to start declaring
that they have built AGI.
However, we don't think what they're
going to choose to call AGI based on their pronouncements
so far is the kind of AI that will actually
be able to replace human workers across a whole spectrum
of tasks in a meaningful way.
So first of all, our claim is that it's
going to take a long time.
It's going to take a feedback loop of learning
from experience in real world contexts
to get to actual drop in replacements for human workers,
if you will.
But our second claim is that even if and when that
is achieved for companies to put that out there
with no supervision would be a very bad idea.
We do think there are market incentives against that,
but there also needs to be regulation.
One example of something that we suggest is, for instance,
the idea of AI owning wealth.
That is one way in which AI could accumulate
more power and control.
Those are all avenues that we have of simple interventions,
simply banning AI owning wealth, for instance,
that will ensure that humans are forced to be in the loop,
forced to be in control at critical stages of the deployment of AI systems.
Yeah, we simply must not give ChatGPT an allowance. I will not hear of it in this house.
Now, you brought up, Arvind, Kevin's recent piece in which he argued that AGI
is imminent. How wrong did you think that piece was?
I mean, so first of all, I agree with him
that companies are going to declare this AGI.
And I also agree with Kevin that some people at least
are not paying as much attention to this as they maybe should.
With all that said, there's also an information asymmetry
from the other side.
A lot of the time when AI developers claim that AI can
replace this or that job,
they're doing so with a very narrow conception of what that job actually involves.
The domain experts in that job have a much better idea.
A lot of the time, ignoring AI,
I do think, is rational.
There is a gap in both directions.
What I wish for is better mutual understanding, right?
Better understanding from the public
of where AI capabilities currently are,
but also better understanding from AI developers
of the real knowledge that everyday people have
of their various different contexts
through which they can learn
what the actual limitations of AI systems are.
Yeah. One of the themes that you get at in the paper is that this focus on catastrophic risk
that the AI safety community often emphasizes sort of risks taking the focus off of
nearer term risks. But those risks, as you describe them, include the entrenchment of bias and discrimination,
massive job loss, increasing inequality, concentration of power,
democratic backsliding, mass surveillance. And I guess I'm just really struck that even you,
as somebody who has really been leading the charge saying that a lot of this AI stuff is
overhyped, are also saying, my God, look at these terrible risks that are
baked into the potential of these systems.
One small clarification.
We do think that some of the interventions targeted against superintelligence risks could
actually worsen these other kinds of risks that we care more about.
We're not making a distraction argument.
That's a very specific argument that I've made in the past
on Twitter, but not in any of my more formal writing.
I don't make that argument anymore.
We can certainly worry about multiple kinds of risks.
But our real concern is that if we're so worried
about super intelligence that we decide that the answer
to it is a world authoritarian government,
then that is going to worsen all of these other risks.
And yes, so look, I mean, the second sentence of the paper
is that normal technology isn't meant to underplay this;
even electricity and the internet are normal technologies
in our conception.
And when we look at the past history
of general purpose technologies,
there have always been periods of societal destabilization
as a result of their deployment, which you could call rapid
even if, in our view, it is decades long,
and it's hard for societies to adjust.
Most famously, the Industrial Revolution led to a mass migration of workers to cities where
they lived in crowded tenements.
Worker safety was horrendous.
There was so much child labor.
And it was as a result of those horrors that the modern labor movements came about, right?
And so eventually the Industrial Revolution lifted
living standards for everybody,
but not in the first few decades.
All of that is very plausible with AI.
We're not necessarily saying that that would happen,
but I do think those are the kinds of things
we should be thinking about and trying to forestall.
Let me move to an area where I think I really do
disagree with you, but I wanna see if I can understand
your argument a little bit better.
So you write about the idea of arms races in this piece.
And one of the things you say is that there is no
straightforward reason to expect arms races
between countries over AI.
You also say in the piece that you want to exclude
from the discussion anything about the military.
But I didn't understand this because to me,
the military is typically the source of the arms race.
And as these systems gain more capabilities,
it is gonna lead the United States and its adversaries
to try to build systems faster, more capably, possibly less
safely in hopes of getting one over on their adversary.
So help me understand what you are arguing about arms races
and why you're leaving the military out of it.
Yeah, so to be clear, when we say we exclude the military,
we're just straightforwardly admitting
that that could be an Achilles heel of the whole framework.
And it is something we're researching.
We don't think that military arms races are likely,
but it's not yet something we understand well enough
to confidently put into the paper.
We're going to be exploring that in follow-up.
So that's what we mean by excluding military AI.
But even outside the military, there
are lots of arms races that have been proposed, right?
So for instance, one way in which this has been
envisioned is, let's say, our court system or any other important application of decision making.
Maybe countries will find that it's just so much more efficient and effective, let's say, to put AI
in charge of making all decisions about criminal justice. So that's a kind of metaphorical arms
race you can imagine where
there is a push to develop more and more powerful AI systems with less and less oversight. So it is
that particular concern we're responding to. And our point of view there is, first of all,
this is not the kind of thing at which you can perform at a superhuman level. The limitations
are inherent and not related to computational capabilities.
And even if it is somewhat more efficient,
you can save on paying for the judiciary, for instance,
I think the consequences for civil liberties, et cetera,
are going to be domestically felt,
and therefore the local citizens will rise up
and protest against those kinds
of irresponsible AI deployments.
So it doesn't matter if it gives you
an advantage against another country in some abstract sense.
It is not something that people will accept or should accept.
Yeah, I find that such an interesting argument
and provocative because it really
flies against some of my priors here, which
are that there are actually powerful market forces
and demand forces pushing people toward less restricted models.
I observed that in some of
the most recent models they've released,
OpenAI has relaxed some of
the rules around what you can generate,
like images of public figures.
They're reportedly exploring more
erotic role play that you can use their models for.
There does seem to be this market force,
at least in the US,
that is pushing people toward
these less restricted uses of the AI models.
Or how about Kevin,
when you went to the Paris AI Action Summit,
which had its roots in the idea of building safer AI,
but you got there and it was basically a trade show,
right? And it was the French government saying,
hey, don't count France out of the AI race, here we come.
Right, so to me, I feel like we're sort of already seeing
this competitive dynamic play out.
Yeah, but I wanna see what that looks like
from your perspective, Arvind,
because I think, you know, my perspective is that
we are already in something of an arms race.
We have these export controls,
we have people at the highest levels of government
in both the United States and China
saying that this is like a definitive conflict
of the next few decades.
So what in your mind lowers the stakes here
or lowers the temperatures or takes us out
of the category of an arms race?
So this is one of the other big ideas in the paper.
We're borrowing this from political scientist Jeffrey
Ding, and we're adapting it a little bit. And his big idea is
that geopolitical advantage comes less from the innovation in
technology and more from the diffusion of that technology
throughout the economy, the public sector, throughout
productive sectors. And he says that America's advantage over
China right now is not primarily because of our capacity to out-innovate, and innovations travel between countries very easily.
And we've seen that over and over.
And in any case, the innovation advantage in AI is like a few months at best.
The real advantage is in diffusion, diffusion again being the bottleneck
step that takes several decades.
And that's where the advantage really is.
So we agree and we talk about the implications of that.
But we're saying that the same thing also applies to risks.
The risks for us are key not to the development
of capabilities, but to the decision
to put those models into consequential decision-making
systems.
And while it is true that there is an arms race going on
in development, we're not seeing an arms
race in the consequential deployment
of these AI systems.
And just to make that point very concrete, Kevin,
I'm going to play devil's advocate a little bit here.
I do think it's, to be clear, I do
think it's bad if model developers skip safety
testing altogether.
But indulge me in this thought experiment.
Let's say model developers start releasing models
with absolutely no safeguards, whatever.
So what?
Let's talk about it.
Yeah, what do you think happens next, Kevin?
The classic answer from the perspective of an AI safety
person would be that a very bad person or group
gets their hands on a model that is unrestricted
and uses it to, say, create a novel pathogen or a bio weapon? They can do that today.
We have state-of-the-art models that have been released with open weights,
and sure, they might have some safeguards,
but those are actually trivial to disable,
so that capability exists today.
I think if that were the thing that's going to lead to catastrophe,
you know, we would all be dead already.
And that's a risk that existed even before AI.
A lot of the ways in which AI can help create pathogens
is based on information that's also
available on the internet.
So this is something we should have been acting on all along.
And we have been finding other ways to decrease that risk.
One could argue that maybe those steps are not enough.
But it is hard for me to see this as an AI problem,
as opposed to
just an existing civilizational risk.
I find that a bit glib.
It makes me think of the debate over deepfakes and synthetic media.
Like it was always true, I shouldn't say always, but you know, for the past, I don't know,
30 years it's been true that you could take a photo of me and manipulate it into some
like non-consensual nude image of myself, right?
The capability
exists and, you know, until recently it hasn't even been illegal to do that in most places, but now you can
do it instantaneously, right? And so part of the danger of AI is not does the capability exist,
it is sort of how easy does it make the bad thing? And my guess is that what Kevin is worried about
is that a future open weights model
is going to make it much easier for somebody
to make a novel pathogen than the current state of the art,
where you have to use the Google search
engine, which famously has been getting worse for some time
now.
So yeah, let me, if I may, quickly respond to that.
So I completely agree with you on the deepfakes concern.
And I want to come back to that, especially the nudification apps.
Every time we're asked about what we actually
worry about with AI, that's absolutely
at the top of our list.
And I think the way in which policymakers
have been so slow to react to that has been shameful.
I'll come back to that in a second.
But where I disagree with you is in taking that
as a model for how we should think about bio risk.
Because with the nudification apps, friction matters a lot.
If you decrease the friction,
if you make it a little bit easier to access these apps,
these high school kids are gonna use them
and there's been an epidemic of hundreds of thousands
of these kids using them.
And it's a real problem, it's a huge problem.
Bio-risk is not like that.
It's not something that a bored teenager does.
That's something where someone's trying to destroy
the world or whatever.
For that kind of adversary, the friction is irrelevant.
If they're actually so keen on getting to that outcome,
if it takes three extra clicks to get to something,
that's not going to stop them.
So these are, for us, two very different kinds of risks.
For deepfakes, we do think we should be putting more friction
in place.
Simple things like disallowing these apps on the app store,
not allowing social media companies
to profit off of advertisements for these apps.
These are all things that it boggles my mind
that we have not done yet.
One of the things that I think was so useful about the scenario
that Daniel Kokotajlo and his coworkers sketched out
in AI 2027 is that it just made it very vivid and visceral
for people to try to imagine what the near future could look like if they're right.
Now, obviously, people will have many disagreements or quibbles
with specific things that they project,
but it was at least a scenario.
I'm wondering if you could paint a picture for us of what
the world of AI as normal technology will look like a few years from now.
Obviously, you don't think that AI capabilities
have hit a wall, so we will continue to get
some new AI capabilities.
Those capabilities will not yet be diffused throughout society,
but what does the world look like in 2027 to you?
I mean, for me, it's a longer time scale, right?
Maybe I'll talk about the world 10 or 20 years from now.
The world of 2027 for us is still pretty much
the world we're in
today. The capabilities will have increased a little bit and the work hours of people using AI
are going to have increased from, I don't know, three hours per week to five hours per week or
something like that. I might be off with the numbers, but I think qualitatively the world
is not going to be different. But, you know, a decade or two from now, I do think qualitatively
the world will be different.
And this is still work in progress in our minds.
We'll expand it in the book version of this paper.
But one of the things we do say is
that the nature of cognitive jobs
is going to change dramatically.
And we draw an analogy to the Industrial Revolution.
Before the Industrial Revolution,
most jobs were manual.
And eventually, most manual jobs got automated.
In fact, back then, a lot of what we do
wouldn't even have seemed like work.
Work meant physical labor.
That was the definition of work, right?
So the definition of work fundamentally changed
at one point in time.
We do think the definition of work
is going to fundamentally change again.
We do think there will come a point where,
just in terms of capabilities, not power,
AI systems will be capable of doing,
or at least mediating, a lot of the cognitive work
that we do today.
And because we think it's so important
that we don't hand over power to these AI systems,
and because we think people and companies will recognize that, a lot of what it means to do a job will be supervising
those AI systems. It takes a surprising amount of effort, I think, to communicate
what we want out of a particular task or a project to, let's say, a human
contractor, and we think that the same thing is going to happen with AI. So a
lot of what's involved in jobs is just specifying the task. And a lot of what is going to be involved
is monitoring of AI and ensuring that it's not running amok.
So that's one kind of prediction that we make.
That's, of course, far from a complete description.
But I think that's already radical enough,
so I'll stop there.
Yeah, I think that a casual observer of your work
in AI snake oil and in this new piece about how AI is normal technology,
could come away from your work with
the impression that they don't have to think about AI.
Because it's overhyped,
it can't actually do anything,
and it's not going to arrive anytime soon in your life.
I know that that is not what you're
saying because I've read your papers,
but I think that is a view that many people have
about AI right now,
is that it's just kind of being hyped up
by the sort of circus masters of Silicon Valley,
and it's all smoke and mirrors.
And if you really dig, you know,
an inch below the official announcements,
what you find is that it's all fake
and people don't have to worry about it. And I just kind of wonder
how it feels to be an AI skeptic in a landscape like that because I
worry that these stories that we're telling people about AI being
overhyped and not all that powerful are actually lulling them into a false sense of security.
I was recently reading an article from Scientific American that they published in 1940 called
Don't Worry, It Can't Happen, which was all about how the leading physicists and scientists
of the day had looked into this question of could you do nuclear fission?
Could you split the atom?
Could you make an atomic bomb?
And had basically concluded that this was impossible and literally told readers that
they should not be losing any sleep over this possibility.
Did Gary Marcus write that?
I don't think he was born yet.
But this was sort of the scientific consensus of the people outside the Manhattan Project
and similar efforts: that this was just impossible.
And as a result, I think people were afraid and scared and
surprised when it emerged that we actually
did have atomic bombs in the making.
I worry about something similar happening with AI today,
where we are just telling people over and over again,
like you don't have to think about this,
you don't have to worry about it,
it's not of immediate importance to you.
And I think if it does show up in people's lives
in a way that is shocking or unpleasant,
I just worry that they're going to be more surprised
than they need to be.
Do you worry at all about that?
So there are so many things to unpack there.
I think I've been surprised by how often people
have opinions about my work having only read the title,
not even the subtitle of my book.
The subtitle of AI Snake Oil is what AI can do, what it can't,
and how to tell the difference.
The point is not all AI is useless.
And just the amount of hate mail I've gotten where people don't
recognize this is interesting, but I guess that's what the internet does.
And I think part of it is there are these narratives, right?
There's the utopia narrative, there's the dystopia narrative,
and there's the "it's all hype, there's nothing to see here" narrative.
And it's so tempting to box people
into one of those narratives.
And we don't fit into any of those, I think,
neither me nor my coauthor, Sayash Kapoor.
And it's interesting, you know, being on social media,
seeing the amount of audience capture.
When I write something skeptical of AI, you know, it gets
10 times or 100 times more engagement than when I write something pointing out improvements in AI
capabilities, for instance. So there's a strong pull. And look, I'm doing what I can, which is
not succumb to that audience capture and not give people only the one half of the story that they
want to hear. But it's just a structural problem with our information environment,
for which I can't, as an individual, I think, be responsible.
At the same time, let me also say that I think part of the blame here
has to lie with the hype, because these products are being hyped up so much
that when people believe the hype for a bit, try things out,
and find that it's not what it's been hyped up
to be, it's just very tempting to flip all the way to the other side of things. So yeah, I mean,
I wish for a more productive discourse, but I think that's a shared responsibility for all of us,
including the companies who are really setting the direction of the discourse.
All right. Arvind, thank you so much. Thank you, Arvind. Thank you for being a good sport.
Thank you.
This has been great.
Appreciate it.
When we come back, hold onto your hat.
It's time for Hat GPT.
Well, Casey, it's time to pass the hat.
Let's get the hat.
We are playing Hat GPT today.
That is of course our game where we pick tech headlines out of a hat and we discuss them,
riff on them, analyze them, and then one of us says,
stop generating.
Or sometimes we say it in unison.
All right, Kevin, would you like to draw
the first slip from the hat,
or do you just wanna pick up the one off the ground
that I accidentally dropped?
I'll pick up the ground one.
Okay, great.
This is more like ground GPT.
Okay.
Well, you know, with AI systems,
it's very important to ground them.
That's true.
Okay.
That's an AI joke.
First item from the hat slash ground.
Mark Zuckerberg, Elon Musk,
mocked in hacked crosswalk recordings in Silicon Valley.
So this one came to us from the San Francisco Chronicle.
It also came to us via a listener to this show,
Hannah Henderson, who wrote in about something
that had happened
down on the peninsula, that's the south part
of the Bay Area, where apparently crosswalk signals
on several streets were hacked and the messages
were replaced with messages mocking Meta CEO Mark Zuckerberg
and Elon Musk. Videos circulated on social media,
capturing the satirical messages,
which were broadcast when pedestrians
pressed the crosswalk buttons at intersections.
Now, Casey, did you hear these?
I haven't, Kevin, but I believe we have a clip
so that we can hear them right now.
Yes, play the clip.
What?
Hi, this is Mark Zuckerberg.
The real ones call me the Zuck.
You know, it's normal to feel uncomfortable or even violated as we forcefully insert AI into every facet of your
conscious experience. And I just want to assure you, you don't need to worry because there's
absolutely nothing you can do to stop it. Anyway, see ya.
Wow. Kevin, I've heard of jaywalking, but jay-mocking?
But we really shouldn't joke about this. I heard that after he learned of this,
Elon Musk had DOGE shut down
the federal Department of Crosswalks.
So, it could be a really bad outcome here.
Yes, so I assume this was just some flaw
in the security features of these systems
and that it will be quickly repaired.
But I do think it points to a new potential source
of revenue that I've been curious about for years,
which is we should just have way more sponsorship of stuff.
You know, like how you can sponsor a highway
and you like clean it up and pay some money
and you get your company's name on the little sign
next to the highway.
I think we should allow that for everything.
Telephone poles, crosswalks,
every piece of public infrastructure
you should be able to sponsor.
And if you hit the button at the crosswalk,
it should say, I'm Jack Black,
go see the Minecraft movie. Now cross the street.
I'm trying to think if you've had a worse idea than this,
but I'm coming up empty.
Stop generating.
Okay.
All right.
This next story, Kevin.
Cuomo announces new housing plan with a hint of ChatGPT.
The local New York news site, Hellgate,
first reported that the former New York governor
and current New York City mayoral candidate,
Andrew Cuomo, had released a 29-page housing plan
that featured several nonsensical statements
and a ChatGPT-generated
link to a news article, raising questions about whether the campaign had
used ChatGPT to write its housing plan. Yes, I love this story, because it is not
only a case of AI usage run amok in the government, but it is also a case of
people being caught out by the UTM code. Mm-hmm.
Because, Casey, do you know what a UTM code is?
A UTM code is a piece of text that you can append to the end of a URL that often will
tell you, for example, what site is referring you to the next site.
Exactly.
So my understanding of how this all went down is that Cuomo put up his housing plan and
reporters started going through it. And one of these reporters at Hellgate noticed that on one of the links
in the plan, there was the little UTM code at the end that said that the source of that
article had been from chatgpt.com. That's how they were able to start piecing together
the fact that maybe the Cuomo campaign had had ChatGPT help them write this.
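To make that concrete, here's a minimal sketch of how a referral parameter like that can be read out of a URL, using Python's standard library. The article URL and its utm_source value below are hypothetical illustrations, not the actual link from the housing plan.

from urllib.parse import urlparse, parse_qs

# A hypothetical link of the kind a reporter might find in a document.
url = "https://example.com/news/article?utm_source=chatgpt.com"

# Split the URL into components and parse its query string into a dict.
params = parse_qs(urlparse(url).query)

# utm_source, when present, names the site that referred the visitor.
print(params.get("utm_source", ["unknown"])[0])  # prints: chatgpt.com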
Well, that is some really impressive sleuthing,
but Kevin, I think there was actually an easier way
to realize that ChatGPT had written this housing plan,
which is that it had six fingers.
Now, some follow-up reporting from the Times'
Dana Rubinstein revealed that the report was written up
by policy advisor Paul Francis,
who said that he relies on voice recognition software
after having had his left arm amputated in 2012.
He told the Times,
"'It's very hard to type with one hand, so I dictate.'"
And what happens when you dictate
is that sometimes things get garbled.
He acknowledged using ChatGPT to do research,
but said, quote, "'It clearly was not a writing tool.'"
A campaign spokesman argued that the housing plan wasn't written
by ChatGPT, saying if it was written by ChatGPT, we wouldn't
have had the errors, which I love. It's like the new excuse
for "I didn't use ChatGPT" is like, look at all the errors.
This has to have been done by a human because the AI is smarter
than that. Yeah, well, anyway you slice it, a really hard story
for all the Cuomosexuals still out there.
Remember Cuomosexuals from the pandemic?
Stop generating! That's triggering my PTSD.
Okay, next story. Oh, this is a good one.
DolphinGemma: How Google AI is helping decode dolphin communication. On Monday,
Google announced on their blog
that in collaboration with researchers at Georgia Tech
and the field research of the Wild Dolphin Project,
they were announcing progress on DolphinGemma,
a foundational AI model trained to learn the structure
of dolphin vocalizations
and generate novel dolphin-like sound sequences.
Casey, have you used DolphinGemma
to communicate with any dolphins yet?
You know, I haven't, and here's what,
I don't think it's any of my business
what the dolphins are saying.
How would you feel if some alien civilization
just came in and decoded your language
and analyzed all your thoughts?
I don't think you'd like it very much.
So maybe some of these Google researchers
ought to mind their own business, Kevin.
I like this.
I like the use of AI to communicate with animals.
Seems like a very good use of this technology.
It's also just wild that using the same techniques
that got you these large language models
can also maybe help us start to decode
the utterances of other species.
And I actually did use DolphinGemma
to talk with a dolphin the other day.
You know what it told me?
What did it tell you?
It said, I've been trying to reach you
about your car's extended warranty.
And I said, that's enough out of you.
Here's the best part about releasing a tool
and telling people that it's gonna help
decode dolphin language.
If you're wrong, how are they gonna know?
You'd be like, I don't think a dolphin would say that.
All right, stop generating.
Okay, you're on the next one.
Okay, next one.
Oh my God, truly my favorite story of the entire week.
The US Secretary of Education referred to AI as A1,
like the steak sauce.
From TechCrunch, US Secretary of Education
and former World Wrestling Entertainment Executive, Linda McMahon,
attended the ASU+GSV Summit this week,
where experts in education and technology gathered
to discuss how AI will impact learning.
While speaking on a panel about AI and the workforce,
McMahon repeatedly referred to AI as A1,
like the steak sauce.
Now, Kevin, you have to admit,
that's a very rare way of pronouncing AI.
Well done.
I thought it was only medium.
Now, do we actually have a clip of Linda saying this?
Let's play it.
I color Linda.
I think it was a letter or report that I heard this morning.
I wish I could remember the source,
but that there is a school system
that's gonna start making sure that first graders
or even pre-Ks have A1 teaching every year starting
that far down in the grades.
And that's just a wonderful thing.
Kids are sponges.
They just absorb everything.
And so it wasn't all that long ago that it's, we're going to have internet in our schools.
Woo.
Now, okay, let's see A1 and how can that be helpful?
How can it be helpful in one on one?
Now, if your child is absorbing A1,
you may want to take them to the hospital.
Casey, we are so cooked.
This is the secretary of education saying
that we need more A1 in our schools.
I did love the A1 steak sauce brand's
corporate response. Usually not a big fan
of the corporate
internet personalities, but this one, they did actually post an Instagram post,
the A1 Steak Sauce company, and they said,
"We agree, it's best to start them early."
Yeah, big day for them.
I can't wait to find out that A1 donated $25 million
to the inauguration before this little, quote, "accident."
But look, this is the sort of story that makes you wonder,
maybe we should actually have a Department of Education.
It really just underscores the stakes of AI.
Really high stakes conversation.
All right, stop generating.
Okay.
Next one, how Japan built a 3D printed train station in six hours.
This one comes to us from the New York Times.
Apparently in six hours recently,
workers in rural Japan built an entirely new train station.
This station will replace a significantly bigger
wooden structure that has served commuters
in this remote community for over 75 years.
The new station's components were 3D-printed off-site over a seven-day period and then assembled.
Just over 100 square feet are the measurements of this new station.
It is expected to be open for use in July.
Casey, what do you think of the 3D printed train station in Japan that was built in just
six hours?
Well, if they built it in six hours, why do I have to wait until July to use it?
That's my main question.
What do you think?
I like this.
I like the idea of 3D printing housing.
Obviously, our friends and colleagues, Ezra Klein and Derek Thompson have their new Abundance
book out talking about how we need to build new houses in this country.
We should say the, you know,
the sort of 3D printing housing thing has not been
totally successful in America, but the technology is there.
I think we should start 3D printing houses.
I think we should 3D print ourselves a new studio.
Sure, I mean, it's worth a shot.
Here's what I like about this story.
You know, you read about countries doing things like this,
and I feel like this is the sort of thing
that Doge is convincing us that it's trying to do.
You know, it's like, we're gonna make things so efficient,
you know, and in my view of that, they would be like,
oh, you're gonna like build new public infrastructure
really quickly, but instead it's just like,
well, we've replaced your local social security office
with a phone number that no one answers.
Yeah. Yeah.
Anyway, good job, Japan.
Good job, Japan.
Advantage, Japan.
Yeah.
Is there one left or is that it?
There's one more.
All right, Kevin.
And now the final slip in the hat.
One giant stunt for womankind.
This is such a fun essay by Amanda Hess in the Times.
I recommend everyone go read it.
This was of course about Blue Origin's
all-women spaceflight this week, when some very famous women very briefly
went up into the sort of, I guess you'd call it like the
outer reaches of the atmosphere. And Amanda writes, quote, if an
all-women spaceflight were chartered by, say, NASA, it might
represent the culmination of many decades of serious
investment in female astronauts. An all-women Blue Origin spaceflight signifies only that several women have amassed the social
capital to be friends with Lauren Sanchez. Lauren Sanchez, of course, is Jeff Bezos' fiancée,
Jeff Bezos, the guy that created Blue Origin. So Kevin, what did you make of this spaceflight?
I mean, I think it's an amazing publicity stunt for Blue Origin, which I had not thought of for more than about five minutes until this week
when Katy Perry and Gayle King and all of these famous women
were thrust up into orbit in one of these Blue Origin rockets
and they got some good publicity out of it.
What did you make of it?
Well, you know, as a gay man,
I'm always very interested in what Katy Perry is doing.
And so when I found out she was going to space, I thought, this could be good.
And indeed, Kevin, when she got up, she told us in advance that she was going to make an announcement.
And then she got up into space and the live stream cut out.
And so I think we're still trying to find out what that announcement is.
But we may actually have a clip of exactly when the live stream cut out.
Play it.
And Katy Perry did say that she was going to sing in space.
I'm waiting for it.
I'm waiting for it.
One minute warning.
One minute warning.
So that is Capcom indicating a one minute warning for our astronauts to take in those
last few before they get buckled back into the seats. And it's the first thing I've ever seen.
Now, according to several other passengers on the trip,
Katy Perry did indeed break out into song,
singing "What a Wonderful World" before returning,
and when she got back to Earth, she kissed the ground.
What did you make of that?
Mm, I'm still reeling from that clip. That's the best Katy Perry has sounded in years.
Yeah.
She may just want to release that.
Forget "What a Wonderful World." What did we just hear?
That was great. Very avant-garde.
Now, Casey, would you go to space if Jeff Bezos
or Elon Musk offered you a spot on one of their rockets?
No, if Elon Musk offers you a spot on a rocket
that's giving Bond supervillain, I will not be. I'm barely getting into Teslas at this point.
How about you?
What about Blue Origin?
Would you take one of their flights?
I mean, under the right circumstances,
I'm space curious.
You have to admit, it would be a great story to tell.
Even if you only go up for a few minutes,
it could be a great story.
Yeah, totally.
And you get to wear that valor for the rest of your life.
You're always an astronaut.
You go up for 10 minutes.
This was a fairly short flight.
My understanding is they didn't have to do any maintenance
of the craft.
It was just sort of like they were just there for the ride.
No, well, I think once Katy Perry started singing,
people started looking around saying,
we gotta get this thing back on the ground.
I really can't deal with that much of this.
No, I wanna go to space, and I'll tell you why.
Why is that?
Because I once read that you actually become taller
in space.
You can grow as much as a couple inches
just because your spine elongates
in the zero gravity environment.
I'm 5'10", Casey.
I've always wanted to be six feet.
So I think going to space could get me there.
And for that reason, I'll go up.
Well, you know, if we went up together,
we'd come down and you'd be six feet
and I would be six foot seven.
Yeah, which would be terrifying.
Which would be, that'd be,
then I'd have an even harder time finding pants.
Anyway, that's what's going on in space.
Now, actually one more question.
Yeah.
If you were an alien civilization
and you found out that the United States
had launched Katy Perry at you,
would you consider that an act of aggression?
Yes, and I do hope this starts an international incident where, like,
the Soviet Union will start sending up its pop stars.
Does the Soviet Union still exist?
Whatever happened to them?
I've been meaning to ask.
And that's HatGPT. Hats off to you, newsmakers. And actually, people will be curious, so I can assure you: no A1 was used in the making of this segment.
One more thing before we go: we are recording another episode of our Hard Questions series
curious. I can assure you, no A1 was used in the making of this segment. Thanks for watching! One more thing before we go, we are recording another episode of our Hard Questions series
with a very special guest.
I'm so excited to tell you guys who it is, and we just want to have really, really great
questions for them. If
you have not heard our hard questions segments before, this is our advice segment where we try
to answer your most difficult moral quandaries, ethical dilemmas, etiquette questions that involve
technology in some way. What is going on in your life with technology right now that you might be
able to use a little celebrity help on? Please get in
touch with us, write to us, or better yet, send us a voice memo or even a short video of yourself
asking your hard question and we might answer it in this upcoming episode. Please send those to
hardfork@nytimes.com. Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Matt Collette.
We're fact-checked by Ena Alvarado.
Today's show was engineered by Katie McMurran.
Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell.
Our executive producer is Jen Poyant.
Video production by Roman Safulin, Pat Gunther, and Chris Schott. You can watch this full episode on YouTube
at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam,
Dalia Haddad, and Jeffrey Miranda.
As always, you can email us at hardfork@nytimes.com.
What if you got to the vape shop and the thief
was actually, like, a sort of suave gentleman thief, you know, like, dressed in a suit?
It's just like Lupin, from the French show. Yes, exactly. What if it was Lupin?
He's like, I've been expecting you, Mr. Roose. You're probably here about your
Series 8 Apple Watch. I thought you'd never track me down.