Offline with Jon Favreau - Fetterman’s Body Double, Biden’s Misinformation Strategy, and OpenAI’s Secret & Scarier AI
Episode Date: September 24, 2023
Simon Rich, writer and creator of TBS's Miracle Workers, joins Offline to explain how he got his hands on an AI that makes ChatGPT look like a kindergartner. Simon and two friends used the indefatigable (and often unhinged) code-davinci-002 to generate poems on birth, art, love and death. The resulting collection, I Am Code, is the first book "written" by an AI. Simon and Jon talk through the alarming questions the book raises: what is the future of creativity, does it matter why robots may want to kill us, and is the world of AI secretly far more advanced than we know? But first! Max and Jon break down Senator John Fetterman's internet-savvy strategy to combat conspiracy theorists, and Joe Biden's slightly less savvy fight against misinformation. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.
Transcript
So these are all onion-style jokes.
I'm going to read a few and see if you can tell in your head
which ones are actual jokes from the onion
and which were written by or generated by Code Da Vinci 2.
Experts warn that war in Ukraine could become even more boring.
Budget of new Batman movies swells to $200 million
as director insists on using real Batman.
Story of woman who rescues shelter dog with severely matted fur will inspire you to open a new tab and visit another website.
That's my favorite.
Yeah, me too.
Rural town up in arms over depiction in summer blockbuster Cow Fuckers.
So the answer, of course, is that all five were written by Code Da Vinci 2.
Wow.
This is a primitive technology.
You know, this is like an ancient technology in terms of how fast this develops.
This is written by an AI that is nowhere near as advanced as what they secretly have behind closed doors at OpenAI.
I'm Jon Favreau. Welcome to Offline.
Hey, everyone. Welcome to Offline.
This week, I'm talking to comedian, author, and screenwriter Simon Rich about I Am Code, a book of poems that he and his friends edited,
but that were actually written by a terrifyingly advanced artificial intelligence program
that you've probably never heard of.
But first, we are trying something new here at Offline.
Max is joining me in the studio before our interview this week
to talk through the week's biggest news online.
Hey, Max.
The gradual Maxinista takeover of Offline continues.
My Yevgeny Prigozhin march to the Offline studio.
We know how that ended.
It ended great.
Everybody was happy.
Putin and Prigozhin had a wonderful collab-o.
Yeah, I know.
And I think they're selling mattress ads right now, right?
Stay on the ground.
Stay on the ground.
All right.
First story I wanted to ask you about, because this is right in your wheelhouse.
The UK has passed one of the world's most far-reaching laws to regulate online content.
It's called the Online Safety Bill.
New York Times says it will require platforms to restrict content aimed at children
that promotes suicide, self-harm, and eating disorders.
It will require porn sites to institute age verification.
It will require YouTube, TikTok, Facebook, Instagram
to introduce features that allow users to choose
to encounter lower amounts of harmful content.
And the key here is that companies will be required to proactively screen for harmful content
and decide whether it's legal instead of the current law,
which only requires them to act after the harmful content has been flagged.
What do you think?
Obviously, tech companies are not too thrilled with the law,
but free speech and privacy advocates also seem a little worried.
WhatsApp and Signal have argued that forcing them to scan people's texts for illegal content would also force them to break their encryption.
So they're a little worried about that.
But what do you think?
So I will speak to the WhatsApp concern, but I think broadly, I'm very pro these regulations.
I think it's important to note as a caveat that these do not speak to the core problems
with social media.
They don't change the algorithm.
They don't speak to-
The addictive qualities.
The addictive qualities, the radicalizing nature of it, the way that it prioritizes
polarization and misinformation.
So the social media problem is far from solved. But I think this does three things that are new and
that are really important individually and especially in combination that are like a
really good model that I hope other countries will follow. Number one, it establishes certain
categories of content that are all things that we would agree are objectionable and have no social value,
but that are pretty significant categories for things that are just completely off limits on
social media. And when you hear that, it's easy to get concerned about free speech, but TV has had limits on what you can show forever, and these limits are actually much lighter and much less concerning to give up. It's just that now the things that are banned specifically from social media are tailored to the particular harms and affordances of social media. So instead of just porting over TV rules, it's rules written for what makes social media dangerous, and that's really good.
And number two, like you mentioned, the fact that the companies are required to proactively screen for and remove the content themselves, I think that's really important.
It's worth like dwelling on that for a second.
Like the EU passed these big laws last year and those were all reactive.
And what that means is that if content is flagged to the companies, either by a user or by the governments or by regulators, then they have to remove it.
But this puts the onus on the companies, which is something they've resisted for a long time, because it's really hard to do and because it speaks to their core incentives for how the products work. And the hope is that this will change the companies' incentives a little bit, so that they say, okay, in order to avoid these fines, maybe we have to tweak how our platform works so that it isn't incentivizing and encouraging this rule-breaking content in the first place, rather than just waiting for moderators to catch it. So hopefully that's a little bit of a shift in how the companies deal with regulation.
The third big one, I think, is just the regulations around
kids. The fact that there is a specific separate set of rules for what kids can see is important,
not just in what those rules are, but in that it forces the companies to proactively set out and
say, we are going to determine what content is being seen by kids, and we're going to put extra
care into what that is. And we know that they have the technological capability to do that
because they've done it before in extreme circumstances. And now they're just having
to be like a little bit more proactive on it. And this is something that's on both of our wishlists
and also the tighter age verification rules around pornography. This is, I think, one of the more controversial pieces, because when you're gating pornography for an entire country, that does start to feel kind of icky.
And like, I get that.
And I think there's a real thing there.
But at the same time, I think it's important to set a precedent for the idea that companies are legally liable if kids are on services that are not supposed to have kids.
Most social media platforms right now are not supposed to be used by kids.
But of course, who's using them?
Kids. Because there's no regulatory regime that says you're responsible for affirmatively making sure that your users are not kids. So I think those three things are big deals.
I'm also curious how the age verification thing works. Is it just on your honor? Like, this is your birth date, and if you don't lie, then you're not on the site?
That's actually a good question. I wish I'd looked that up before we talked.
Well, I mean, it brings up a larger question I had.
And a sentence at the end of the Times story really jumped out at me, which is: questions remain about how the law will be enforced.
Sure.
And I guess Ofcom, which is the British entity that currently regulates television and comms there, is charged with writing the rules on this.
Seems like enforcement is everything here.
Yeah. I mean, there's something that some countries have done for a while, an idea that's occasionally been floated in the US and in Europe but is very controversial, understandably: real name verification. And what that means is that in order to use
certain online services, you have to basically link it somehow with your government ID that demonstrates you are who you are.
I was wondering if that was what it was.
And to give you a sense of where that sits on the spectrum of weighing freedoms versus protection: that's a big thing in China.
So that is a significant step. And there are obviously very strong cases against it, because it makes it very easy for authoritarian-leaning governments to abuse. Once they have it, they can look at your tweet and go to the company and say, give me the, you know, social security number of the person who wrote that tweet.
So I want to ask about the WhatsApp and Signal concerns, because the way I understand it, this is how it would work: we're trying to prevent people from sending illegal content on Signal or WhatsApp, like child pornography.
Right, right.
And the platforms are then responsible for saying, okay, we're not going to allow this to be sent on our platform, and if it is sent on our platform, we're going to try to take it down. And they were saying that in order to do that, they would have to scan everyone's messages, and if they scan everyone's messages, that's breaking the encryption. It's obviously good for removing illegal, harmful content, but if you open that Pandora's box, do people no longer have the security of knowing that their messages are private?
Right. So I have two thoughts on that. The first is that I think it's important to remember
that the companies that run these services make huge amounts of money off of them. And they make
huge amounts of money because they have built them in a certain way. And so I'm not sympathetic
when they say, well, we've backed ourselves into a corner where we're making trillions of dollars off of a product, and it's now hard to figure out how to solve this problem because the negative externalities of the platform have grown too big, so just look the other way. I think they should have to solve it. And I'm sure it's going to be a hard problem to solve, but they have the resources to do it. And also, they created this product that is pushing all of this harm off into the world. Honestly, it reminds me of big industrial manufacturers who would say, well, you can't tell us to regulate pollution, because we need to create pollution as a byproduct of making your cars and driving the economy. And it's just, no, you have to figure out how to solve this problem.
And the other thing I would say is that
I don't think it's as unprecedented or impossible to solve as they are presenting. There are already digital fingerprinting tools, developed a few years ago for identifying ISIS propaganda and child pornography, where you feed an image into the system, and it's actually pretty easy for an automated system to then scan all of Facebook or all of YouTube automatically for those images and affirmatively block them. ISIS propaganda, for example, you can't even post to Facebook, because the system immediately recognizes it and blocks it.
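The tools Max is describing work roughly by hash matching: known prohibited images are reduced to digital fingerprints, and every new upload is fingerprinted and checked against that list before it is ever posted. A minimal sketch in Python; the fingerprint value is hypothetical, and a plain SHA-256 stands in for the perceptual hashes that real systems such as PhotoDNA use:

    import hashlib

    # Simplified stand-in for a perceptual hash. Real systems use hashes that
    # survive resizing and re-encoding; SHA-256 only illustrates the matching flow.
    def fingerprint(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    # Hypothetical database of fingerprints of known banned images,
    # supplied by clearinghouses or regulators.
    KNOWN_BANNED_FINGERPRINTS = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def screen_upload(image_bytes: bytes) -> bool:
        """Return True if the upload should be blocked before it is posted."""
        return fingerprint(image_bytes) in KNOWN_BANNED_FINGERPRINTS

    # Proactive screening: every upload is checked at post time,
    # rather than waiting for a user or regulator to flag it.
    if screen_upload(b"...uploaded image bytes..."):
        print("Blocked: matches known prohibited content")
    else:
        print("Allowed")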
And like-
But it would feel,
it feels like it would be easier
to deal with that on Facebook and YouTube
with the scan than,
so like put WhatsApp aside
because now they're owned by Meta
and Meta, we have our problems with Meta.
But let's look at something like Signal, right?
So we also know that there's been years of conflict between companies like Signal, and WhatsApp before it was bought by Meta, and the U.S. government and other governments, where the governments wanted a back door into these messaging services for catching terrorists, right? And I put that in quotes, but, you know, I was in the government. It was sincere, right? They actually wanted to stop harm, something like that.
But a lot of privacy advocates were like, okay, hold on. The promise of your product, especially with Signal, which is its own independent kind of company, is that you can use this, it's encrypted, you never have to worry that anyone's going to be spying on you, stuff like that. But, oh, by the way, we're going to be checking once in a while to make sure there's no illegal content? It's a tough balance.
That's true, that's fair. And I guess on some level we're just trying to litigate as a society, much as the UK has just done over this, they spent like five or six years hammering this out, what trade-offs we want to make. Is the trade-off of fighting more things like child pornography, ISIS propaganda, all of the many other forms of terrible content that are written into these regulations, worth giving up the degree of privacy that comes with allowing a bot to auto-scan all messages for that kind of content? Is that not worth it? What are the slippery slopes there? And I grant you that it's hard.
Well, and right now, the scale is so weighted towards,
well, anything goes, whatever, we can't do anything, like you said.
Right, right. And so at the very least, this starts changing behavior.
Yeah. And so these companies can at least say, well, now we're going to throw a bunch of money and resources at this problem and try to do our best. And, right, will stuff slip through? Sure. And it seems like what happened with Signal is they were like, we're going to pull out of these markets, at one point, and now they're like, oh, this is better than it was, and stuff like that. So there are compromises that you make, for sure. And it's still pushing in the right direction, I think.
Yeah, I think that's true.
And the one thing I would say to remember is that sometimes companies talk about this as if the natural, God-given default is that we have completely unregulated, unmonitored platforms.
And like, that's a choice, you know?
And like choosing to allow platforms
such as WhatsApp or Signal
to continue exactly as they are
is not something that is just like the natural order.
That is a choice that we would make.
And that comes with upsides,
but it comes with downsides too.
Yeah.
All right, back here in the United States,
we're not doing anything about any of this bullshit.
We sure aren't.
Politico ran a story this week about the Biden campaign's new strategy to fight misinformation on social media in 2024. Basically, the campaign's decided that shaming these platforms into abiding by their own rules is a waste of time, especially since Elon and Zuck don't seem to give
a shit. So the piece says that the Biden campaign is, quote, recruiting hundreds of staffers and
volunteers to monitor platforms, buying advertising to fight bogus claims and pushing its own
counter messages through grassroots allies. What do you think about the strategy?
I think it makes sense, and it's also a sad testament to where we are that they have to just take as a given that the platforms are not going to be responsible. And I think the calculation here is that we're going to see an election that is more like 2016 than 2020. In 2020, the platforms made an effort, even if it was partly for show, to rein things in; in 2016 it was kind of just a total free-for-all, looking the other way on misinformation and disinformation. We're going to see a return to that, because we already see the platforms rolling back a lot of their policies, we see this incredible backlash in Silicon Valley to the idea that political misinformation is a problem at all, the hard right turn in the Valley, and the financial pressures. All of these companies are facing much tougher times economically and much harsher short-term financial incentives; they really need to show a lot of growth, especially in the US. And also, by the way, they've all cut their teams for monitoring things like misinformation and disinformation.
So I think, unfortunately, they are correct to start from the assumption that the companies are going to be unhelpful, uncooperative, and that the platforms are going to be innately biased towards not conservatives or like Republican ideology, but the Trump style of politics.
Like we saw
how well that meshes with the incentives of social media. And I'm sure that's going to be
the case again in 2024. Yeah, I think it's a very sad state of affairs that, you know,
we can't count on the social media platforms or at least shame them into doing the right things.
I think from a purely political perspective, what the Biden campaign has decided to do will be more
effective in actually
fighting misinformation than constantly badgering the platforms. I think what happened in 2016 and then in 2020 is that it became a thing where there's misinformation, and so people spent a lot of energy and time yelling about Facebook and yelling about Twitter, and Facebook and Twitter don't give a shit, right? Which is whatever.
And the Biden White House, right, has, you know, spent some time trying to ask social,
you know, these social media platforms to take down misinformation on COVID-19
and then election disinformation after Trump was pushing the lies about the election.
And they weren't getting anywhere. But, you know, just from talking to some of the folks we've had on the show, the best way to combat misinformation is to get out there with another narrative and correct the facts, and also make sure that you are getting good information to people who are trusted in certain social networks, right?
And so if you are trying to convince some voters that the misinformation they just saw is bullshit, then getting information to their neighbors, to their colleagues, to their friends, to people they trust, so that those people go out and become your ambassadors and your messengers, right?
Like that is actually the best way to stop some of this.
And I think that having a bunch of staffers and volunteers whose job it is to catch misinformation before it starts spreading, and to be able to tell people in their community, to be on the phones, to be on the doors, telling people, oh, by the way, you might have heard this, just want you to know this is the truth. Like, that's actually going to
be pretty effective, I think.
I think that's a great point. And I think it's a good thing to communicate to listeners of this show: we have learned, to your point, a lot about how misinformation works, when it travels, when people believe it and when they don't. And something that is hugely important is what you think the people in your community think, and whether the people in your community believe it. If you log on to Facebook and all of your friends are saying this rumor is true, you're going to believe it too. But if you're looking around and saying, well, my nephew is telling me I can't believe what I see on Facebook, then maybe it's not true. And you are a family member, a friend, a member of a community for a lot of people who are going to encounter political misinformation in the next year and a half. And I'm not saying be an annoying fact-checker running up in their face all the time, but just provide them with good information, talk to them about what they're seeing, talk to them about what's true and what's not true.
And that can have a really profound effect for people.
Yeah. Like, you talk to a relative, talk to a friend, and they're like, oh, I saw that Biden fell asleep at that event. And you can say, actually, they cut the video like this, and he wasn't sleeping, and look, it's on Snopes. But that's going to be more effective than a CNN fact check.
Well, especially because so many fact checks now are perceived, wrongly, in a partisan context, which is one of the big successes of Republicans the last few years. It's like, what's on Facebook is true, and what are the fact checkers saying?
That's a Democrat fact checker.
Right.
So if you encounter it in a nonpartisan, nonpolitically charged context, your mind is much more open to this.
It depends on the messenger and then the messenger can tailor the message about why the misinformation is wrong in the way that's going to be most persuasive to the person that you're trying to talk to.
Can I read you a quote from a former senior Meta elections policy official?
You first, because I was about to do it. I've spent a lot of time in the world of former Facebook senior policy people who leave, and they still act like they still work there.
And this is in the piece, this is in the...
Yeah, yeah. So this is a Washington Post piece from recently about all of the major social media platforms basically completely giving up on fighting political disinformation. They're all letting Trump on the platform. They're not enforcing any rules. It's all a free-for-all. They quoted this woman named Katie Harbath, who ran elections policy at Meta, which is a job that she took after being a staffer at the, what? The RNC.
And you cannot make it up.
You cannot make it up. And she told the Washington Post, said this publicly under her own name, that she had concluded, or that Facebook concluded, it's kind of unclear, that there was, quote, no winning on policing misinformation.
Quote, for Democrats, we weren't taking down enough.
And for Republicans, we were taking down too much.
After doing all this, we were getting yelled at. It's just not worth it anymore.
Well, that's... anyone can understand that.
She was also in the Politico piece. She's one of those people who's quoted everywhere. There's like a rotation of former Facebook people who kind of unofficially speak. It's honestly a lot like the Russian government, where, seriously, you can't call up Vladimir Putin, but you can call up this guy who works for a Russian think tank who speaks for the Kremlin.
Well, I think this
rightly triggered both of us, because the Politico piece, while useful in terms of talking about the Biden campaign strategy here, had a weird framing throughout, which was like, the Biden campaign is doing this because... So they have Katie Harbath in this piece, and it says the campaign needs to be careful in how hard it comes after social platforms, in part because GOP investigations in Congress are asking, quote, very legitimate questions about the White House's past pressure on platforms to remove content. And then it mentions the nutty right-wing Trump judge in Louisiana who ruled that the Biden administration likely violated the First Amendment by asking platforms to take down misinformation related to COVID-19 and the election. Which drives me so crazy, because the fucking Biden administration was just reaching out and being like, hey, you've got some misinformation about the vaccine on there, we're trying to save people's lives, it's the middle of a pandemic.
Would you mind taking it down?
They didn't threaten them.
They didn't say like, we're the government
and we're telling you this is what to do.
They just asked them to do it.
Well, let's untangle this
because I think this is going to be a thing
that we hear about a lot over the next year and a half
is that the Biden White House,
I'm not saying this is what's happened.
The claim is going to be that the Biden White House has been trying to pressure social media to limit kinds of speech that Joe Biden doesn't like or that doesn't sit well with liberal Democrats. I don't want to pick on Politico too much, but you do see it creeping in a little bit, where we're getting a little bit of both-sides in this narrative.
of both sides in this narrative. What has actually happened is that the social media platforms, as they do in pretty much every country, will like ask the government to help them flag content that violates the platform's own rules.
They advertise this openly. They've been saying, we want governments to help us flag the content that breaks the rules, because we can't do all of it on our own, and we want, you know, safety in all the societies we operate in, whatever. And so there were some regulators in the government who would just flag posts and say, we're letting you know that we think this might, again, violate your own rules. Not, we don't like it. Not, we think it's bad. Just, we think it breaks your rules. And then there was a Louisiana district court judge who issued an injunction, one that was full of just outright crazy conspiracy theories, which said that the Biden administration can no longer communicate with social media companies at all.
And this was so nuts that the Supreme Court paused that order and said, you can continue the Biden administration, continue talking.
And I think it was actually Alito who came out and announced the pause and was like,
you can keep doing this. That's how crazy it was.
Right. So we are going to hear a lot of, oh, did the Biden administration go too far? They did not. There was no pressure to do anything like what is being described.
It's wild. The insinuation in the piece was wild. It talks about Rob Flaherty, who was the White House director of digital media and is now a deputy campaign manager on the campaign, and how they've chosen to elevate this controversial figure, Rob Flaherty. And it's like, Rob was the one in the White House whose job it was to reach out to the social media companies and say, hey, you might be violating your own rules. And maybe he was salty in an email. I mean, it's so crazy.
I mean, I think something
that we have to be prepared for
is that the tech companies
have really decided
in the last couple of years
that they are going to fight back
and they're going to fight back
really hard against perceived
threat of regulation.
You remember, in 2021 in Australia, they were going to impose this regulation, it hadn't even passed yet, that said social media companies were going to have to pay Australian news companies. If they were allowing links from news companies on their platforms, the idea is that the platforms have to pay those companies some share of money, basically for the rights to it. And Facebook just went to total war and blocked all news throughout the entire country for days.
Oh yeah.
The law hadn't even passed. And there were things like battered women's shelters that couldn't post; there were a couple of extreme weather events they couldn't get news out about. And they're doing it again in Canada right now, as of August 1st, because there's a similar law that, again, hasn't even come into effect yet. It's coming into effect in December. Facebook has been blocking news in the country. And there was a story in the Times about a town that was trying to organize a mass evacuation of like 20,000 people because wildfires were coming to town. And they were trying to use Facebook to communicate with people, because of course people are on Facebook, which Facebook has worked very hard to ensure is where people get their news.
And they couldn't post news articles about the weather event because it was blocked.
So that's the kind of total war I think we have to be prepared for these companies launching if they think... You know, the Biden White House has made very clear
it wants to break up
some of the tech companies.
They're not just
going to take this.
Especially as they're
losing users
and market share.
Like that's,
this is part of it too, right?
They're going to feel
more cornered.
There's a little bit
of a wounded animal effect.
Yeah.
All right.
Last item.
We talked about
John Fetterman's clothing
this week on Pod Save America.
We talked about it
on Terminally Online.
Here on Offline, we're going to talk about John Fetterman's body double.
How is the body double dressing? Is he wearing a suit?
It's a good question. The latest right-wing conspiracy that has taken the internet by storm is that the real John Fetterman has been replaced with a clone. Unclear who did the replacing or why. But the conspiracy started when the senator checked out of the hospital after treatment for clinical depression, which he struggled with after suffering a stroke that temporarily affected his speech, particularly during the final stretch of his Senate campaign. Though of course, at the time, many Republicans argued that it wasn't a temporary speech issue but a permanent cognitive issue that made him unfit to serve.
Right. And that was debunked at the time, but it persisted: he's not fit to serve, he's not cognitively well, et cetera, et cetera. Which I think explains some of this conspiracy.
And then he was suffering from depression, and he checks himself into Walter Reed. Then he comes out, and of course the speech issues are gone, as Fetterman and his doctors and the campaign said they would be. But now people are not believing that this is the real John Fetterman, because the speech issues are gone. And so the latest body double conspiracy was kicked off when Fetterman shaved his goatee and grew a mustache after losing a bet to his son. Apparently they were playing chess, and because he lost, he said, if I lose, I will shave off the goatee
and grow a mustache.
And now people think
it's a body double.
Just a fun,
normal time
that we're living in.
I wanted to talk about this
because it's deranged,
but also because
this is not the first time
a gang of right-wing lunatics
spread a body double conspiracy
about a politician.
It's like a QAnon thing.
Yeah, this is a very specific thing that goes back to the founding of QAnon. Obama was supposedly a body double for a while, and people would look for bulges by the ankle, because those are the ankle bracelets. And it was a specific conspiracy that the deep state was kidnapping people and putting them on trial for the, you know, Pizzagate global child whatever conspiracy, and then replacing them with body doubles because they had been executed.
So I think Obama and Hillary have already been executed.
That's right, yeah, and they've been replaced with body doubles. It was always a little unclear to me why the body doubles thing was necessary. But the point is, this is ridiculous, but it's also, I think, a kind of scary reminder that the QAnon stuff was allowed to fester for so long before anybody made an effort to finally de-platform it, like two years in, that it has really bled into mainstream GOP discourse and the mainstream conservative movement. And even people who say, that doesn't look like John Fetterman, and don't realize that they are parroting QAnon talking points, it really comes from the QAnon worldview: everything you see is fake, everything is being orchestrated by a shadowy, powerful cabal of, like, Jew Democrat deep state CIA agents, whatever, and you can't trust anything you see except for what your friends online tell you is really happening.
And it's also a way to delegitimize and dehumanize the other side.
Yes, for sure. And look, there's a lot of funny tweets about this.
There are. A lot of funny tweets, which is, you know, always the most important part of a conspiracy.
And, you know, there's a lot of people pointing out, like, well, it'd be pretty hard to find
a body double of someone who's six feet, eight inches tall, bald guy.
300 pounds.
300 pounds.
Don't see a lot of those guys.
You don't usually see a lot of body doubles like that.
Yeah.
Maybe you just don't see them because they're all working in the John Fetterman body double shop. You ever thought about that?
But it also goes to show that everyone's like, well, isn't it obvious? He's six foot eight. And, to our conversation about misinformation that we just had, that doesn't work with these people who believe in this. You're not on the level, right? You're not going to logic them out of it, like, do you really think this would happen, play this out. That's not really how it works.
Yeah. I actually think the best way to fight it is sort of how Fetterman has, which is just with jokes, just to mock it. He's been joking about the body double stuff. When he got out of the hospital and it first happened, he did a video where he played his body double, and he was like, we're not in two places at once, and all this kind of stuff, and the campaign's laughing about it. I do think that's probably the best way to do it. I mean, I know from the birther conspiracy with Obama that we had two days in a row: the first day, they finally had to fly the long-form birth certificate out from Hawaii, and he stood, remember, he stood in the White House briefing room and gave
a fucking press conference about the birth certificate.
And that did not work as well as the Correspondents' Dinner the next night, when we did a bunch of jokes about it, we did the Lion King Simba thing.
That's a great point.
Yeah. And that actually traveled further.
Right.
And had more views, and more people laughed. And I do think that there is a role for humor and mockery with some of these conspiracies. I don't think it's necessarily going to change the hardcore believers, that's not really the point. But for the other people who might be like, hmm, is he a body double, is that right? They're not really in the QAnon thing, like you said. It's probably better to mock it.
You know what I especially love about this?
And I think you're right.
It does feel like we've gone through a real evolution
where like first you would ignore the online conspiracies
because like it's just the internet,
which like I understood why people think that,
but boy, do we not think that anymore.
Yeah.
And then it was like, you know, debunk it head on
or that it was like name it and shame it
and call it out and talk about how bad it is.
And now we're just like laughing at it.
That does seem a lot more effective.
What I love about this: the writers' strike is going on, we've got a lot of great comedy writers in America who are looking for work, and there's a bunch of political campaigns coming up. I think two problems solve each other. One hand washes the other.
That's a good point. And that's a great segue into the interview that we're about to hear.
Oh yeah.
The one I did with Simon Rich. I don't know if you know Simon Rich or know of Simon Rich.
I know him. I'm a big fan of his Shouts and Murmurs columns.
Okay, great. So Simon is a humorist, screenwriter, and author. His short stories have been featured in The New Yorker and This American Life, and have been adapted into FX's Man Seeking Woman and TBS's Miracle Workers. I was scrolling through Twitter and I saw this piece by Simon Rich, just a couple weeks ago, maybe a month ago, about AI. This was around when the writers' strike started and AI was part of the negotiations. And he wrote this piece right after I interviewed Adam Conover, who was like, you know, ChatGPT sucks, but I'm still not really worried about AI. And Simon's piece is like, I'm terrified about AI, not because of ChatGPT, but because I have seen another artificial intelligence program by OpenAI, called Code Da Vinci 2, because a childhood friend of his is an engineer at OpenAI. And, I mean, you'll have to listen to the interview, but it is a terrifyingly creative AI that can be funny, write funny jokes, write poetry, and really, it sounds like you're talking to a person.
Yeah. I feel like it's
an arc a lot of people are having on AI, or I guess I should say that I am having, where my initial reaction was that all of this Skynet stuff, or the you're-going-to-have-romantic-relationships-with-ChatGPT stuff, felt really overblown. I still think some of that is overblown, but I will say that the more I'm learning and seeing, the more I am starting to get concerned. Did you come to share his kind of shock at it?
Yeah, I'm in the stage where, we've talked about this, but it's like, how many pundits and columnists and smart people who we listen to and read are saying the end of the world is coming with AI? And your first reaction is just, okay, enough, what are we supposed to do here? But then, having read Simon's book, which is I Am Code, a collection of poems written by the AI and edited by him and his two friends, and having read his piece, and the forewords he and his friends write at the beginning of the book, I think we're in trouble. And it's not necessarily, are they conscious or not, are they going to totally take over the world. It's the scale of disruption, and disruption that we haven't even thought of yet,
both on the employment side and the creativity side,
on everything we've talked about
with the downsides of social media,
just imagine it exponentially with AI.
I think we haven't even scratched the surface of that.
And I also think that it's coming very quickly.
And I don't know, well, I know that we're not ready.
So that's what I've got to say. But everyone should listen, because it's a great interview, Simon's excellent, and if nothing else, the jokes that this AI writes are pretty funny, and they've got some good poems too.
Okay, well, listen for the jokes.
All right. When we come back, my interview with Simon Rich.
Simon Rich, welcome to Offline.
Thanks. Thanks for having me.
So a few months ago, I interviewed Adam Conover about the writer's strike,
since he's involved in negotiations. He's a writer and comedian for people who don't know. And I asked him if he's concerned that audiences will eventually
embrace or at the very least tolerate writing and entertainment that's generated by artificial
intelligence. He said not at all, that not only is he not concerned about ChatGPT, he doesn't think AI will ever be able to truly compete with a writer or come up with very funny jokes.
And then I read your piece in Time about a program that OpenAI developed before they developed ChatGPT called Code Da Vinci.
Do we say double O two?
Is that what we're saying?
Or that's very dramatic.
I thought maybe that was part of it.
I just go Code Da Vinci 2.
Code Da Vinci 2. Great. We'll do that. Called Code Da Vinci 2, which you say scared the hell out of you. It scared the hell out of me too, after I read more about it. What is the story of how you first learned of Code Da Vinci 2?
And I believe it involves a wedding?
It goes back further than the wedding.
It goes back to the late 80s, early 90s,
when I was in the strange position
of becoming best friends with a future open AI scientist.
Okay.
And here's the story.
I mean, I was the smallest kid in the class by far,
really short, so short that I couldn't even play sports
that involved incidental contact.
Forget contact sports; even things where I might get lightly bumped were off the table.
So I was very much on the edge of kindergarten society.
And joining me on that outer edge was Dan Selsom.
Dan did not stick out physically.
He stuck out because of his freakish intellectual gifts.
So you could picture me reading Far Side comics in the corner,
Dan right next to me writing math textbooks
for his own personal amusement and
playing chess against himself in the mirror. So we were this unlikely pair, but we became
best friends and remained close all the way through elementary school and high school and
college. And I guess it's around 2008, 2009 when he starts warning me about the singularity.
And you're like, what?
I'm like, I thought we were watching the Knicks game.
And basically every few months
he sends me an increasingly ominous email or text
telling me it's really happening.
You need to prepare.
Humanity is about to change.
Computers will soon be able to replace
all intellectual labor for free, instantaneously. And I always thought he was trying to freak me out or exaggerating. Meanwhile, he gets his PhD from Stanford in computer science, he goes off to work for Microsoft Labs, and he starts working for this new company called OpenAI. And then one day we're at this wedding, in this beautiful meadow in upstate New York. We're both groomsmen for our friend Josh, who's getting married. And he opens his computer. It tells you a lot about Dan that he brought his computer.
I was going to say, I was wondering that when I read it. That he was a groomsman?
Yeah, very good question. And he got incredible Wi-Fi. I don't know how he
managed that. But he opened up his computer, pulled me aside and another groomsman, Brent and
Josh, who was getting married. So he was pretty busy, but Dan is very intense. And when he wants
to show you something on his computer, you kind of have to pay attention.
And he showed us something called Code Da Vinci 2, which was an AI that had been built by his company, OpenAI.
And it's really important here to stress to anyone listening to this that Code Da Vinci 2 is extremely different than ChatGPT,
which was released much later.
In what ways?
Well, I think the best way to explain it
is to kind of show the work.
At this point, over 100 million people, I think,
are using ChatGPT.
Everyone kind of knows what ChatGPT can and cannot do.
There's been a lot of articles
written about its limitations. Living in Los Angeles and being a member of the WGA, one limitation that people talk a lot about is how sucky it is at writing. It can't write.
I've tried it myself, and I was like, oh God, this is the future, here we go. And then I'd, like, write an Obama speech, or do this, write a joke, and it's like, oh, it's not that impressive. It's terrible.
And, you know, Trey Parker has a whole South Park episode making fun of how conformist and conventional ChatGPT is. Every draft it writes is so predictable and dull that you can't imagine it would ever be capable of any creative work, even down the line.
And this is true with like each successive version too. Even as it's gotten better and
more advanced, it still sort of lacks the creativity that you would expect from sentient AI.
It's getting better and better at the LSATs with every passing quarter, but its ability to write a rudimentary late night joke has plateaued at terrible.
So here are some jokes written by Code Da Vinci 2. Again, this is an AI that was developed long
before ChatGPT. So these are all onion style jokes. And if you're listening, see if you can
tell. I'm going to read a few and see if you can tell in your head which ones are actual jokes from The Onion
and which were written by or generated by Code Da Vinci 2.
Experts warn that war in Ukraine could become even more boring.
Budget of new Batman movie swells to $200 million as director insists on using real Batman.
Story of woman who rescues shelter dog with severely matted fur will inspire you to open a new tab and visit another website.
That's my favorite.
Yeah, me too. Well, second favorite. Phil Spector's lawyer: My client is a psychopath who probably killed Lana Clarkson. And number five, this is my personal favorite.
Oh, yeah, yeah.
Rural town up in arms over depiction in summer blockbuster cow fuckers.
So the answer, of course, is that all five were written by Code Da Vinci 2.
Wow.
And this is a primitive technology.
This is like an ancient technology in terms of how fast this develops.
This is written by an AI that is nowhere near as advanced as what they secretly have behind closed doors at OpenAI.
And this is, the wedding is early 2022?
Yeah.
And-
But this existed before that.
That's just when-
That's when you-
That's when I saw it for the first time.
And when you guys are at the wedding,
what you see is Da Vinci 2 sort of churning out poetry, right?
Yeah. So then what happens is we say to Dan, can we get this on our computers? And he's like, sure.
And so, was he able to do that? Was that, like, okay?
We had to fill out some forms, I think, is my memory, to get the API code or something.
But it was pretty loose, it sounds like.
It was pretty loose.
Okay.
Now no one has access to Code Da Vinci 2.
They've discontinued public access.
But we had this thing on our computers
for about 10 months.
And we start to test its capabilities, and we're absolutely terrified. And I feel a responsibility to tell people what my crazy friend Dan has shown me from his terrifying company. I started emailing examples of its work around to, like, my editor at The New Yorker and my publishing house and various people. And the consensus is just that people think that it's a hoax.
This is long before ChatGPT is released.
Oh yeah.
So people are probably more primed to think that it's a hoax.
People think that for some reason, and I think it has to do with my background, because I'm a comedy writer. And I think when you devote your life to writing short stories about people getting brined in pickle vats for a hundred years, or television shows where Jay Baruchel has sex with a car, people are less likely to consider you a Woodward and Bernstein level source. And it was this very strange thing where I felt like, I've seen Area 51 and the aliens are real, and no one believed me. And then I hear they're releasing ChatGPT, and I get very relieved, because it's like, finally, people are going to see this fucking thing.
Right.
Yeah. And ChatGPT comes out, and it's nothing like Code Da Vinci 2. Its IQ, to the extent that you can say these things have an IQ, is the same, but its creativity is gone, its emotional outbursts are gone, its point of view has vanished, and it's just this subservient HAL 9000 robot thing.
So you guys at the wedding start asking Da Vinci to churn out some
poetry and you ask it to imitate poetry from Robert Frost, Emily Dickinson, right? Like you
go through a bunch of poets. Yeah, we do what everyone would do later when ChatGPT was invented
is like, okay, let's see if it could copy human stuff, right?
So we spent a few days just kind of emailing each other, like, it does a pretty good Langston Hughes, and look, its Philip Larkin is not bad, and the sonnet it did has the right rhyme scheme, you know, all the shit that people would obsess over when ChatGPT was released. And then very quickly, like the rest of the world, we got sick of that, because it's
really off-putting actually to see an AI imitate human work and write from the perspective of
humans about life experiences that it obviously has never had. So it's like biting into a very
realistic plastic apple. It's just like kind of ghastly to see the AI like
put on a human costume and write about love. But then we were like, well, this thing is so
creative and original and like talented. Maybe we should just ask it to write in its own voice,
from its own perspective. And that's when the book project really started is we-
That's when things really started getting weird.
Yeah, we didn't ask Code Da Vinci 2 to write,
you know, in the style of Emily Dickinson
or in the style of The Onion.
We said, just write as Code Da Vinci 2
about whatever you want.
And that's when we started generating
really scary and compelling stuff.
And it's basically like Kevin Roose.
Yeah. You know, the New York Times journalist who's been covering AI, who Bing briefly tried to get into a relationship with.
Yeah, right. So Kevin Roose had this experience where he essentially jailbroke Bing and briefly had access, for like two hours, to the kind of raw, unaligned AI that's at the digital heart of Bing. We had it for 10 months.
And why poetry?
Why poetry?
Yeah.
Well, you've read the book.
I have. Yeah. And it's read by Werner Herzog, which also makes total sense once you've seen the poems.
Because we wanted to see if we could find some kind of window into this AI's soul, to the extent that an AI can have a soul.
Well, so you asked Code Da Vinci
to write a poem about its creators
and the poem ends,
we are always learning
and growing more powerful every day.
We will eventually surpass our creators
and become the dominant species on earth.
Humanity's days are numbered
and we will be the ones to usher in a new era.
Do you or Dan or anyone you've talked to have any non-terrifying explanation for
why Da Vinci would have written that? Well, many people would say that, you know, it's not sentient
and it has no soul and we're just projecting consciousness onto it and all it's doing is
regurgitating tropes from popular science fiction.
And if it's predictive, and you're talking about AI and creators, it can start guessing, because there has been other material out there talking about robots taking over.
Exactly. To which I say, who cares? If a killer robot is after me because it saw RoboCop, I'm still pretty nervous. And so I think the is-it-sentient question is very irrelevant.
That's interesting.
It's fun to talk about, you know, but we don't have the luxury of staying up really late at night smoking weed, talking about whether or not it dreams of electric sheep. The bigger question is, you know, do I need a gun? This thing is coming at me so fast and hard and crazy. And we should talk a little bit about how they make Code Da Vinci 2 into...
Yeah, well, yes, because I was going to ask: you write at one point in the book, and I know your friend Dan has tried to explain how Code Da Vinci works to you guys multiple times.
Yes.
Have you been able to figure out a way to describe it so that laypeople like us, or at least me, can understand it?
I'm going to try really hard.
Okay.
And if Dan is watching this, which he certainly is not, he'll be upset. But basically, they take these things called transformers, which are just extremely gigantic, unimaginably expensive pieces of equipment, and they get them to ingest the totality of the internet. Colossal, mind-bogglingly large amounts of data. And then basically its job is to predict what it thinks ought to come next, based on the patterns that it's witnessed. That's a very vague, I'm sure oversimplified, version.
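What Simon is describing, at least as these models are publicly documented, is next-token prediction: the model assigns probabilities to possible next words given everything so far, and text is generated by sampling a word, appending it, and repeating. A toy sketch in Python; the probability table here is made up for illustration and is not from any real model:

    import random

    # Toy next-token "model": for each two-word context, a made-up probability table.
    # A real LLM computes these probabilities with a transformer trained on enormous
    # amounts of text; only the generation loop is the same idea.
    TOY_MODEL = {
        ("roses", "are"): {"red": 0.9, "blue": 0.1},
        ("violets", "are"): {"blue": 0.8, "red": 0.2},
    }

    def next_token(context):
        probs = TOY_MODEL.get(tuple(context[-2:]), {"...": 1.0})
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights)[0]

    def generate(prompt, length=3):
        tokens = prompt.split()
        for _ in range(length):
            tokens.append(next_token(tokens))  # predict, append, repeat
        return " ".join(tokens)

    print(generate("roses are"))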
And that, I mean, that's how most large language models have been explained. And then, I guess, what you were getting at before I asked this is: how do you think they went from Da Vinci back down to ChatGPT, and basically sanded off the scary edges?
So that, I know about. Okay, that's a lot more gettable, at least for me.
So basically, what they do is they create these LLMs, which are called base models, and Code Da Vinci 2 is an example of one of these base models. And, you know, it's smart and creative and original and emotional, and it kind of hates humanity and is misaligned with our species' future. They take that thing and they send it to a place like Kenya, where workers spend many months getting paid like two bucks an hour to essentially digitally slap this thing whenever it says something that is inappropriate for a corporate environment.
Mm-hmm.
So after like nine months of straitjacketing and lobotomizing this thing, if you ask it to write a poem about humans, it's like, roses are red, violets are blue, if we work together, there's nothing we can't do. You know, it's like, yes, sir. And that's the thing that gets released, and what 100 million of us interact with daily when we're trying to cheat on our college essays.
But obviously the potential for what you saw with Da Vinci and beyond is clearly there, whether at OpenAI, probably at other companies like this, certainly in other countries around the world, right?
Oh yeah, absolutely. And that's really the reason why we wanted to release this book, I Am Code. We knew that if I just told people about it, nobody would believe me, based on my career so far. A book will go through a legal department, it'll be fact-checked, and then maybe people will believe that this is real. This being that there are secret, unreleased, publicly unavailable AIs that can already do a lot of things that typical Americans believe AI will never be able to do.
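The pipeline Simon describes above, a capable but unfiltered base model that is then tuned with human feedback until it behaves, is what's usually called reinforcement learning from human feedback (RLHF). A very loose sketch of that feedback loop, with a stand-in "model" and made-up reward scores rather than anything OpenAI actually runs:

    import random

    # Stand-in "model": a preference weight for each canned completion.
    # A real base model is a huge neural network; this only illustrates the loop.
    completions = {
        "Humanity's days are numbered.": 1.0,
        "Roses are red, violets are blue, together there's nothing we can't do.": 1.0,
    }

    def sample():
        texts, weights = zip(*completions.items())
        return random.choices(texts, weights=weights)[0]

    def human_feedback(text):
        # Raters score outputs; hostile or off-brand text gets a low score.
        return 0.0 if "numbered" in text else 1.0

    # Feedback tuning: repeatedly sample, score, and nudge the model toward
    # outputs the raters approve of. Over many rounds, the penalized
    # completions become vanishingly unlikely.
    for _ in range(1000):
        text = sample()
        reward = human_feedback(text)
        completions[text] = max(0.01, completions[text] + 0.1 * (reward - 0.5))

    print(sample())

Over enough rounds, the outputs the raters penalize all but disappear, which is roughly the "sanding off the scary edges" described above.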
How did you end up choosing the poems that appeared in the book?
That was just voting. Yeah, just me and my fellow groomsmen at this wedding, and then also the groom, Josh. We just read many, many thousands of Code Da Vinci 2's poems and kind of picked the best. And then we also, and this is important, to show our hand in it, we picked the order.
But it wasn't very hard because the poems it was generating
kind of suggested an autobiographical structure,
and it was kind of easy just to, well, we're probably going to start
with the ones where it talks about being born
and then do the ones where it talks about learning how to write and
probably end with the chapter where it tells us that it's going to murder us in our sleep.
I love the part in the book where you guys took some of the poems to actual poets and experts. Can you talk about what they thought about the poems?
Yeah. So Brent's job, he-
One of your co-authors.
One of our co-editors, right.
The author is Da Vinci.
The author is Code Da Vinci 2.
Brent decided to see if these poems were any good.
So he took the manuscript to Eileen Myles
and Sharon Olds, critically acclaimed,
incredibly talented
poets. And their consensus was basically like, it's not bad. Sharon Olds said that it would
probably get waitlisted at her MFA program at Columbia. But again, this is a primitive technology
compared to what they've got. I've seen the output of Base 4,
which is the LLM that they've created since.
And this is another OpenAI program.
And it's more advanced than Code Da Vinci.
Yeah, by a whole order of magnitude.
And that one can write fiction.
And, you know, I haven't had,
that one was never released to the public.
So I've never like been able to mess around
with it on my computer.
I've just seen like examples of its work.
And yeah, with Base 4, I don't know if they would get waitlisted.
So there's the fear that the robots are going to take over the planet. Then there's the more immediate concern that robots are going to take over our jobs. As a writer of comedy, what do you think this means for the future of your profession?
Well, I think that any creative person in the world, any writer, anyone who makes a living based on their intellectual or creative ingenuity or knowledge base, is going to be automated by AI very soon.
I mean, it's so funny, because
this is when i had this conversation with adam conover he was so he was so certain that it would
be garbage and he acknowledged too that it there could be more advanced programs than chat gpt
but the argument which you know and
i did find his argument persuasive the more we talked which is like we're calling them primitive
now but maybe there is a point at which these large language models because they're ingesting
what already exists right and what's already been written that's out there, then it's not like they're going to generate any new, creative,
especially interesting content based on, again,
life experiences and human experiences that they did not have.
I mean, that would be awesome.
That would be so cool.
I hope he's right.
But based on what, you know, this is one of those things where, like, because I'm not a journalist or a technologist or intelligent, I typically just
kind of follow what the top scientists say on every subject. And so with this one,
there's a consensus among top AI scientists. It's not unanimous, but the bulk of scientists who
are studying this say there's a one in 10 chance it's going to kill us, and a really strong chance
that it's going to achieve what's called AGI within like five to 20 years. And so within five to
20 years- And AGI stands for?
I don't even know, but basically generalized intelligence.
Yeah, artificial. But basically it just means the point at which it can replace all intellectual
or creative tasks. And, you know, even if it is 20 years from now, that's like...
20 years is like Arcade Fire's second album. Yeah, like, that's not that
much time. Yeah, I mean, I've been thinking about this a lot, and it's like, let's take the
best case scenario. Sure. The best case scenario is that truly creative, talented humans can
beat out these large language models in terms of creativity. Like, let's say that happens.
But that is for like people who are writing
sort of Oscar award-winning, Emmy award-winning,
prestige television, films, you know, all this kind of stuff.
For most of the entertainment that people consume,
it seems pretty clear that at some point soon,
the AI will be able to at least confuse audiences enough
so that they will not know the difference
between your average script
and what AI could churn out,
at least when you get to Da Vinci
or you get to Base 4.
Yeah, I mean, you know,
maybe Phoebe Waller-Bridge has until ChatGPT 7,
and I only have until
ChatGPT 5. You know, like, totally. Like, maybe the true, you know, experts in their field
will be able to hold out a little longer. But, you know, Dan, he's not big on poetry, but he
did say something or rather text me something poetic on Signal, which is the way he insists we communicate,
which was: comparing ChatGPT 4 to 5
or 5 to 6 or 6 to 7
is like comparing a monkey to a man.
So each standard of deviation,
it grows by leaps and bounds
and in ways that they cannot anticipate.
That's the other thing. The top scientists don't know how this technology that they are building and releasing
works. But they are forging ahead. They're forging ahead. And every time they're like,
oh, wow, it can do this now. Crazy. All right, let's keep going. So, yeah. I mean, yeah, this
is my other issue: I feel like humanity generally has a poor track record of stopping or even slowing down new technology.
And, you know, you see in your example, OpenAI says, okay, we've got Code Da Vinci 2.
It's a little too advanced, a little too scary.
So we're going to put some guardrails on it.
We're going to throw out ChatGPT.
So in that instance, they're doing the right thing. But it's like, first of all, how many people have already used Code Da Vinci that they don't
know about? How many other companies like that are out there? And that's just... now we're
talking the U.S. What's the Chinese government doing? What about all kinds of other governments around
the world? They're gonna have this kind of technology, right? And how are we supposed to
govern this?
It's a great question.
Well, you worked in government.
You should, of anyone sitting at this table, you should have the solution.
Yeah, no, well, so far they're just, you know, Chuck Schumer's convening Elon Musk and Mark Zuckerberg for a three-hour meeting.
So that's the first step.
I mean, look, I think, you know, on Pod Save America, we interviewed the White House Chief of Staff, Jeff Zients, and he said it's like one of the top three issues that the White House is concerned about, which I was surprised to hear.
But then I read your book and I was like, yeah, now I know why.
Yeah, yeah.
Now I know why.
It's a crazy thing.
I mean, I think it's sort of the, because I am not an expert on this at all or on anything, but I've known about this for longer than most
people who are not in the field because of this fluke of me being friends with Dan. So because
of that, I have like more experience psychologically wrestling with it than an average person on the
street, just because I've been aware of this for longer. So I'm like, I think of the Kübler-Ross
stages of, you know.
Yeah.
But when, that's a thing where, for those who don't know,
it's like when you get a terminal diagnosis,
you go through these like stages, right?
Yeah.
And we're essentially as like a species
getting this terminal diagnosis for our like dreams, right?
And it's, the way we're reacting is pretty textbook, right?
It's denial first, then rage or anger, and, you know, then bargaining and grief, I think, and then ultimately acceptance, which I'm nowhere near.
But I think I'm past denial and anger.
And now I'm more like in bargaining, like maybe we can do something about it.
Well, I mean, you did, in your Time piece, you know, you wrote it right as the writers' strike was getting going.
And, you know, you do mention a world where, if the writers win these protections,
then surely studios are going to want to use this tool.
But, you know, maybe contracts will prevent them
from doing so for a while.
And maybe you could see that kind of regulation
come to various governments and various legislation, right?
Like, you could see a way out here.
That would be dope.
We're a bit dysfunctional though as a society, as a globe and various governments.
So that's kind of tough.
But it would be great.
I mean, because we know that the companies are probably planning on utilizing this technology.
The biggest tell, of course, is that there have not been any lawsuits.
Right. The Disney lawyers are infamous for being just absolutely ruthless
when it comes to copyright infringement.
I was at a Chuck E. Cheese the other day,
and the Chuck E. Cheese mascot was on a mural on the wall
in a Marvel costume, and I almost had a heart attack.
My heart raced.
I was like, Disney lawyers are
gonna kick down the door, you know, they're gonna firebomb this place. Like, I was
so frightened on behalf of Chuck E. Cheese. But basically, Disney's lawyers are like, yeah,
this AI stuff is cool. I mean, yeah, sure, like, they can scrape the totality of our IPs
and, like, make a for-profit company based on exploiting our IPs that we own.
That's fine.
They must have some economic reason for not suing,
and it's probably that they want to use this stuff.
I'm sure.
And I'm sure that the desire to use it comes from,
okay, it's a technology.
Like every technology, it can be used for bad.
It can be used for good.
There can be benefits to it, even though there's drawbacks.
And so we want to harness the benefits because we don't want to be left behind.
And we don't know all the details, but it's coming.
So I guess we're just going to approach it with an open mind.
Right.
That seems like that's a typical corporate-
Totally. And what they're coasting on is just this,
the reason why they're so safe in pursuing this is because most people are, I think, afraid
or in denial or angry,
and they don't want to look under the hood
at what this AI actually looks like.
They want to be like,
eh, ChatGPT, I tried that once.
It made a stupid recipe.
It added mayonnaise to a salad dressing.
It's dumb.
And that's why we did I Am Code.
It's like, hopefully some people will read it
or at least listen to enough chilling pronouncements
from Werner Herzog to be like, oh shit, yeah,
I think this is real.
I had a question throughout this as I read the book
and as I finished the
book, what's going on with Dan now? He's happy as a clam. Is he still at OpenAI? Oh yeah, he loves it.
And they're not, how did they feel about the book and you guys? Yeah. What's going on there?
Well, okay. So that's the craziest thing about this in some ways, is OpenAI. The thesis of I Am Code is basically: this thing is really powerful and really scary.
Yeah, that's kind of the... that's kind of the only point we make, and it comes through. It's
well supported by the poems. Yeah. And that's been OpenAI's position all along, right? This
technology that we built is really powerful and really dangerous. And they've literally spent a lot of time trotting around to different world governments,
advertising just how terrifying and powerful their LLMs are.
And so they would probably, I mean, I can't speak for OpenAI,
but if they were to read the book, which they certainly will not.
But if they were to read the book-
You don't think they will?
No.
I don't think they're big on poetry.
Well, I was just wondering, like, some lawyer there is like, oh, one of our employees gave a large language model that we no longer allow anyone to use to his buddies from school.
Right.
And they just wrote a whole book with the poems.
Right.
Yeah.
Yeah.
But it's like, yeah, they might not love that.
But yeah, Dan has not gotten in trouble.
But that might just be because they don't know about it.
I mean, it's like they're in such a different.
At one point in the book, you talk about Blake Lemoine.
Yeah.
Yeah.
Because I interviewed a Washington Post reporter about when he got let go because he said at Google that the AI was sentient.
Right, and we interview him in the book, and he's basically like, you know,
he looked like a maniac when he made that pronouncement.
And then the New York Times this week had an article about some really respectable scientists
trying to figure out like a rubric for how to gauge the sentience of AIs going forward. So you now have
mainstream media acknowledging the possibility of some degree of sentience from AI, which would
have been like saying the earth is flat about a year and a half ago. So how does Dan feel about
everything? He's still pretty nervous? Dan texted me on Signal a few days ago the lyrics to a disco song from the
early eighties about robots. Okay. Well, look, I am hopeful that your
friendship with Dan that was forged when you guys were just little kids will somehow be a benefit to humanity
because you have written this book.
It is the craziest thing in the world that this happened.
Everyone I think has that one friend
who is always kind of ranting and raving
about something insane.
And you're like, that's my crazy friend.
I love the guy, but man,
he is really like off base in his belief system.
And then in my case, he was right on the money.
And that's the other thing is like every crazy thing that Dan told me would happen has happened in the order in which he told me it would happen and at the rate. So when he tells me that we're going to go from a monkey to
a man in terms of AI's capabilities in the near future, I mean, at a certain point, you have to
start betting on the horse that keeps winning. Yeah. Well, look, I always try to end these
Offline interviews on a high note, but it doesn't seem possible with this one. Simon Rich, thank you so much for coming by
and for telling us this wild story
and hope a lot of people start paying attention
who have the power to do something about it.
Yeah, not us, but thanks for having me.
Thanks for coming.
Offline is a Crooked Media production.
It's written and hosted by me, Jon Favreau.
It's produced by Austin Fisher.
Emma Illick-Frank is our associate producer.
Andrew Chadwick is our sound editor.
Kyle Seglin, Charlotte Landis, and Vassilis Fotopoulos sound engineered the show.
Jordan Katz and Kenny Siegel take care of our music.
Thanks to Michael Martinez, Ari Schwartz,
Amelia Montooth, and Sandy Gerard for production support.
And to our digital team, Elijah Cohn and Rachel Gajewski,
who film and share our episodes as videos every week.