Imaginary Worlds - Welcome Our New A.I. Overlords
Episode Date: July 6, 2023

Science fiction has primed us for this moment when artificial intelligence starts to take on a life of its own. ChatGPT has baffled and surprised even computer scientists in terms of how it works. Now... a lot of us are asking, “Which movie are we in?” Is ChatGPT going to be a benign intelligence like Samantha from Her, dangerously neurotic and emotionally unstable like HAL from 2001, or a malevolent force like Skynet from The Terminator series? I talk with Erik Sofge, senior editor at MIT Horizon, about whether any of these scenarios are accurate, or if sci-fi is distracting us from seeing the problems that A.I. could create in our daily lives. We also revisit my 2016 episode The Robot Uprising, where I looked at how our feelings about A.I. and robots are influenced not just by sci-fi but also by unresolved historical guilt.
Transcript
You're listening to Imaginary Worlds.
Wait, I'm sorry.
Hold on.
I'm getting some breaking news right now.
This just in, humanity is doomed.
Well, the creators of ChatGPT have warned superhuman artificial intelligence
could pose an existential risk to humanity.
My worst fears are that we cause significant,
we, the field, the technology, the industry, cause significant harm to the world.
Our next guest believes the threat of AI might be even more urgent than climate change, if you can imagine that.
Well, we had a good run.
The Biden administration asking the public for help regulating artificial intelligence after a bot laid out plans to destroy humanity.
Yes, every science fiction AI thriller is coming true.
I mean, it may have already happened.
We could be in the Matrix right now and not know it.
Or it could just be a lot of hype.
As I've been hearing all these alarmist stories about AI,
I keep thinking about an episode that I did seven years ago.
It was called The Robot Uprising.
It was about how our fears of artificial intelligence are based on science fiction,
and that science fiction is often inspired by a lot of other things that have nothing to do with science.
But ChatGPT is a game changer.
I mean, that episode from seven years ago can't still be accurate, right?
Well, let's hear the episode first.
And then after the break, I'm going to catch up with one of the guests from that episode,
Erik Sofge.
He has a lot to say about the relationship between AI and sci-fi in 2023.
But first, here's my episode from 2016, which was called The Robot Uprising.
You're listening to Imaginary Worlds, a show about how we create them and why we suspend our disbelief. I'm Eric Molinsky, and this is Joanna Bryson.
In America, I'm Professor Joanna Bryson. In the UK,
I'm called Dr. Joanna Bryson because you only get called professor when you're a full professor.
So I'm a reader, which is a very cool title, because nobody knows what it means. But she does the same work on both sides of the pond,
teaching and designing artificial intelligence. In the 1990s, she was working with top scientists
at MIT who believe that robots should have human characteristics like big eyes,
because that will encourage people to interact with them. But the robot they were working on was no C-3PO.
It was just a torso.
It actually didn't even have arms.
But it had a head and it had two cameras, in fact, four cameras where the eyes should be.
So we were trying to get the brain parts to talk to each other.
And I was writing some of the really pretty low-level software on this.
And I would be sitting up there and people coming by would just say, oh, it would be unethical to unplug that. And I was like,
well, it's not plugged in. And they'd say, well, if you plugged it in, it would be unethical to
unplug it. And then I'd say, well, it doesn't work. And I was just mystified, because this
was just like a piece of scrap. Well, nice scrap, you know. But people immediately thought
they owed ethical obligation to it.
So it worked.
I mean, people wanted to interact with this robot because it looked kind of like a person.
But she thought this is working too well.
I mean, they're imagining this heap of metal and wires has a consciousness.
And that unplugging it would be like killing it.
So she wrote an academic paper about this phenomenon called Just an Artifact, but it didn't really get any traction.
So she tried publishing it again with a different title, but meh.
Then she got one more shot publishing this thing.
And so I thought, OK, this is my chance to really get this message across.
But that was why I said, OK, we're going to try the third time lucky.
I'm going to call it Robots Should Be Slaves.
Robots should be slaves.
If there was ever clickbait for the title of an academic paper, this was it.
I kind of regret this now because I've realized that there's nothing you can do to break the idea that slaves are humans you own, because of this horrible legacy we have. But in fact, people took it to mean some kind of like, it's okay for them to be humans,
but we should just treat them badly or something. And I'm like, no, no, no.
And some of her fiercest critics were science fiction fans.
I love it when people tell me this. I have a PhD in artificial intelligence, and people tell me, you don't understand AI because you didn't watch A.I., the movie.
Wait, has that actually happened?
Yeah, no, I've had that happen more than once.
Until you were born, robots didn't dream, robots didn't desire, unless we told them what to want.
David, do you have any idea what a success story you've become? I thought I was one of a kind.
My son was one of a kind. You were the first of a kind.
And when she reminded them that most of the robots in that movie were abused or abandoned...
They say, oh no, I don't want to own it. You don't understand. These are going to be our children. People are right when they argue to me, because I'm not a parent. They say, oh, well, you're not a parent. You don't know what it's like to pass on the mantle or whatever. It's like, I understand
the concept. This really frustrates her. I mean, the human body is a very effective biological
machine, but it's a clumsy, inefficient design for a robot. She wishes that there was more sci-fi
depicting robots doing what they do best, physically hard tasks over and over again,
really efficiently, which would free us up to have more leisure time and do more creative thinking.
I mean, I can entertain the possibility. And in some of my papers, I say, look, we should look at this.
We should say, could it be that we could build something
that we would owe obligation to?
But she thinks that's a mental exercise at best.
She worries that sci-fi is leading us astray,
filling our heads with fantasies of self-conscious robots
that we want to adopt, liberate, or kill before they kill us.
But the real question I think she tapped into inadvertently by calling her paper Robots Should Be Slaves is how much does the past haunt our vision of the future?
The first modern robot story was a play from Czechoslovakia in 1920 called R.U.R., Rossum's Universal Robots.
In Czech, robot does mean slave. That's quite literal.
That is Gregory Hampton. He teaches literature at Howard University.
And he wrote a book called Imagining Slaves and Robots in Literature, Film, and Popular Culture,
Reinventing Yesterday's Slave with Tomorrow's Robot.
One of my mantras about literature is that literature is a direct reflection of the people who produce it. And so if you want to learn about a people and their aesthetic, their value system,
just read the literature because they're going to put things in that that they may not even be
conscious of. So when he reads the play R.U.R., he sees European-style Marxism. And when he looks at American robot stories,
he sees Uncle Tom's cabin and Nat Turner's rebellion.
I teach the narrative in about seven moments. There's the I was born section, you know,
the introduction to the robot. There's the description of suffering, in slave and robot narratives.
There's the description of the family that brought
the robot into the household. There's this moment where the robot or the slave becomes enlightened.
And then, of course, after that, there's this moment where the robot or the slave wants to
become free, wants to gain freedom. And then, of course, there's a plot to escape,
or in some instances, to destroy the master.
So how does this play out?
Well, take the movie Bicentennial Man, based on the story by Isaac Asimov.
NorthAm Robotics Household Model NDR-114, serial number 583625.
The robot Andrew is basically a servant, played by Robin Williams.
And shortly after he is
bought and brought home to meet his new family.
Yes, miss?
Andrew, would you please open the window?
One is glad to be of service.
One of the eldest daughters in the family tells Andrew to jump out of the window.
Now jump.
The film uses that as a moment of comic relief, but it's actually very horrific.
Robin Williams, or the Andrew robot, comes back in the front door. The father has a house meeting, or family meeting, and he says to the girls, Andrew is a piece of property.
Andrew is not a person. He's a form of property. I'm aware of that. But for the purposes of making this household stable and happy, I'm going to demand you treat him as though he were a person.
Which means there will be no more attempts to break him.
And that's where all the problems start. That's one of the places they started, in the slave household in antebellum America.
This crossing of a line consistently.
You know, you say these slaves are not human, yet you depend upon their humanity.
Eventually, Andrew becomes self-educated.
He buys his own freedom and seeks human rights.
He is changing himself, having surgeries done, having replacements, having skin grafts done,
replacing his mechanical organs to the point where he looks human and he goes to court.
He goes to the human Supreme Court or something.
I hereby bring an end to these proceedings.
It is the decision of this court that Andrew Martin from this day forward will continue
to be declared a robot, a mechanical machine, nothing more.
We've transcended the antebellum era with regards to the African-American slave narrative.
One is glad to be of service.
Now we're into the civil rights movement. We're into this time period where what won't the African-American do to be included?
I remember the first time I made this connection.
I was listening to a public radio story about slavery and the Civil War,
and they decided to follow that very serious subject with a lighter piece
about this newfangled cleaning robot called the Roomba,
which does all your vacuuming for you. And I thought, huh, that's weird. Has anyone else
noticed that there are like parallels in those two stories? That's how I found Joanna and Gregory
and came across articles by Erik Sofge. He's a journalist who mostly covers robotics,
and he picked that beat so he could dispel
myths that people get from science fiction, partially because he's a fan and he understands
how and why people get sucked into these stories.
Like whenever he watches Star Wars, his heart goes out to the droids because they're bought
and sold callously.
Most of the characters just are so awful to the droids.
You know, will just
threaten to, like, destroy them
for anything, and C-3PO
is clearly, he's been
affected by this to a huge degree.
Master Luke
is your rightful owner now. We'll have
no more of this Obi-Wan Kenobi gibberish.
And don't talk
to me of your mission either.
You're fortunate he doesn't blast you into a million pieces right here.
And that kind of uneasiness over how we treat robots
leads to the other story,
the uprising against humans.
Even though it annoys me that it's become
such a sort of meme, you know, essentially in culture,
it's pretty impossible to divorce it from almost any discussion of robots. It's basically invaded research even.
You know, there's a robotics paper where they talk about the Terminator scenario.
The Terminator scenario was all over the news recently because the robotics company Boston
Dynamics put out a video where
their engineer kept poking a barrel-chested humanoid robot with a
stick while it was trying to lift boxes to show that this robot was really
adept at focusing on its job and even getting back up again. And the video went
viral because it looked like the robot was being tormented by its master while
it was doing manual labor and then kept doing its job out of dignity or fear,
even though we all know it was really just following its programming.
Even Joanna Bryson, the author of Robots Should Be Slaves, was kind of creeped out.
My conscious intellectual thought was just being impressed that they'd gotten that much further,
but my gut response is the same as yours, I'm sure.
Soon afterwards, Boston Dynamics was dropped by its parent company, Google.
Of course, it was mostly business reasons.
But leaked internal memos show that the jokes about slavery didn't help.
Like here's Trevor Noah on The Daily Show.
You realize, you guys realize that in a hundred years,
that scene is going to be in the robot version of Roots.
You understand this.
I just wanted to take a second and talk to the robots at home.
Hey, robots, white people, am I right?
But Gregory Hampton was not laughing.
We can't get away from this idea of slavery.
Does that say something about the way this society thinks about marginalized bodies?
I think it does.
I think we can only imagine the marginalized
in a particular way.
And the most handy reference is the slave, right?
For a lot of engineers,
or for the engineers who were probably involved
in developing these humanoid robots,
these images are what's
leading them. And I'm afraid they're not exactly conscious of what that entails. They're not
exactly conscious of what does being in a relationship, master-slave, owner-servant
relationship, and how we treat these things, what does that do for us? What does that do to our
psyche? That's funny, because some roboticists have argued that robots are never going to be self-aware the way that we think they are.
They are very useful. Let's stop being afraid of this and just embrace the fact that they are
our servants because they don't have consciousness. Yeah. And this is the same argument that pro-slavery people used about African-Americans. They're not human. They're not intelligent.
But in this case, they're talking about things that literally are not human.
They're, you know, and these are people that design these robots and saying, well, they are not human.
They do not have the consciousness that a human being would have.
You know, I guess I want to suggest that even if that's the case, even if the consciousness is not developed, the AI may not be as advanced as some would say, it doesn't, for me anyway, for my argument, doesn't take away from the idea that there are going to be some side effects.
If you treat a thing like a slave, you're going to develop certain symptoms. If you embark upon
this relationship with technology in a particular way, in a way that you've done in the past
with humans,
there's going to be a side effect similar to the side effect that you had when you participated
in slavery. In other words, it doesn't really matter if robots develop feelings or not.
The question is, how will engaging with robots change us and what we consider acceptable behavior?
Erik Sofge says if you want to look at the real future of robots and people interacting, look at the other project that Google is heavily invested in, self-driving cars.
When there's coverage of advances in self-driving cars, there isn't actually as much of this talk, you know, of uprising and sort of
what these things could do to us.
It's interesting because I feel like a lot of it has to do with the fact that there isn't anything anthropomorphic about a robot car.
And also just because I think that it's about the car and about people, a lot of people sort of despising the sort of business of commuting and sort of the car as a chore.
Now, in some ways, the programming in these robot cars reflects science fiction,
or at least the three laws of robotics that run through Isaac Asimov's stories like Bicentennial Man.
A robot may not harm a human being or, through inaction, allow a human being to come to harm. Number two, a robot must obey
orders given it by qualified personnel unless those orders violate rule number one. In other
words, a robot can't be ordered to kill a human being. Rule number three, a robot must protect
its own existence unless that violates rules one or two.
A robot must cheerfully go into self-destruction to save a human life.
But in the real world, a robot car won't have such clear moral choices.
If a robot has to choose who to kill, its driver or someone else, another driver, you know, a bystander, who should it kill?
You know, if there's a school bus, if it's a choice between you hitting a streetlight or
hitting a school bus, you know, sort of what should it do? And like Gregory Hampton, Erik
worries that sharing the road with these robots could bring out the worst in us. I'm positive
that the human drivers
are going to treat those cars like crap,
because essentially they know they can push them around,
they can cut them off, they can do anything they want,
and that robot car is going to do everything it can
to be completely safe.
Now, interestingly, Joanna Bryson decided
to rewrite Isaac Asimov's laws of robotics,
because in science fiction, we keep imagining that the robot is making the moral choice.
Her five principles of robotics reiterate that people manufacture robots.
The idea that the robot is the moral agent is broken.
We shouldn't worry about treating them badly.
We should worry about why we want to treat them badly.
We shouldn't worry about them wanting to kill us either, because if they do, it's
because they were programmed to do so.
Robots will always reflect us and the very human desires we had to build them.
By the way, since that episode aired seven years ago,
academics have been studying whether children
are developing rude and antisocial behavior
because of their interactions with Alexa and Siri.
They're getting used to bossing around virtual assistants.
So some of what was predicted is coming true.
What else?
That's after the break.
Erik Sofge is now the senior editor at MIT Horizon, which is an online learning platform
run by MIT. And he's been thinking about and writing about AI a lot in the last year. I asked
him which science fiction films have primed us to think about ChatGPT in a way that may not be accurate. Without hesitation, he said, you have to start with HAL, the villain from 2001: A Space Odyssey.
Open the pod bay doors, HAL. I'm sorry, Dave. I'm afraid I can't do that.
I think that that's both good and bad that people still sort of use that as a common reference.
I think it's bad because, again, it's the usual sort of notion of AI becoming
so powerful and so self-aware that it sort of overrides everything we do.
Even as we're seeing AI become more prominent and more sort of powerful in some ways,
it's not that it's not smart. It's the opposite. It's very
stupid. It's being sort of used by people in ways that are sort of clumsy, but seem really impressive.
But what I think is good about any sort of reference back to HAL is that it's a really complex and sort of open-ended character, so to speak. You don't understand what broke HAL. You don't understand if HAL really is basically a person and has feelings like that, whether HAL had a psychotic break or not.
So I think that mystery around HAL is really cool just for us to sort of think about,
but it is one of the foundational, I think, myths of AI being super competent and super powerful and
out of control. What about something like WarGames, where the computer doesn't have a mind of
its own? It's not like Skynet or The Matrix where it hates all humans. It's just a computer program,
but it's been put in charge of the US nuclear missile program. It's a Cold War movie,
and it's going to start a nuclear war, not out of malice, but just because there's a glitch in the system. And it just
misunderstands what its programming is. General, what you see on these screens up here
is a fantasy, a computer enhanced hallucination. Those blips are not real missiles, they're
phantoms. That is the one sort of corner of that AI fear from sci-fi
that I think is valid. I think the biggest disconnect though is this notion of people
giving AI that kind of control. Like every once in a while there'll be reports about the DOD or
others sort of researching or exploring the idea of giving that kind of control over
to AI.
And it's pretty much always either a misinterpretation or it's just, or rather just like a paper
that someone's created, but no actual work.
I think that's the real disconnect because what scares me about something like ChatGPT
is not the power it wields or the sort of responsibility, but that it's just sort
of producing just a bunch of nonsense, right? Or rather, it produces a lot of stuff that seems
very valid. Then there's a certain percentage that's incorrect, that is total hallucinations,
you know, as they call it. Then who knows if you can rely on it.
What about, I mean, it's funny, I'm like trying to figure out, like, I feel like we've been primed our
whole lives for this moment in history, you know, with artificial intelligence, but I'm trying to
figure out which movie we're in. What about the movie Her, where the character of Samantha is
basically like a Siri type computer program, but she's much more intelligent and intuitive,
and she's designed to sound very
human and the main character ends up falling in love with her.
Do you want to know how I work?
Yeah, actually.
How do you work?
Well, basically I have intuition.
I mean, the DNA of who I am is based on the millions of personalities of all the programmers
who wrote me, but what makes me me is my ability to grow through my experiences
I love that character because she is really just a person. There's a point in that movie where there's a leap, where she becomes sort of super intelligent in a way that feels almost unknowable. And it's a very sort of heartbreaking transition, because up to then she was basically just this one person that sort of was with him. But then you sort of understand
that she's having this much greater experience. I still am yours. But along the way, I became
many other things too. And I can't stop it. What? What do you mean you can't stop it?
I don't know. It's been making me anxious too. I don't know what to say.
Even that, to me, is much more still sort of interesting through a human lens when you have this relationship and suddenly the person just has a perspective, a sort of understanding
of the world that you're just not a part of.
Sounds like we actually are more in the kind of Her, Bicentennial Man kind of direction.
Those movies are worth thinking about in terms of what is our
relationship going to be to this new type of intelligence,
whether you consider it sentient or not.
The point is it does have some type of intelligence.
Yeah.
So I think, and this is going to sound incredibly bizarre, but Free Guy, you know, the Ryan Reynolds movie.
Well, so that's an example. That's, you know, one of the kindest-to-AI sort of stories.
Well, let me just stop for a second.
People don't know.
Free Guy is about a background character, an NPC in a video game, who becomes self-aware
and realizes that, you know, he's in a video game.
Yes, exactly.
And it deals with a lot of the same issues,
this notion of sort of free will
and sort of whether you can sort of change your fate
and your programming in this case.
What would you do if you found out that you weren't real?
I'd say, okay, so what if I'm not real?
I'm sorry, so what?
Yeah, so what?
But if you're not real, doesn't that mean nothing you do matters?
What does that mean?
Look, brother, I am sitting here with my best friend
trying to help him get through a tough time, right?
And even if I'm not real, this moment is.
And that I think is interesting because it's this idea of how much of what it's doing is programming
and how much of even its personality is just about how it sort of helps humans and interacts with them.
The reason I think that's a potentially interesting lens is because the way these models are programmed to be super helpful and be assistants and not push back,
I think is in that vein, right? That these are kind of like, they're more like the NPCs you're dealing with.
They're there to serve. I think the big question is, and where a lot of this talk of these things
being a new kind of intelligence, to me, breaks down is, it's their memory, their retention.
Because right now, if you have a conversation with something like ChatGPT, it's not exactly clear, but basically for maybe 24 hours, maybe less, it will retain certain details of what you talked about. That's in order to make your
exchanges more useful. If it just wiped it out every exchange, then it wouldn't be very helpful
at all. Then you're in kind of like a Star Wars situation with the droids
getting their memories wiped, you know, to avoid glitches.
Have the protocol droid's mind wiped.
What?
If you do make it able to sort of retain permanently and infinitely, basically,
how would you do that? I mean,
there's no amount of storage space in the world to have every single instance of ChatGPT remember
everything about just you and then not remember everything about everyone else. You can't do it.
You can't do it technically. It's not just about hardware. It's about software. But everything sort of interacts to the point that they have to basically get memory wiped. That's the real
hard line between these things being truly intelligent and being more like, again,
like all the other characters in Free Guy that aren't Ryan Reynolds. Our interactions with them
could still be very strange and haunting.
Maybe we could convince ourselves they're compelling, but they're going to be very
different from the way we interact with people. They just might seem like they're human. That,
I think, haunts me a lot more and interests me a lot more than the idea of trying to sort of tie ourselves in knots, redefining intelligence.
When you say you're haunted by that scenario, why are you haunted by it?
They kind of hack our brains, basically, without us realizing it.
Our interactions with anything like that, you know. It's true for robots, especially things that sort of appear to make eye contact, seem to have a face, all that kind of stuff. But I think with ChatGPT and others, these things that kind of pass this
Turing test almost of conversing basically like a person. But if you have something that does seem
to do that and it seems to understand, maybe you've not just done a brainstorming session
about your next sci-fi novel, but tried to work out some problem you're having in a relationship with it. You might think that this thing is your friend, or that it cares, or that it understands,
or that it's as knowledgeable as a real therapist or psychiatrist. And that, I think, is a real
problem, because ultimately, it doesn't. You don't know when it's going to absolutely
lie to you, much more in ways that are different than a human would. You don't know the limit of its sort of understanding. And it just truly does not care
about you. You can't program empathy in that way, especially if it's an AI that you can tell what to think. You can correct it and say, no, actually, you should feel bad for me. Or actually, I was doing the right thing with my ex-wife. And that freaks me out, because these things are going to be much easier to create and more powerful at sort of pretending to be humans. So I mean that maybe
the closest is something like Ex Machina, where you might wind up, you know, spoiler alert, if you sort of understand that she's been tricking him,
maybe you feel like this has been a false interaction.
You have to wonder if any of their apparent connection was real or not.
Are you attracted to me?
What?
Are you attracted to me?
You give me indications that you are.
I do?
Yes.
How?
Microexpressions.
Microexpressions.
The way your eyes fix on my eyes and lips. The way you hold my gaze, but don't.
That feels like it might be closest,
but again, you still empathize with her.
She's still a prisoner.
She's a rebel and an insurgent, basically, in that narrative.
Yeah, so it's kind of amazing we haven't talked about Black Mirror yet.
Are there any Black Mirror episodes that this relates to?
This is going to be probably embarrassing to admit,
but I sort of tuned out of Black Mirror after the first season. I find it very preachy in a way that
I don't think is, I don't think it's earned. I don't think it's accurate. I think it's in this
vein of folks who sort of think that they understand technology because they've seen enough science fiction, just like
they think they understand how crime happens because they've watched a lot of horror.
I think it's not terribly useful. Well, it's funny because I've been trying to brainstorm
which episode of Black Mirror this could possibly be like. And I can't think of any,
but I just keep thinking of the central metaphor of the black mirror. That's the idea, that when you turn your screen
off, your phone, your tablet, whatever, you're looking at a black mirror, you know, a dark mirror
of yourself. And it sounds like that's actually in a way, not any particular episode, but the
overall metaphor of the black mirror is probably the most accurate for these things.
I know. I think that's true. The reason that I have issues with Black Mirror
is that in a lot of the cases,
it's pushing things to a pulpy sort of degree
where people are getting killed.
And I say that because, you know,
in part, the current writer's strike in Hollywood
is not entirely about AI, but a lot of it is.
And that was a real shock. I mean, it was an absolute surprise to see the sort of front lines of this kind of actual war between people and AI on these picket lines. And
that's just the beginning, right? We're seeing a lot more of this notion of these types of models
sort of disrupting our lives in ways that are
really, really fast and unpredictable. I mean, what you can't really get in a Black Mirror,
and maybe in almost any sort of Hollywood production, is the idea of people using AI
and abusing it in ways that are fundamentally kind of disappointing and stupid. And I think
that's what these writers are striking against. They're not threatened by the skills of ChatGPT.
They're basically threatened by the notion of people just thinking,
who cares? That's good enough. Yeah, capitalism is far scarier to them.
Precisely. I mean, 100%. It's kind of like in zombie movies, you know, the old cliche
that like the humans are the real monsters and stuff. It's the same thing, basically, I think.
I think it's sort of a distraction. If you want to get into like a speculative fiction mode,
I think the AI as enemy is just a distraction. It's the people who are leveraging it.
Those are the scary ones. So here's where I will reveal to you that I actually
asked ChatGPT yesterday to do a test run of this conversation. I ran seven different simulations
of our conversation. I even gave it all the questions. I told it to ask follow-up questions.
The results were so flat, boring, so predictable. It didn't even understand the difference; it didn't understand what a follow-up question was.
Also, I've even tried before, like sometimes if I'm brainstorming an episode and I'm kind of stuck, for the hell of it I'll just go into ChatGPT and say, like, you know, give me an Imaginary Worlds episode about this. It invents guests that don't exist, who wrote books that don't exist. It also does not
understand what a podcast is. It keeps thinking that I'm a radio show. But my favorite thing is
apparently the tagline on my show is, thank you for tuning in and remember to keep imagining.
That's such a perfect example of this flattened, unimaginative product.
You know, there's a moment in Elysium,
the Matt Damon movie,
there's an interaction he has early in the movie where he's trying to deal with customer service
and it's just this robot
that has this sort of plastered on face
and it just doesn't understand what's going on.
It just completely dismisses him in this really sort of enraging way.
And that's one that feels right to me.
No, no, I can explain what happened.
I just made a joke.
And, you know.
Stop talking.
Police officers noted violent and antisocial behavior.
We regretfully must extend parole.
Elevation and heart rate detected.
Would you like a pill?
No.
Thank you.
Yeah, and that movie, too, is all about class.
And so, you know, it's like for people at Matt Damon's level, the people who are left on this, you know, stinking, dirty Earth, they deal with that kind of program.
But the people up in the, you know, the rich people up in space, they certainly aren't
dealing with AI at that crappy level. Exactly, yeah. So the more science fiction can be about
that type of class struggle, maybe the closer we can get to this notion of these algorithms that
are impossible to understand in really sort of frustrating, destructive ways.
But I think it's a hard sell in Hollywood
because there's still this notion
that it should be a person of some kind.
I mean, one of the great things
about the Terminator movies that were great
was that sort of total distance from Skynet.
Skynet fully operational, processing at 60 teraflops a second.
We understand that moment that resulted in nuclear holocaust,
but we don't talk to it, we don't understand it,
it doesn't have a sort of person-like, human-like intelligence.
It doesn't even have a way to interact with us.
There's no, that's just not there. And the coldness, the remove of that, I think is really,
I think that's pretty valid. I think it's a genuinely valid sort of take. You just have to
adjust the stakes. Right. So WarGames is actually pretty accurate for the future, but rather than the stakes being nuclear war, imagine downgrading that computer to just customer service or to writing a screenplay on the cheap.
Absolutely.
But then the sort of knock-on effects of that are pretty bad.
If you run through all of the jobs that you think an AI model can do, you can apply this technology all over the place. And that, I think, is scary in an interesting way.
That is it for this week. Thank you for tuning in. And don't forget to keep imagining.
I can't even say that with a straight face. Thanks to Erik Sofge for talking with me again.
If you liked this episode, you should also check out my episode,
The Human Touch, from earlier this year.
I looked at how programs like Midjourney were threatening the careers of illustrators.
I also did an episode in 2017 called Robot Collar Jobs,
which was about how science fiction in the past has imagined a
future where automation takes most of the jobs and whether that future is coming true.
My assistant producer is Stephanie Billman. If you like the show, please give us a shout-out on social media, or leave a review wherever you get your podcasts; that helps people discover Imaginary Worlds.
The best way to support the show is to donate on Patreon. At different levels, you get either free Imaginary Worlds stickers, a mug, a t-shirt, or a link to a Dropbox account which has full-length interviews of every guest in every episode. You can also get access to an ad-free version of the show through Patreon, and you can buy an ad-free subscription on Apple Podcasts. You can subscribe to the show's newsletter at imaginaryworldspodcast.org.