99% Invisible - 382 - The ELIZA Effect
Episode Date: December 11, 2019
Throughout Joseph Weizenbaum's life, he liked to tell this story about a computer program he'd created back in the 1960s as a professor at MIT. It was a simple chatbot named ELIZA that could interact with users in a typed conversation. As he enlisted people to try it out, Weizenbaum saw similar reactions again and again -- people were entranced by the program. They would reveal very intimate details about their lives. It was as if they'd just been waiting for someone (or something) to ask. ELIZA was one of the first computer programs that could convincingly simulate human conversation, which Weizenbaum found frankly a bit disturbing.
Transcript
This is 99% Invisible. I'm Roman Mars.
When Joseph Weizenbaum was still alive, he liked to tell the story about a computer program
he'd created back in the 1960s as a professor at MIT. It was a simple chatbot named Eliza
that could interact with users in a typed conversation. Weizenbaum asked his secretary to try it out.
I asked her to my office and sat her down at the keyboard, and then she began to type, and of course I looked over her shoulder to make sure that everything was operating properly.
Weizenbaum watched as the secretary typed a message to Eliza and then as the program responded to her.
After two or three interchanges with the machine she turned to me and she said, would you mind leaving the room, please?
The secretary was so entranced by this chatbot
that she didn't want to stop exchanging messages.
Weizenbaum's secretary fell under the spell of the machine.
Weizenbaum saw this happening over and over again.
People would reveal very intimate details
about their lives to the program.
It was like they'd just been waiting for someone,
for something to ask.
Eliza was a simple computer program.
That's producer Delaney Hall.
The way it worked was it would recognize a keyword
in the user's statement,
and then reflect it back in the form
of a simple phrase or question.
When Eliza got confused or couldn't parse a statement,
it would fall back on set phrases like, please go on or tell me more.
Weizenbaum had programmed Eliza to interact in the style of a psychotherapist, and it
was pretty convincing. It gave the illusion of empathy even though it was just simple
code. Here is an exchange, dramatized in an old documentary segment. It begins with the human.
Men are all alike. And then the computer program answers. In what way?
They're always bugging us about something or other. Can you think of a specific example?
Well, my boyfriend made me come here. Your boyfriend made you come here.
He says, I'm depressed much the time.
I'm sorry to hear that you're depressed.
It's true. I am unhappy.
Do you think coming here will help you not to be unhappy?
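To make the mechanics concrete, here is a minimal sketch in Python of the keyword-and-reflection approach described above. The rules, phrases, and pronoun swaps are illustrative stand-ins, not Weizenbaum's original ELIZA script.

import random
import re

# Swap first-person words for second-person ones ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "i'm": "you're"}

# Keyword patterns and the templates used to mirror the statement back.
RULES = [
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why do you mention your {0}?"]),
    (r"(.*)\?", ["What do you think?", "Why do you ask that?"]),
]

# When nothing matches, fall back on a set phrase, just as ELIZA did.
FALLBACKS = ["Please go on.", "Tell me more."]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    # Find the first keyword pattern that matches and mirror the statement back.
    for pattern, templates in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return random.choice(FALLBACKS)

print(respond("I am depressed"))  # e.g. "How long have you been depressed?"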
Eliza was one of the very first computer programs
that could convincingly simulate human conversation.
And Weizenbaum got weirded out by the way that people reacted to it.
He didn't expect them to be so captivated.
He feared that they were being fooled, that they didn't really understand it was just
a bunch of circuits on the other end.
And he wondered if it was so easy to mimic a sense of human understanding, then what
did that say about human understanding?
Weizenbaum started raising these big, difficult questions at a time when the field of artificial
intelligence was still relatively new and mostly filled with optimism.
Many researchers dreamed of creating a world where humans and technology merged in new ways.
They wanted to create computers that could talk with us and respond to our needs and desires.
But Weizenbaum would take a different path.
He would begin to speak out against what he saw
as the eroding boundary between humans and machines.
And he'd eventually break from the artificial intelligentsia,
becoming one of the first and loudest critics
of the technology he helped to build.
People have long been fascinated with mechanical devices that imitate humans.
Ancient Egyptians built statues of divinities from wood and stone and consulted them for advice.
Early Buddhist scholars described precious metal people that would recite sacred texts and weep.
Oh, the Greeks had a wonderful attitude toward robots and machines. They welcomed them.
This is Pamela McCorduck, and she wrote Machines Who Think, one of the first modern histories of artificial intelligence.
And she says that in Greek mythology, the god Hephaestus, who was the god of blacksmithing,
loved to make robots.
He created all kinds of robots,
and one to help him out in the forge,
and all kinds of party help.
You know, things would roll into the parties of the gods
with trays and whatnot.
And there's a wonderful description in Homer.
They are like lovely young women,
and there is intelligence in their hearts.
Flash forward nearly 2500 years to the 1940s.
In the mid-1940s, a different kind of tool was invented.
A tool for extending certain of the powers of man's mind. This tool is the electronic computer.
As modern computers became more powerful and widely available, people began to believe
that this fantasy of intelligent machines that could think and communicate might actually
be within grasp.
The idea was most famously articulated by the British mathematician Alan Turing.
And he wrote this very famous paper in 1950,
called Computing Machinery and Intelligence,
which begins with this famous line,
can machines think?
This is Brian Christian.
He wrote The Most Human Human:
What Talking with Computers Teaches Us
About What It Means to Be Alive.
Turing is already sort of seeing ahead
into the 21st century and imagining what if someday
we do manage to build a machine that can actually quote unquote think, how would we know that
we had built such a machine?
This is a hard task because it's difficult to even define what it means to think, let
alone prove that a computer is doing it.
But Turing approached the question like the engineer he was, proposing a simple test.
He said that if a computer could regularly fool people into thinking it was, in fact,
a human, it would pass.
Turing predicted that this would eventually happen so consistently
that, as a result, we would just, as a society, he says, come to this point where we would speak of machines as being intelligent without expecting to be contradicted.
The Turing test brought up really profound questions.
Ones we've been thinking about since the days of Greek robots and precious metal people.
What does it mean to have a mind? How do you kind of bridge that gap from one mind to another?
And is language kind of the best
or only means that we have for doing that?
How does empathy work?
What are the limits of empathy?
They're not fundamentally technological questions.
They're fundamentally human questions.
And there are questions that would come to occupy
Joseph Weizenbaum, the creator of Eliza.
Before turning against artificial intelligence,
Weizenbaum helped develop it.
He was part of an early generation of AI researchers.
Back in the 50s, they started exploring
how to give computers new human-like powers.
They programmed computers to reason, plan, and perceive.
They created programs that could play checkers,
solve word problems, and prove logical theorems.
The press at the time described their work as astonishing.
Herb Simon, one of the most prominent AI researchers at the time,
predicted that within 20 years, by the 1980s,
machines would be capable of doing any work a person could do.
Just like today, these advances made people pretty anxious.
For millennia, humans had defined themselves
as rational creatures.
It was our ability to reason
and use logic and play chess that made us special. And it is exactly those things that the computer came in
and, you know, within a handful of years, was better at them than any human could ever hope to be. And so
it really pulled that rug out from under us. But despite all the big leaps forward, there was one realm in particular where computers
remained quite dumb.
They struggled to master human language, which is known in AI as natural language.
Natural language processing or understanding, if you will, is probably the hardest problem
for AI.
Melanie Mitchell is a computer scientist and AI researcher.
When we communicate in language, we're using all of our thought processes.
We're using our vast knowledge of the way the world works, about other people and their
goals and motivations.
We're disambiguating words.
It's really almost equivalent to cognition itself.
So as an example of this, Mitchell says, think about this sentence.
A steel ball fell on a glass table and it shattered. If you ask, what does "it" refer to?
Humans will understand immediately. Of course, we know that if a steel ball falls on a glass table, the glass table shatters.
But that's because we know about what glass is, what steel is, we know about things falling
on other things, and we know a lot about the way the world works.
But computers don't know that stuff.
Unless they're programmed to, they don't actually know about glass and steel and gravity.
And actually programming all that information into a computer is really labor intensive
and difficult because that knowledge is mostly subconscious for us.
It's so intuitive as to be almost invisible.
And this is something that comes up again and again in natural language processing systems
is that they don't have the kind of
knowledge about the world that we humans have, and so they make mistakes.
This clumsiness with human language meant that early chatbots built in the 1950s and
60s were tightly constrained.
They could converse about some very specific topic, like, say, baseball, so you could ask
about Joe DiMaggio's batting average in 1949,
and the computer would tell you.
And by limiting the world of possible questions and answers,
researchers could build machines that passed as intelligent.
But talking with them was like having a conversation
with Wikipedia, not a real person.
I was very aware of the notion that he was doing something with computers that gave these computers
a power to understand that was leaps and bounds above anything that had come before. Naomi
Weizenbaum is Joseph Weizenbaum's youngest daughter. She didn't know a lot about the specifics of his work when she was growing up, but she
remembers his mustache, she says he always had one, and that he was serious and studious,
and he spent a lot of time in his office at home.
My mother, she'd call me downstairs and say, okay, time to go walk the dad, and I had to
go down and fetch dad out of his study and ask him
to come for a walk with me.
She also remembers her dad as someone who couldn't help but see the world from a critical
perspective.
I think my father was very deeply an outsider right from the very beginning and that when
he went to MIT, it was no exception.
Shortly after he arrived at MIT in the 1960s,
Weizenbaum had an insight.
He realized he could create a clever workaround for the natural language problem.
He could build a chatbot that didn't really need to know anything about anything.
It wouldn't spin out facts. It would just reflect back at you, like a mirror.
You know, the cocktail party conversation, for example.
This is Joseph Weizenbaum in an archival interview.
Where someone says something to you that you really don't fully understand,
but you are, in fact, able to give a response, which appears appropriate,
and in fact, the conversation continues for quite a long time.
And we do that all the time.
But Weizenbaum still had to create a framework in which that conversation would happen.
A style of interaction the program could follow. Weizenbaum had long been interested in psychology
and he recognized that the speech patterns of a therapist might be easy to automate.
Because when you think of a therapist, certain phrases come to mind.
Like, and how does that make you feel?
And so Weizenbaum decided to create a bot that, like a therapist, would listen, pick
out keywords, and then mirror statements back to the user.
How dad explained it was that it was kind of simple, simple psychology, more or less
very similar to a parent, that you pick out one element of what the
person said and you ask them to expand on that.
Joseph Weizenbaum was very clear.
He didn't intend for Eliza to be an actual therapeutic tool.
It wasn't even programmed with information about human thought or emotion, but nevertheless
Eliza seemed to understand.
And as Weizenbaum watched people like his secretary
converse with the program, he quickly realized
they weren't just having simple exchanges.
They were having meaningful conversations with the machine.
People began having these really long,
kind of soul-bearing conversations with this program
and reporting, you know, a meaningful therapeutic experience.
And this, for him, was horrific.
Brian Christian again.
He talks about, you know, I would have assumed that, you know, some sort of meaningful
experience like this would require someone to be present on the other side of the conversation
and to actually invest something. So the fact that the merest mechanical parody of this process can
still apparently produce something meaningful for people, he found really
unsettling. But while Weizenbaum was horrified by his own creation, others saw
Eliza as an important breakthrough and a real opportunity.
Before coming to MIT, Weizenbaum had spent time at Stanford, where he became friends with a psychiatrist
named Dr. Kenneth Colby. Colby had worked at a large underfunded state mental hospital,
where patients maybe saw a therapist once a month. And that was why, when he encountered even this rather crude caricature, as
Weizenbaum put it, of the therapeutic process, he thought, boy, I can put some
therapeutic knowledge into this and make it happen.
Colby started promoting the idea that Eliza
might actually be therapeutically useful
and the medical community started to pay attention.
They thought maybe this program or other programs like it
could help expand access to mental health care.
And maybe in some ways,
it could even be better than a human therapist.
It would be cheaper and people might actually speak
more freely
with a robot.
Yeah, I can tell this bunch of tubes anything I want.
I can tell the truth.
We feel free to open ourselves to what we think
is a non-judgmental machine.
Many therapists claim there has to be
a live warm human body present for a therapeutic
change to take place. Again, that's an opinion that's not a fact.
Here's Colby in a news interview years later.
If you say to the program, I'm feeling quite depressed,
I would sympathize with you and so will the machine.
So the idea of the robot therapist started to take off.
You know, in the late 60s, early 70s, you have these peer-reviewed psychology journal articles
saying, you know, this represents a potential breakthrough
for therapy.
The famed scientist Carl Sagan wrote about the idea in 1975,
imagining a network of psychotherapeutic computer terminals,
kind of like phone booths.
You walk to a city corner, put in a nickel,
and you talk to this therapeutic robot for a
few minutes to kind of take your mind off of things, so it's very warmly received by the
medical community at large.
And while the idea of therapy terminals on every corner never materialized, people who
worked in mental health would continue to experiment with how to use computers in their work.
Colby, for instance, went on to create another chatbot called Perry,
which simulated the conversational style of a person with paranoid schizophrenia.
He later developed a program called Overcoming Depression.
Again, for Weizenbaum, he felt like he had sort of let this genie out of the bottle
and there was kind of no going back.
The real grief, or the real controversy,
finally, was over the claim that Eliza was of therapeutic significance.
You know, my God, could people really believe that?
And what are we doing?
Weizenbaum objected to the idea that something as subtle, intimate, and human as therapy could be reduced to code.
He worried that if machines snuck into the therapist's office, then where else might
they end up?
He began to argue that fields requiring human compassion and understanding just shouldn't
be automated.
Weizenbaum also worried about the same future that Alan Turing had described.
One where chatbots regularly fooled people into thinking they were interacting with a
human.
Weizenbaum would write of Eliza, quote, what I had not realized is that extremely short
exposures to a relatively simple computer program could induce powerful delusional thinking
in quite normal people. The whole thing is a con job.
The whole thing is, it's very much like fortune telling.
It's an illusion-creating machine and all that sort of thing.
Weizenbaum quickly went from someone working in the heart of the AI community at MIT
to someone preaching against it.
And his friend Kenneth Colby became one of his first targets.
He began attacking Dr. Colby in public, in professional meetings.
And he would say this is a wicked thing to do.
It should not be done, and he should be punished for doing it.
Over time, Weizenbaum escalated his arguments.
His family had escaped Nazi Germany in 1936, and sometimes
when critiquing AI, Weizenbaum would invoke the Holocaust. He worried powerful technologies
could be abused by governments and corporations. Here's one of his other daughters, Miriam Weizenbaum.
These happen to be incredibly powerful tools, and the really important question is, for whom are they being used?
What's the purpose of them?
Who benefits?
Who's subject to them?
Who controls them?
Who doesn't?
At that point, he began extending that and saying all artificial intelligence is questionable
and probably wicked.
It shouldn't be done.
And, uh, oh dear, he was really swimming upstream on that one.
The computers are coming.
Over the next few decades, AI allowed for automation not just in therapy,
but in fields like medicine, law, and finance.
Computers fly our planes. Increasingly, they do our banking.
The computer is king, business, finance, and information.
And increasingly, say experts, computers
will get smarter and smarter.
Companions, advisors, oracles to all of us.
Eventually, researchers began trying
to create computers that were flexible enough to learn
human language on their own.
To take those first faltering steps, as babies do, toward understanding language, to break
down and diagram sentences, to remember new meanings, verb tenses, and to ask questions
to make itself smarter.
The computer wants to know, is the past tense of make, maked? No, says
Martin, made.
These programs could take a long time to figure things out.
It has been a tedious three-year project. The computer now knows about 300 words. Still,
Paul Martin is convinced computers can learn.
But starting in the 1980s and 90s, there were new breakthroughs in natural language processing.
Scientists began relying on statistical methods.
Taking documents or recordings of language and performing statistics on things like,
how often does one word come after another?
Or how often does this sort of triplet of words occur?
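As a rough illustration of the word statistics Mitchell is describing, here is a tiny Python sketch that counts how often one word follows another (bigrams) in a piece of text. The sample sentence and the resulting probabilities are made up for the example, not drawn from any real corpus.

from collections import Counter

text = "the ball fell on the table and the table shattered"
words = text.split()

# Count adjacent word pairs: how often does word B follow word A?
bigrams = Counter(zip(words, words[1:]))

# Estimate P(next word | "the") from the counts.
the_counts = {b: c for (a, b), c in bigrams.items() if a == "the"}
total = sum(the_counts.values())
for nxt, count in the_counts.items():
    print(f"P({nxt!r} | 'the') = {count / total:.2f}")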
And then in the 2000s and 2010s, researchers began using
what are called deep neural networks. They're trained using the huge amounts of data that only
became possible with the rise of the internet. So now we can take every single web page online and
process the language in it and have the neural network analyze the language. And they work so much
better than anything that came before.
And these techniques Mitchell is describing, they've been applied to chatbots specifically.
They might be trained on actual conversations, recorded conversations. Sometimes people put
their chatbot online and then other people interact with them and the systems learn by just interacting with people online. As a result, contemporary
chatbots and their talking cousins like Siri and Alexa have gotten a lot better.
There are still limitations, of course. They still can't really understand the
world in the same way we do, but they have become a lot more reliable,
personable, and convincing. And humans remain as open and sometimes gullible as ever when it comes
to interacting with chatbots. There's even a term that's emerged in computer science, the Eliza
effect, that refers to our tendency to anthropomorphize computers and to believe that programs understand,
even when they really don't. There are recent examples that make it feel as though we are living firmly in the world
that Alan Turing predicted and that Joseph Weisenbaum feared.
Machines fool humans all the time now.
In 2018, for example, Google revealed something called Duplex, a conversational AI
that can make phone reservations. And it is uncanny.
Here's how it sounded. Hi, I'm calling to book a women's haircut for a client. I'm looking for something on May 3rd.
Sure, what time are you looking for? Around 12 p.m.?
They programmed in human-like stuttering and saying um and uh, and made it very convincing.
Do you have anything between 10 a.m. and 12 p.m.
Depending on what service she would like, what service is she looking for?
Just a woman's haircut for now.
They sounded very human.
People were really alarmed.
They wanted to know that they were speaking to a computer.
The first name is Lisa.
Okay, perfect.
So I will see Lisa at 10 o'clock on May 3rd.
This issue of transparency has become central to the ethical design of these kinds of systems,
especially in sensitive realms like therapy.
Because even if Eliza was the first chatbot to be used in the field of mental health,
it wasn't the last.
Transparency is the basis of trust and you must have trust with a service like this.
This is Alison Darcy. My name is Alison Darcy. I'm the CEO and founder of Woebot Labs.
Darcy is a psychologist and she's long been interested in how to use digital tools to expand access to mental health care.
In the United States, a third of the population is living in areas that are designated no-access areas,
so they effectively don't have a mental health professional anywhere near them.
And even if you live in a major city with lots of therapists, it's often not covered by insurance and can be super expensive.
So a few years back,
Darcy and her team began thinking about how to build a digital tool
that would make mental health care radically accessible.
They experimented with video games
before landing on the idea of Woebot,
a chatbot guide who could take users through exercises
based on cognitive behavioral therapy, which
helps people interrupt and reframe negative thought patterns.
Woebot is not even trying to pass the Turing test.
It's very transparently a robot.
It's represented by a little robot avatar.
And part of its personality is that it's curious about human feelings, like a student
learning a foreign language. I think he's somewhere between Kermit the Frog, like circa 1980, and Spock.
Spock who's always trying to, you know, struggle with his human emotions.
As Darcy's team built a prototype and started watching people interact with Woebot, right
away they could tell something interesting was happening.
They noticed that if Woebot didn't check in for a day or two, people would reach out.
People were saying things like, Woebot, where were you? I need you to check in on me every day.
Just like with Eliza, people were forming emotional connections with the program.
They liked interacting with Woebot and relied on it.
People, when we spoke to them, would say, look, I know this is just a chatbot, but like, I know this isn't anything, but at the same time it feels like this fun little
dude checking in on me. And so people were willing to just go with the suspended reality. They were
effectively stepping into this kind of fictional world and it made it playful and fun.
I've spent some time with Woebot and I feel the same way.
It's like texting back and forth with a relentlessly positive and sweet little cartoon
character.
It offers high fives and sends you GIFs.
It feels less like a human therapist leading me to deep insights and more like a tiny personal
cheerleader I can carry around in my pocket.
A tiny personal cheerleader doesn't sound very dystopian.
No, it doesn't.
But I mean, of course,
Woebot raises tough questions about tech more broadly.
If chatbots are built with nefarious intentions,
they can do stuff like detect and manipulate human feelings.
They can fool humans into thinking they're people.
I have anxiety about this kind of tech misapplied, like anything else, like a surgeon's scalpel.
This can be a tool that helps people survive and it can also be a weapon.
And with Woebot in particular, there are other issues too, around privacy and safety
and even thornier stuff.
Like, is this tool a digital bandaid that somehow enables the status quo? Meaning,
the situation we're in now, where the vast majority of people don't have access to high-quality,
comprehensive mental health care. Darcy is clear that she does not see Woebot as a replacement for
talk therapy. And she says her team is always thinking and talking about those ethical questions.
It's something we always keep an eye on, but I think, you know, the potential for good,
just so much outweighs the risk right now. Darcy thought about this a lot when
Woebot first launched. They'd been working in relative obscurity, and then suddenly their numbers
began to climb. Quickly, within the first five days,
they had 50,000 users.
Woebot was exchanging millions of messages
with people each week.
And I remember going home after our first day
and sitting down at my kitchen table
and having the realization that
Woebot, on his first day of launch,
had more conversations with people
than a therapist could have in a lifetime. You know, it just really hit home.
Joseph Weizenbaum eventually retired from MIT, but he continued speaking out against the
dangers of AI until he died in 2008 at the age of 85.
For a while, he became a go-to critic if a TV program needed someone to cast doubt on some new technology.
For example, here he is in a segment about computers and schools, criticizing the tendency to reduce every human problem to a technological one.
It's a very dangerous attitude. It can't help but destroy our culture if it ever becomes victorious. And while he was an important humanist thinker,
a guy way ahead of his time in many ways,
some people felt like he went too far.
Pamela McCorduck knew Weizenbaum over several decades.
She's the one who conducted the interview
you heard with him earlier in the story.
And she says he burned a lot of bridges in the AI community
and became almost a caricature of himself towards the end.
She describes being at a conference in Austria, where Weizenbaum was also in attendance.
At one point he got up and delivered what had become his usual tirade.
We must not use computers. The German army used computers, and look what happened.
Well, I happened to be sitting next to a young German artist,
and he looked at me and he said very quietly,
the German army uses knives and forks.
Let us not use knives and forks.
Well, I had to put my hand over my mouth
to keep from laughing.
And Weizenbaum also may have missed something
that Darcy was just talking about with Woebot.
The idea that humans engage in a kind of play
when we interact with chatbots.
We're not necessarily being fooled,
we're just fascinated to see ourselves reflected back
in these intelligent machines.
That was my experience chatting with Woebot
and was probably true for Eliza too.
Also, you know, I was gonna say about the Eliza
and the, you know, this story about his secretary and stuff. Here's Miriam Weizenbaum again. And I want to be
clear, she had many lovely things to say about her dad in our conversation that he was brilliant
and insightful and funnier than people realized. But she also had this to say about the secretary's story. I always heard that just really kind of soaked in arrogance,
like how could she be so stupid to think that,
you know, that this is actually a meaningful communication,
you know, here you have somebody who so needs
a landing pad for her feelings that she's willing
to embrace it, right?
And he totally misses that.
He totally misses the human need and instead speaks of it as, you know, just evidence of this person's stupidity,
rather than evidence of their humanity.
I think we're all a little like Dr. Weizenbaum's secretary now. We're living in this highly automated world.
We have these little computers in our pockets that we can talk to, and we're under no illusion
that they really understand us, but they still captivate us, and we still rely on them.
To tell us the weather, and how to get to where we want to go,
and maybe even on occasion to ask us if we're doing okay.
Coming up after the break, a recent development in natural language processing that would make
Joseph Weizenbaum very nervous.
So I'm back in the studio with Delaney Hall, and you've got more to tell us about some
interesting stuff that's happened with natural language processing in just the past year.
Yeah, a lot of interesting stuff has happened just recently.
So as I was interviewing people for this story,
I was talking with computer scientists and historians,
and we were talking about Weizenbaum and his legacy,
and these fears he had about chatbots fooling people.
And totally unprompted,
a couple people brought up this one recent development
in natural language processing.
It's something that's called GPT-2.
So GPT-2 stands for generalized pre-trained transformer.
I can never remember what it stands for.
So Melanie Mitchell, one of the computer scientists I spoke with, is close.
It stands for Generative Pre-trained Transformer 2.
And it's this relatively new neural network that generates incredibly convincing text.
Somebody described it as a mouth without a brain, which is a good description.
And the way that GPT-2 works is that you can input a word or a sentence or a paragraph.
The program then generates text based on that input.
It basically conjures up a whole coherent piece of writing based on just a few words that
you give it.
It is this giant neural network that is trained on a huge corpus of text to predict the next
word in a sentence or a paragraph given the preceding text.
Here's Brian Christian, who we also heard from in the story.
You get what appear to be plausible, if very slightly weird screenplays and, you know,
news bulletins and chat message histories and all sorts of things.
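For readers who want to see this kind of generation in action, here is a short, hedged sketch using the publicly released GPT-2 weights through the Hugging Face transformers library. This is just one common way to run the model today, not OpenAI's original setup, and the prompt is an arbitrary example.

# A rough sketch of next-word generation with the released GPT-2 weights,
# via the Hugging Face "transformers" library (assumes: pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The ELIZA effect refers to"
# The model repeatedly predicts a likely next word given everything so far,
# extending the prompt into a longer passage.
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])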
Okay, so I'm having a hard time picturing that.
So what are some examples of the type of text that generates?
Like, why and how is it so convincing?
So the example that kind of broke the internet when all of this first came out
was this fake news story about unicorns and the Andes.
I must have missed that one. What's notable? Yeah, I know. So this
will sort of help you understand a little bit better kind of how it works I
think. So the prompt that GPT-2 was given, and so this was a couple of sentences
that were written by a human and given to the machine, and I guess I should be
clear, this was written in the style of a news story.
That was intentional.
So the prompt was, quote,
in a shocking finding, scientists discovered
a herd of unicorns living in a remote,
previously unexplored valley in the Andes Mountains.
Even more surprising to the researchers
was the fact that the unicorns spoke perfect English.
Okay, so a human wrote those sentences and then fed them to the machine.
And so what did the machine spit back out?
So totally independently, based on those sentences, this is what GPT2 generated.
And it actually goes on for quite a while, but I'll read the first few lines just because
they're amazing.
It goes, the scientists named the population after their distinctive horn, Ovid's unicorn.
These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. Dr. Jorge Perez, an evolutionary biologist from the University of La Paz,
and several other companions were exploring the Andes mountains when they found
a small valley with no humans or animals.
Perez noticed that the valley had what appeared to be a natural fountain
surrounded by two peaks of rock and silver snow.
And so it goes on for a
few more paragraphs. But Roman, a machine, a machine wrote that. A machine. And it
added all the detail about Dr. Perez and the University of La Paz and
everything? That's just from the machine? I know. I know. And it's like, there's
a couple things that really stand out to me when I'm reading it. And the first, of course, is that, you know,
the sentences are coherent.
Like, if you told me a human wrote them,
I would not doubt you.
Yeah, neither would I.
That's incredible.
And then it has those details, like,
the names and the location of the university,
which line up, right, with the prompt.
Right, right.
And then, the other thing it does is that it,
it does this pretty convincing job of mimicking
the format of a news article.
So it starts with like lead sentences, they give you the most crucial information, and
only then does it begin to tell the story of how the discovery unfolded.
Wow.
It's so cool.
I mean, I almost can't even make sense of how real that feels.
I mean, this is pretty silly with unicorns and the Andes, but I mean, I imagine, you know,
this is making people nervous that a computer can generate text this convincing, as
if it came from a human. Yeah, yeah, absolutely. So the company that created the program, called OpenAI,
released this paper and it had various examples
of text generated by this neural net.
And it caused a huge stir online,
partly because it did such a good job,
and partly because it made people immediately nervous.
Like the idea that this text is so convincing
gets back to that idea that we talked about in the episode,
which is like, will people know this was written
by a machine?
And because it generated a very convincing
and totally fake news article,
people are worrying that the tool could supercharge
the spread of misinformation
on the internet.
Right.
I mean, so what do you do?
Like, once you make a thing like that, how do you make sure it doesn't do that?
How do you make sure it doesn't, isn't used for nefarious ends?
So, the way that OpenAI approached it was, when they first released news about
it, they said, we've made this thing.
Here are some examples of the text it
generates. It is extremely convincing. And because of that, we're not going to actually release the
code. It's too dangerous. Whoa. Yeah. The only thing I can see being a problem with that is, like, the
Streisand effect. Like if you say you can't use it and don't recreate it, it just means that more people are going to try to create
and use that sort of thing.
I mean, it just calls more attention to it.
So yeah, it did call a lot of attention to it.
Here is Melanie Mitchell again.
A bunch of people just jumped on them.
They were saying this is just a publicity stunt.
And there were, of course, the popular press doomsday headlines about, OpenAI creates an
AI that's so powerful, it's too dangerous to release.
So basically OpenAI got a lot of pushback.
But the whole situation also inspired this interesting conversation in the AI community
about these really tough questions that they're facing now,
now that they can generate what are known as deep fakes,
not just in text, but in audio and video too.
These are really convincing computer generated pieces
of media.
So here's Melanie.
How should people think about publishing results in AI?
Is it better to publish these things and get them out there so that people can do research on how
to protect or defend against them or should they not be published at all? So there was this big
conversation about that. I mean, I guess I wonder like, you know, this obviously has a problem
with fooling people, but you know, couldn't a human
just write an entire fake news article
or a bunch of them and post online.
I mean, how does GPT2 make things really all that different?
That was something I asked Melanie,
and she said that the issue is really scale.
Like, sure, she could sit down and write a story
about unicorns herself and post it online and tweet
it out and try to make it go viral.
But it would take time, it would take effort.
And instead, GPT2 could generate literally millions of these stories very quickly.
And here you can just do it automatically and generate as many of these as you want.
You could put them online and somebody's going to believe them.
It just makes it easy to create these things and publish them and it's hard to fight back.
And the tough thing is that we're already living in an online information environment where
it can be hard to sort the real from the fake.
You know, here's an account that says they're the Tennessee Republican Party.
Are they?
You know, here's this thing that says it's the Trump fan club of Missoula.
Is it?
You know, like, here's this person who claims they're, you know, a retired Navy SEAL.
Are they?
You know, we're increasingly moving into this world in which it's just really hard to make sense
out of discourse on the web without being extremely scrupulous and vigilant.
I mean, it seems like this generation, from now on, we're going to have to have major media literacy
training.
I mean, just to be able to spot the fakes from the real thing is gonna take a whole new amount of effort
and part of our brain.
Yeah, it is.
And to bring it back to Joseph Weisenbaum,
I mean, when he first started thinking about this stuff,
computers were still really rudimentary.
They had about as much processing power as we have
in like a parking meter today,
which is to say they couldn't do much, but today it's this totally
different story.
How do we survive this coming reality in which you can produce, for all intents and purposes,
Turing-test-passing content that espouses whatever viewpoint you happen to want, like infinitely
at scale. Increasingly, these, you know, ostensibly philosophical questions of the Turing test become,
you know, not merely one of the hassles of everyday life, but one of, I think, the more
pressing questions affecting us at kind of this moment in history.
Well, so it seems like Weizenbaum, you know, may have been ahead of his time, but dead on.
It's interesting because I think he sounded to many people in his time,
alarmist, but increasingly the questions that he was raising seem very, very relevant.
I find myself wishing that we had both Turing and Weizenbaum by our side in the coming decade because I think we need all the
perspective we can get.
99% Invisible was produced this week by our senior editor Delaney Hall, mix and tech production
by Sharif Youssef, music by Sean Real.
Katie Mingle is our senior producer. Kurt Kohlstedt is the digital director.
The rest of the team is Emmett FitzGerald, Joe Rosenberg, Chris Berube, Vivian Le,
Avery Trufelman, Sofia Klatzker, and me, Roman Mars.
Special thanks this week to Katie Mingle, who first came up with this idea to do a story
about Eliza.
Thanks also to Dr. Elena Books, who has studied the ethics of AI in mental health settings.
You can find a link to the paper she co-authored on our website.
Thanks also to the Carnegie Mellon University Archives for letting us use the recording of
Joseph Weizenbaum from the Pamela McCorduck collection.
And finally, thanks to Naomi and Miriam Weizenbaum, who generously shared their memories of their
father.
We are a project of 91.7 KALW in San Francisco, and produced on Radio Row in beautiful downtown
Oakland, California.
99% Invisible is a member of Radiotopia from PRX, a fiercely
independent collective of the most innovative shows in all of podcasting. Support them all at
RadioTopia.fm. You can find the show and join discussions about the show on Facebook. You can
tweet at me at Roman Mars and the show at 99PI org. We're on Instagram and Reddit too.
But our true home on the web is 99PI.org.
So I remember one time when I was probably about 10 asking my dad if I could work on the
computer.
I asked if I could be alone, not that I knew anything at that time about the famous
story of his secretary wanting to be left alone to work with Eliza.
I didn't know anything about that. But I somehow knew that this computer
was going to have powerful insights into people and who I was and something like that. And unbeknownst
to me, he had turned the computer on so that it operated basically as a typewriter. But I didn't know
that. And so I sat down at the typewriter and I typed in, what is my sister Miriam doing right now?
And I waited for a really long time
and the computer didn't answer me.
And then I said, what is Miriam wearing right now?
Because I knew the answer to that.
So I thought well then maybe the computer would too
and nothing happened.
And finally, I waited for a really long time.
And finally, I got up in disgust
and thought, this stupid computer doesn't know shit.
Yeah.
Radiotopia.
From PRX.