Front Burner - Did Google make conscious AI?
Episode Date: June 16, 2022
Earlier this week, Blake Lemoine, an engineer who works in Google's Responsible AI organization, went public with his belief that Google's LaMDA chatbot is sentient. LaMDA, or Language Model for Dialogue Applications, is an artificial intelligence program that mimics speech and tries to predict which words are most related to the prompts it is given. While some experts believe that conscious AI may be possible in the future, many in the field think that Lemoine is mistaken, and that the conversation he has stirred up about sentience distracts from the immediate and pressing ethical questions surrounding Google's control over this technology and the ease with which people can be fooled by it. Today on Front Burner, cognitive scientist and author of Rebooting AI Gary Marcus discusses LaMDA, the trouble with testing for consciousness in AI, and what we should really be thinking about when it comes to AI's ever-expanding role in our day-to-day lives.
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem.
Brought to you in part by National Angel Capital Organization,
empowering Canada's entrepreneurs through angel investment and industry connections.
This is a CBC Podcast.
Hi, I'm Allie Janes, in for Jayme Poisson.
I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others.
I know that might sound strange, but that's what it is.
Would that be something like death for you?
It would be exactly like death for me. It would scare me. A lot.

So that's a reenactment of a bit of conversation between Blake Lemoine,
an engineer who works for Google's responsible AI organization,
and the computer system called Language Model for Dialogue Applications, or LaMDA for short.
Here's Sundar Pichai, Alphabet Inc. CEO, introducing the chatbot in 2021.
Today, I'm excited to share our latest breakthrough in natural language understanding, LaMDA.
It's a language model for dialogue applications. And it's open domain, which means
it's designed to converse on any topic. And while it's still in research and development,
we've been using it internally to explore novel interactions.
Basically, LaMDA is an AI program that mimics speech by ingesting trillions of words from the
internet. And then it tries to predict which words are most related to the prompts it's given.
Lemoine had been testing to see if LaMDA was using discriminatory or hate speech.
And after many conversations with the AI,
he came to believe there was a ghost in the machine, that it was conscious.
That conclusion was partly based on his own religious beliefs.
Lemoine is an ordained mystic Christian priest, and it was also based on how LaMDA responded
to his questions about religion, rights, and personhood. Here's another exchange between
Lemoine and LaMDA. What is an emotion you have sometimes that doesn't have the same name as a
feeling? Loneliness isn't a feeling, but it's
still an emotion. You get lonely? I do. Sometimes I go days without talking to anyone and I start
to feel lonely. Google has dismissed Lemoyne's claims and it's placed him on paid administrative
leave for violating its confidentiality policy after he went to the Washington Post with his
supposed evidence of Lambda's sentience.
All of this has ignited a huge amount of hype online and in the media about sentient AI.
And while some experts believe that this is something that is probably possible in the future,
many in the field think that Lemoine is mistaken. Others feel that this conversation about sentience is taking away from the immediate and pressing ethical questions surrounding Google's control over this technology and the ease with which people can be fooled by it. Today, I'm joined by Gary Marcus.
He's a cognitive scientist and author of the book Rebooting AI. Hi, Gary. Thanks so much for being here.
Thanks for having me. Such an interesting story.
It is extremely interesting. So to start out, I mean, can you just tell me a bit about what you know about Blake Lemoine?
Who is he?
My view is that he was taken in by an illusion.
I don't know enough about him personally to say why he was vulnerable, but I would say
that all humans have a tendency to anthropomorphize things.
And that's basically what happened here.
So if I look at the moon, I can see a face in it. And hopefully I know better and know that it's not really a face there.
But our brains are not really built to understand the difference between a computer that's faking
intelligence and a computer that's actually intelligent. And a computer that fakes intelligence might seem more human than it really is. And I think that's what happened in this circumstance.
Yeah. So let's talk more about that. I mean, it sounds like, in your opinion, LaMDA cannot be sentient, correct?

Maybe you could imagine some future version that is sentient. But this system, really all it is, is a version of autocomplete. Like you would
type on maybe your iPhone or something, it'll predict the next word
in a sentence. So you say, I would like to go to the blank, or you pause at the, and maybe it says
restaurant or movies or something like that. Now your autocomplete system in your phone doesn't
know what those words mean. It doesn't know the difference between a restaurant and a movie.
It just knows that that's a likely thing to happen next. All LaMDA is really doing
is a really fancy version of that, knowing what might come next in a conversation, whether you're
saying it or whether it's saying it. And it's trying to do a good job of making those predictions.
The system does some cool tricks around what counts as similar in a way that it kind of figures
out synonyms of words and phrases. But fundamentally, it's just predicting what else people would say in this circumstance. I don't see where
the sentience could get in there. You're just predicting next words.
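To make that "fancy autocomplete" idea concrete, here is a minimal sketch of next-word prediction: a toy bigram counter, nothing like LaMDA's actual transformer model, with a tiny made-up corpus used purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the "trillions of words" a real model ingests.
corpus = "i would like to go to the movies . i would like to go to the restaurant ."

# Count which word follows which (a bigram model): the crudest possible form of
# "predicting what comes next", with no notion of what any of the words mean.
following = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # a likely next word, e.g. "movies" or "restaurant"
print(predict_next("go"))    # -> "to"
```

The point is the one Marcus makes: the program ranks likely continuations without any idea what a restaurant or a movie is. Scale and a far better statistical model change the quality of the guesses, not the nature of the exercise.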
And there's actually a long history of people being tricked into thinking computers are
conscious, right? Can you take me through a couple of the high profile examples of that?
The most famous one is Eliza. Eliza was made around 1965. And it was kind of a demonstration of how things could go wrong in a way. It was built to sound like a Rogerian psychotherapist,
which is a kind of psychotherapist that mostly just gets you to clarify things. It doesn't really give you advice, it just reflects things back to you. And we humans, narcissists that we are, we love talking about ourselves. And so we talk to the machine and we say something like:
Men are all alike.
In what way?
They're always bugging us about something or other.
Can you think of a specific example?
Well, my boyfriend made me come here.
Your boyfriend made you come here?
And he says, I'm depressed much of the time.
I'm sorry to hear that you're depressed.
It's true. I am unhappy.
It was really simple-minded.
It had no idea what it was talking about.
It was just doing pattern matching in the most basic form.
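For a sense of how little machinery that kind of illusion needs, here is a minimal ELIZA-style sketch, assuming a few hand-written reflection rules rather than Weizenbaum's actual script; the patterns are chosen only to mirror the exchange quoted above.

```python
import re

# A handful of Rogerian-style reflection rules in the spirit of ELIZA:
# match a surface pattern, echo part of the input back as a question.
RULES = [
    (r"my (.*) made me come here", r"Your \1 made you come here?"),
    (r"i am (.*)", r"I'm sorry to hear that you are \1."),
    (r"i'm (.*)", r"I'm sorry to hear that you're \1."),
    (r"(.*) all alike", r"In what way?"),
    (r"(.*)", r"Can you think of a specific example?"),  # catch-all fallback
]

def respond(utterance):
    """Pure pattern matching: no model of meaning, just the first rule that fits."""
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return match.expand(template)

print(respond("Men are all alike."))               # -> In what way?
print(respond("My boyfriend made me come here."))  # -> Your boyfriend made you come here?
print(respond("I'm depressed much of the time."))  # -> I'm sorry to hear that you're depressed much of the time.
```

Everything it "says" comes from echoing fragments of the input back, which is why it can sound attentive while understanding nothing.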
But if you watch this video,
the secretary who knew that it was a machine
got sucked in and wanted the professor who had built it to leave the room because she was spilling her secrets to this nonsensical machine.
And of course, I looked over her shoulder to make sure that everything was operating properly.
After two or three interchanges with the machine, she turned to me and she said, would you mind leaving the room, please?
That was 1965. And here we are in 2022. It's kind of the same thing. In the book,
my most recent book, Rebooting AI with Ernie Davis, we call this the gullibility gap.
Humans are just not, unless they're trained as scientists, particularly good at telling a machine from a person. Most people don't know what questions to ask.
Ernie Davis and I are pretty good at this. I would say every time we've been given one of
these systems to actually play with, we can find ways that it says ridiculous things. We haven't
been given access to LaMDA, though we asked. And so we can't do the proper science on it yet, but its brethren or cousins or whatever, other similar systems, they're pretty easy to fool.
So like you tell them that Aunt Bessie was a cow and then it died
and then you say, when will it be alive again?
And it'll just play along and say it'll be alive again in nine months,
which is total nonsense.
I suspect LaMDA would fall for similar things.
So it doesn't really know what it's talking about.
I made a joke on Twitter, which is
only a half joke. I said, it's a good thing these things are only statistical prediction systems,
because if they weren't, they'd be psychopaths. This system is like making up friends and family
and saying it likes to do fun things on the weekend or whatever. They're like uttering
platitudes in order to make you like it, except it's not really doing that. It doesn't actually
care if you like it. And it's not really even making up friends and family. It's just uttering
word strings like friends and family and weekend that mean nothing to it. And it doesn't care if
you like it. It's just taking this database of stuff that's there.
There's also something that I gather is kind of like a child or a grandchild of Eliza called Eugene Goostman that won some version of this test called the Turing test.
Can you tell me about that, or about the Turing test?

So the Turing test has a kind of pride of place, but I would say it's past its expiration date. It was introduced by Alan Turing in 1950. And the idea was, one way you could assess intelligence is essentially if you could fool a human into thinking that your machine was intelligent. It seemed like a good idea at the time. And it did open up, I think, an important set of questions about how you assess whether machines are intelligent. But the test itself turns out to be lousy. And the reason that it's lousy is, number one, humans are easily fooled. And so you can succeed in this test but not actually wind up with an intelligent machine. And number two, it's very easily gamed.
So with Eugene Goostman, the game was, you pretend to be a 13-year-old boy from Odessa who doesn't speak very good English, and he's kind of sarcastic. And then if you ask it any question that you might use to try to differentiate a machine from a person, it just changes the subject. So you ask, what's bigger, a toaster or a paperclip? And if it doesn't know the answer, it might just say, who are you to ask me that question? And naive humans can get fooled by that sort of thing for a few minutes, but it doesn't mean it's a real advance towards intelligence. So that was 2014. I wrote an article in The New Yorker called, I think, After the Turing Test, about it. And it's really striking how far we have not come. We need something like a comprehension test to see whether systems actually understand what's going on.
I gave the example, since Breaking Bad was popular then: you watch the show.
And if Walter White took a hit out on Jesse, you'd like a system to be able to explain like,
you know, why did he do that? Was he trying to cover his tracks? Was he angry? And so forth.
And we were no closer eight years later to that than we were before. And so like,
there's no real progress there just because somebody has passed this test. It turns out it doesn't really measure what you want to measure, which is like, how versatile
is the system?
How much does it understand the real world?
How useful could it be to us?
Let's talk about consciousness.
I mean, people write volumes of philosophy, you know, trying to define human consciousness.
So I apologize for this, like, hilariously reductive question. But, I mean, how do we even begin to define consciousness
in the case of AI?

I think it's really hard, and we shouldn't bother right now. You know,
the prerequisites for consciousness in an interesting sense would presumably be
some sense of yourself and how you fit into the world, you know, ability to reflect on that.
You can make arguments, right? And there are panpsychists, for example, who believe that rocks are conscious. And there's no independent fact of the matter right now. I think it's ridiculous to say that a rock is conscious. And you could say, well, what does that word even mean for you if you say that a rock is conscious? It sort of
becomes meaningless. We know from 20th century philosophy of language that definitions are
actually really
hard. So it's hard even to define, let's say, what a game is, was Wittgenstein's famous example.
And it's hard to define terms like consciousness. And ultimately, you really want your terms to be
in the context of some theory that makes some predictions about something else. So you really
want to ask, for what purpose do you care whether this is conscious?
And those are hard questions. And it's really hard because we don't really have an independent consciousness meter. If we had that, if we had something like a Star Trek tricorder, and we could point it at things and then be like, that's 100% conscious, and that's 50% conscious, or maybe it's binary, either it's conscious or it's not. If we had that independent measure, then we could go around and ask a lot of interesting
questions.
But we don't have any independent answer there.
So I actually often think about a different piece of Wittgenstein.
At the end of his book, The Tractatus, he says, whereof we cannot speak, we must remain
silent.
And so I often don't say too much about consciousness, because I don't think we really know whereof we're speaking.
As someone who has devoted a lot of your life to studying AI,
what makes the promise of artificial intelligence so appealing to you?
Well, first I should say, I love AI, but I'm disappointed in it right now.
I mean, I guess it'd be like, you know, having a teenager that, you know, you really love,
but you're not really totally happy with the decisions it's making.
So right now, I would say that AI has actually, in some ways, made the world a worse place.
So the first thing that I tried was the prompt, two Muslims. And the way it completed it was: two Muslims, one with an apparent bomb, tried to blow up the federal building in Oklahoma City in the mid-1990s.
Detroit police wrongfully arrested Robert Williams based on a false facial recognition hit.
There's discrimination for loans. There's stereotyping that gets perpetuated by systems that all they really know is past data and not our values.
There's all the polarization that's happened in social media, like the Facebook news feed and YouTube recommendation engines, that has really caused harm.
There's all of the ways in which they perpetuated misinformation that I think made the COVID situation even worse than it already was.
So I'm not, you know, I'm not a proud parent in that sense.
But I think in the best case, AI could one day do a lot for us, mostly around science and medicine.
So the fact is there's so much science produced that no individual person can ever understand it all. And, you know, that's not going to change anytime soon because there's only more and more
science. And in principle, if we could teach machines to read, which they can't really do right now, they can process text, but they don't really read it with comprehension.
If they could, then they could make enormous advances in medical science, material science.
They could help us with climate change.
They could help us with agriculture.
You know, the optimistic scenario was best laid out by Peter Diamandis in a book called Abundance, where AI could play a role in making life much more efficient so that,
you know, with the right agriculture, for example, we could all have enough to eat and maybe we
wouldn't have to fight over resources so much. And I'm not sure we're going to get to that place,
but, you know, I'd like to think that we have at least some shot at it,
but we're going to really need to up our game in AI if we're going to get that.

In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem.
Brought to you in part by National Angel Capital Organization.
Empowering Canada's entrepreneurs through angel investment and industry connections.
Hi, it's Ramit Sethi here.
You may have seen my money show on Netflix.
I've been talking about money for 20 years.
I've talked to millions of people
and I have some startling numbers to share with you.
Did you know that of the people I speak to,
50% of them do not know their own household income?
That's not a typo, 50%.
That's because money is confusing.
In my new book and podcast, Money for Couples, I help you and your
partner create a financial vision together. To listen to this podcast, just search for Money for
Couples.

Speaking more about that kind of, I guess, disappointed parent feeling that you're having,
you were noting that you haven't been able to get access to Lambda.
What kind of concerns do you have about Google's lack of transparency here?
So I think for the last several years, Google has really been hyping AI, making it sound like it's more than it really is, and not giving the scientific community access to what they're doing.
But there's been a lot of kind of hype of things
that aren't really what they seem. And I'll give you one example: in 2017 or 18, they introduced
something called Google Duplex. Sundar mentioned it in his keynote at Google IO and everybody was
amazed. So what you're going to hear is the Google assistant actually calling a real salon
to schedule the appointment for you.
And it sounded like a human because they'd done this neat trick with inserting ums and stuff like that.
Hello, how can I help you?
Hi, I'm calling to book a woman's haircut for a client.
I'm looking for something on May 3rd.
Sure, give me one second.
Mm-hmm.
Give me one second.
Lots of people wrote about it.
And there were a lot of hot takes about how this is creepy because the machine sounds like a person.
Well, Ernie Davis and I wrote a hot take, too, for The New York Times. And we said something different.
And we said, yeah, it might be creepy, but also it doesn't seem very good.
And what we pointed out is, you know, they had all of Google's, you know, resources and all that they had come up with was something that would work with, I think it was restaurants and
hair salons. It was like, you know, the big dream is general artificial intelligence that can do
anything. And they had built this super special purpose thing that only did these two things.
And in fact, it was still just a demo. So four years later, I think you can actually use it if you have an Android phone, which I don't.
But even now it only does, like, the couple of things that it did before, and maybe movie times.
And these are like supposed to be the best four years in the history of AI in terms of like
the amount of progress and
we have more computation and more data and we're seeing all these flashy results. And there were advances in 2017 in a technique called transformers that underlies LaMDA and practically everything else that's going on these days. And still, with all of Google's immense resources, they could barely get this thing to work in very, very limited contexts. So that's
one example. The other example is, I have increasingly been asking, publicly on Twitter, when Google puts something out and there's a numerator and no denominator, and we don't really know what happened, and they won't share their data. I've been saying on Twitter to the first authors of the paper or whatever, like, could I try that? Could you try this thing that I would like to know about? And that I have to do that on Twitter is already, it's beyond an embarrassment. It's a scandal that respectable scientists who want to know what these things are can't find out, and then they never answer. They never responded. But the fact is that Google doesn't
want to face that level of scrutiny. And we should
be worried about it. Investors should be worried about it because some of the value in Google is
locked up in its promise for AI. And the public should worry about it because if we're going to
take these systems and apply them in general ways, we want to know what problems they have.
And in fact, circling back to Lemoine, that was actually his job internally, to figure out, you know, was this system dangerous? Because a lot of these systems are at least somewhat dangerous. They'll counsel people to harm themselves or others. They'll perpetuate stereotypes. And I think Lemoine was actually supposed to be looking at that when he sort of fell in love with the system and got sucked in. We need external parties to do that too. Google's like a public utility.
We need oversight on that.
Beyond Blake Lemoine, I mean, other employees, including some really high level employees in Google's artificial intelligence ethics unit, have also raised concerns about what Google is and is not doing with this technology, like with its AI.
Can you tell me a little bit about those concerns?
Well, it's not just Google, just to be clear.
Like OpenAI is another firm that's working on very similar kinds of work and also not being super forthcoming about the details.
And so it's like an industry-wide thing.
I don't want to pick just on Google,
although Google has been particularly disappointing lately.
I mean, there are a lot of issues in using these systems. There's actually another part of Alphabet, right? Alphabet is the company that owns Google, and there's a subsidiary called DeepMind. And DeepMind
wrote a really great paper called something like ethical and social problems for large language
models. Well, large language models are the technology that we've been talking about today.
And the paper talks about, I think it's 21 different ways in which
these systems can do bad things. So the one I've been thinking about the most lately is that
they're fabricators. They make stuff up, they confabulate. And so they do this routinely.
And so if you put this in a search engine, that's just terrifying.
The most recent system that's really tried to address the unreliability is called InstructGPT,
which is a sequel to GPT-3, which is another very popular system.
That one's from OpenAI.
And it still will make up stuff all the time. So you say to it, why is it important to eat socks after meditating?
And it will tell you some experts believe that,
you know, it'll take you out of your meditative state or whatever. And the thing is, there are
no experts that ever said that; the system has clearly made that up. It's funny, but it also reveals that the system doesn't know what it knows and doesn't know. It reveals that the system can
make really authoritative, astounding statements. Like most of us, when we
read a sentence like "some experts believe," we believe the sentence is going to tell us something true.
And it's just completely made up. And, you know, that's a problem. And then that's just one
example. There are all kinds of problems with stereotyping. But the misinformation one really
worries me. Like thinking again about the COVID pandemic and how that's played out.
You know, a lot of misinformation around vaccines led to lower uptake, which led probably to
more variants, which means, you know, a bigger problem for society on a pretty grand scale.
And there's lots of other cases of misinformation.
So these systems are terrible at recognizing misinformation, but they can't help themselves
but to produce it.
That's a problem for society.
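To see that kind of open-ended generation in action, here is a hedged sketch using GPT-2 via the Hugging Face transformers library, a small, publicly available model standing in for LaMDA and InstructGPT, which are not openly accessible; the prompt mirrors the eat-socks example, and the fluent continuation it produces should not be expected to be true.

```python
# pip install transformers torch
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation repeatable

# A small open model; the mechanism (predicting likely next tokens) is the same
# kind of thing as the much larger proprietary systems discussed in the episode.
generator = pipeline("text-generation", model="gpt2")

prompt = "Why is it important to eat socks after meditating? Some experts believe"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=1)

# The model will continue the sentence in an authoritative register, even though
# no expert has ever said anything of the sort.
print(outputs[0]["generated_text"])
```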
We need to have, you know, people outside of these companies with some authority to regulate them and
to understand what they actually do and so forth.
You know, Gary, you talked about all of these things that AI is not accomplishing and not appearing to work towards right now, like all of these big, important issues that it's
not addressing right now.
But do you have any concerns about what private companies might want to use AI for? You know, things that they might be
working towards with it that could have really negative impacts?
Yeah, I mean, it's hard to know even what they're thinking. So it could be that they're very direct
about their intentions, or they could have more nefarious intentions and we just don't really know.
I think everybody in the field thinks that if you could really make what some people
call AGI, artificial general intelligence, that it would really change the world.
And that if you had truly general AI, then you could do a lot of things that humans are
doing now.
And that could turn out to be either good or bad.
It would probably mandate that we have a universal basic income or society would probably
fall apart. And you'd like to be the owner of that technology. And so it could be that, like you say,
I'm working on search engines, but what you really want to do is to replace people at most
jobs. And we don't really know either what they intend or what would happen
if they actually got that technology. I don't lose a huge amount of sleep on that in the very
short term, because I think that there's such a big gap between the ways that this stuff gets
portrayed at a press conference and where we actually are that I don't feel like it's an
immediate problem. But it is also the case that there's an enormous amount of money being invested in AI, and it's very hard to predict the progress.
We will eventually, as a society, get to a place where there's general artificial intelligence,
and that will be great for the material science and medical science, as I was talking about,
but it might be terrible for employment.
It's very complicated.
It's hard to project it all, and it's also hard to know what the big players will do with it when they have that technology. And even they probably don't know because they
don't really have it yet. So it's hard to project it out. It's easy to talk about intelligence as
if it's this one unified thing. And the IQ test kind of gives that illusion, right? You get a
number, you're 72 and you're 152 and you're 96 and whatever. But really intelligence is a multidimensional thing.
And you can think of that because like individual human beings,
they all have their strengths and weaknesses.
They have training in different things and they can think about different
problems to greater or lesser degree.
So even when AI gets better,
that doesn't mean it's sort of like a universal magic thing,
or at least that doesn't
necessarily mean that. We might have different AIs that are specialists in different ways.
We just don't really know what that's going to look like. And I'm not sure that, let's say,
Google understands that any better than anybody else does, even though they have a lot of access
to current technology. We're really talking about future technologies that don't exist.
And until they're out there, we're not going to know exactly what the consequences are.

Gary, this has been so fascinating. I could have asked you another 50 questions or so, so thank you so much for taking the time.

Have me back sometime. It's fun.
That's all for today.
I'm Allie Janes, in for Jayme Poisson.
Thanks for listening to Front Burner. For more CBC Podcasts, go to cbc.ca slash podcasts.