Consider This from NPR - Artificial Intelligence Made Big Leaps In 2022 — Should We Be Excited Or Worried?
Episode Date: December 29, 2022
Artificial intelligence is now so much a part of our lives that it seems almost mundane. So is that something to be excited about? Or is the world a scarier place because of it? NPR's Bobby Allyn reports on how some new AI advances showcase both the power and the peril of the technology. And NPR's Ari Shapiro talks to Brian Christian, author of the book "The Alignment Problem: Machine Learning and Human Values," about what we might see in the field of artificial intelligence in the year to come. In participating regions, you'll also hear a local news segment to help you make sense of what's going on in your community. Email us at considerthis@npr.org. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy
Transcript
This message comes from Indiana University. Indiana University is committed to moving the
world forward, working to tackle some of society's biggest challenges. Nine campuses,
one purpose. Creating tomorrow, today. More at iu.edu.
These days, interacting with artificial intelligence is so common that it borders
on the mundane. You might ask Siri to wake you up.
I've set an alarm for 7:35 a.m.
Or have Alexa help with the groceries.
I added milk to your shopping list.
You might even be listening to this podcast because AI software recommended it to you.
Artificial intelligence is seeping more deeply into every corner of our existence.
This piece of art right here, believe it or not, well, it was created using AI, or artificial intelligence, and it is stirring up a conversation about what is considered art.
Consider this.
Artificial intelligence is baked into our daily lives,
but this recipe is still cooking.
We'll look at the leaps in AI over the last year
and what might be coming next.
From NPR, I'm Ari Shapiro. It's Thursday, December 29th.
It's Consider This from NPR. In the last year, something strange has happened. Friends have
morphed into fairy princesses and astronauts.
TV scripts, poetry, and cover letters written by bots
have sounded a whole lot like the creations of real humans.
In short, 2022 has felt like a turning point for AI.
NPR's Bobby Allyn reports on some new tools
that showcase the power and the peril of this technology.
There are two crazes taking the internet by storm right now. The first is an image generator called
Lensa. You upload a bunch of selfies to the app and it spits out a batch of hyper-realistic avatars.
You in space. You as an anime character. All of them have one thing in common. As one person put
it on Twitter, you but 20% hotter. Oren Etzioni runs the Seattle-based Allen Institute for AI.
They've really taken this technology and they tied it with people's ego and their vanity.
And that combination has proven to be almost irresistible.
The second tool causing lots of buzz is called ChatGPT. It's a bot that can hold a conversation
or answer questions a lot like a human. You ask it something and it
starts responding in a way that can freak you out pretty fast. I asked it to place a Chipotle order
in the speaking style of Donald Trump, and it said, quote, All right, folks, let me tell you,
this Chipotle order is going to be huge. The best Chipotle order you've ever seen. Believe me,
we're talking about a big, beautiful burrito bowl. It went on from there. With such a gift for language, it did make me wonder, will ChatGPT one day replace me?
It's pretty funny, right?
Are you going to be out of a job?
Of course, I'm probably going to be out of a job because you don't really need me.
You can just take those questions, feed them into ChatGPT, and it'll give you pretty plausible
answers.
It may seem like AI has all of a sudden gotten really, really good, but Etzioni likes to say
AI's overnight success has been 50 years in the making. Some of the most advanced AI tools are
being developed secretly by tech giants like Google and Facebook. The companies aren't ready
to publicly release them, in part because the ways they can be abused are still being studied.
But startups like the companies behind Lensa and ChatGPT have another approach. Release the tools
publicly, see how they're used, then try to put up guardrails to prevent abuse. Obviously,
sending tools this powerful into the wild will produce all sorts of results. Jen King studies
privacy and AI at Stanford. She's noticed one thing ChatGPT does that's concerning.
You can give it a prompt to explain something in terms that make it sound extremely legitimate,
but the underlying facts are actually incorrect.
For instance, I asked ChatGPT to generate a job cover letter for me, and it made a passable one.
But it also said I worked for a newspaper in a city I used to live in but never actually worked for.
Some AI researchers have a name for this: hallucinating. AI researcher Etzioni says that
though ChatGPT can answer questions in a way that seems persuasive,
nothing it says should be taken as fact. A colleague of mine referred to ChatGPT as a
mouth without a brain. With Lensa, one problem many users are reporting is that the avatars produced tend to overly sexualize women.
Sometimes the app will even create a completely naked cartoon version of you, even if all you gave the app were photos of your face.
King with Stanford says this is because Lensa, like most AI tools, is trained using vast amounts of data from the Internet.
And it's the Internet, so there's lots of pornography.
Some of these companies are really training their models on what I would call the internet's toxic
waste. And so to me, it's no surprise that we see these effects. The company behind Lensa has
responded to people who have complained about their avatars being sexualized. It says it has
tweaked its AI algorithm so that nudity is avoided. And if your avatar does have nudity,
the company says it should now be blurred.
Not everyone is upset with their Lensa avatars.
There already are reports of people
bringing their Lensa portraits to plastic surgeons
for inspiration.
Bobby Allyn, NPR News.
So, was 2022 the year that advancements
in artificial intelligence made the world
a much scarier place, or does it just feel
that way? Put another way, did I write this introduction or did a chatbot? Brian Christian
is author of the bestselling book, The Alignment Problem, and he's here to help us look back and
forward at the impact AI is having on our lives. Good to have you here. Thank you. It's a pleasure.
Well, in addition to everything we just heard about, this was also the year that a piece of
art generated by AI won a prize at the Colorado State Fair. This was the year AI research went mainstream. A lot of these systems
have been kind of brewing within research labs for the last several years. But this is really,
I would say, the grand debut in terms of actually having real world impact and being the kind of
thing that now millions of people are actively using every day. So if this isn't necessarily
new technology,
but newly public technology, what does that mean?
Like, what does it mean that all of this is now
publicly, recognizably, in our faces?
I think a lot of the concerns that people had
about this technology that existed in the academic literature
as hypothetical problems have now become real problems that we need to figure out
and muddle through in real time.
Whether that's concerns about plagiarism
or the use of intellectual property,
whether that is the ability to create misinformation
and toxic speech, all of these things that people
had been worrying about as possible downstream consequences
of this
technology, well, now, you know, the rubber has hit the road and we actually have to deal with it.
The abstract suddenly became real.
Very much.
So when you hear about these kinds of new advances,
where do you emotionally land on the spectrum between excited and terrified?
I wouldn't say that I'm on the spectrum between excited and terrified. I would say I feel
very excited and very terrified.
Both can coexist at the same time.
Indeed.
What's the thing that most excites you about it? And then I'll ask you the thing that most
terrifies you about it.
The thing that most excites me is that it feels to me, and not everyone is going to agree with
this, but it feels to me that the field of AI is finally delivering on the sort of philosophical promise that the field has had going back to the 1940s and 50s of trying to unlock, if you will, the secrets of intelligence, trying to rediscover some of the principles that evolution itself has found for how thinking works.
And so for me, it feels like after, you know, 70 years of trying various things,
we have hit, if you will, the philosophical pay dirt.
And so that's what's really kind of riveting for me,
is what this ends up teaching us about ourselves, about the nature of cognition itself.
And what keeps you up at night?
What keeps me up at night is, I would say, this concept that in the field is called the
alignment problem, which is that...
Conveniently the title of your book.
Yes. Yes, I've spent many years sleepless about this. And the alignment problem is the idea that we build these systems to do some particular
mathematical task that we can define really easily in the code. But that's not often what we actually
want the system to do. So for example, these language models are trained to be essentially
autocomplete on steroids. They're really good at predicting the next word that
you're going to type or a missing word in a document. But when we use them, we expect something
else, which is we expect them to use language in a way that's helpful and honest and polite and free
of bias and truthful. But that's not actually the incentive structure that we gave them. We
just told them to make good predictions.
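To make that concrete, here is a minimal toy sketch, in Python, of what "trained to predict the next word" means. The tiny corpus, the counting approach, and the function names here are illustrative assumptions; systems like ChatGPT use large neural networks rather than word counts, but the training objective is the same flavor of next-word prediction.

# A toy sketch of "autocomplete on steroids": a model whose only
# objective is predicting the next word. Illustrative only -- real
# language models use neural networks, not simple word counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", the most common continuation
# The alignment problem in miniature: the objective rewards plausible
# continuations, not helpful, honest, or truthful ones.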
What's the one big AI innovation that you are holding your breath to see,
whether in a positive or negative way?
I think that we currently have systems that can do amazing things with language.
We have systems that can do amazing things with visual imagery.
There are a lot of exciting advances in robotics.
And I think there is a sense of collective holding of breath
to see what happens when we can actually integrate these things together.
What does it mean to have a system that can both use language
with seemingly human-level fluency,
but also move around the world physically
and understand what it's looking at?
I think that's the road that we're on at the moment.
Is that like a robot that moves through the world?
Is that a virtual reality immersive ecosystem?
What is that?
I think we're probably going to see a little bit of all of the above.
So that might be domestic robots that help fold your laundry or something like that.
I think it's, in the shorter term,
more likely to be assistants that help you on your computer,
but they can actually see what's on your screen.
So they could navigate a website for you
and purchase your airline tickets
or do a bunch of scholarship research for you on the internet,
that sort of thing. I think those systems are going to be more and more a part of just
how we navigate the world digitally and then in the longer run physically too.
Something to look forward to in 2023. That is Brian Christian. He writes about the human
implications of computer science and he's author of the book, The Alignment Problem:
Machine Learning and Human Values.
Oh, by the way, I did write that introduction.
Brian Christian, thanks a lot.
Thank you.
It's Consider This from NPR.
I'm Ari Shapiro.
This message comes from Indiana University.
Indiana University drives discovery, innovation, and creative endeavors
to solve some of society's greatest challenges. Groundbreaking investments in neuroscience,
climate change, Alzheimer's research, and cybersecurity mean IU sets new standards to
move the world forward, unlocking cures and solutions that lead to a better future for all.
More at iu.edu forward.