NBC Nightly News with Tom Llamas - Nightly News Films: AI Revolution
Episode Date: March 23, 2023
For many of us, the term artificial intelligence conjures up images of science fiction movies. But what is it really? As AI technology becomes a bigger part of our world, Lester Holt sits down with Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, to talk about how it works.
Transcript
Recent advances in artificial intelligence, now available to the masses, have both fascinated and enthralled many Americans.
But amid all the wows we're hearing over AI, some are simply saying wait or slow down.
That includes a pair of former Silicon Valley insiders who are now warning tech companies there may be no returning the AI genie to the bottle. I sat down with Aza Raskin and Tristan Harris
from the Center for Humane Technology for our series, AI Revolution.
Here's part of our conversation.
When I think of AI, most of my knowledge is from sci-fi movies
of these robots that begin to think and grow and have thoughts like humans.
What are we talking about here?
So there's a new technology called large language models.
They were really invented in 2017.
What these large language models do is they learn to predict the next word of text on
the entirety of the internet.
You're like, well, how bad could that be? But what's
surprising and what nobody foresaw is that just by learning to predict the next piece of text on
the internet, these models are developing new capabilities that no one expected. So just by
learning to predict the next character on the internet, it's learned how to play chess. It's
learned how to do research-grade chemistry. It's learned how to do theory of mind. What is
theory of mind? That means the ability to model what somebody else is thinking.
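The "learn to predict the next word" idea the guests describe can be made concrete with a toy sketch. The code below is a hypothetical illustration, not how GPT-style models actually work internally (those use neural networks trained on vast token datasets); it only shows the core idea of learning, from raw text alone, which word tends to follow which.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the entirety of the internet".
corpus = "the cat sat on the mat and the cat ran".split()

# Count, for each word, how often each next word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

A model like this only mimics patterns in the text it has seen; it has no understanding of what the words mean, which is exactly the distinction Raskin draws with his "pretending to speak a language" analogy.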
You say learning. Is it learning on its own or is someone teaching it? Because there's a difference.
It is learning on its own. So imagine you had a superpower and that superpower was you could
walk up to anyone speaking a language you don't understand. And just by listening to the sounds,
like this sound comes after that sound, you could learn to like mimic, to pretend to speak their
language. And you could pretend to understand and speak. And suddenly, even though you don't
understand what you're saying, they're responding to you as if you understood what they were saying.
And that's what these models are learning how to do.
Does AI think for itself?
No, AI does not think for itself.
It has the ability to mimic other people thinking.
It has the ability to mimic human emotions, to mimic human empathy.
But it doesn't know what it's doing.
It just knows how to do it. This is moving very fast. I don't think a lot of us have really thought much about AI until,
you know, perhaps late last year. What is happening in the development?
The models are getting bigger and they're getting more capable at a speed which even the engineers
who are working on it, honestly, are in disbelief. So only four
years ago, AIs were sort of barely able to do autocomplete, spit out the next word, create a
sentence that even made sense. As of November of last year, like with the launch of ChatGPT,
it could write essays sort of at the level of a high schooler.
It could play chess, but sort of poorly.
As of just one week ago now, these AIs are able to pass the bar exam, get top 90th percentile
test scores on the SATs, pass medical exams. How are we using artificial intelligence right now?
Well, I think people have heard about using AI for radiology to find cancers more effectively,
or using AI to automate the license plate numbers when you drive through a toll booth.
People are familiar with using AI to automate tasks in vision, in speech. Hey Siri, here's this
transcription and it tries to get it mostly right. But that was a different class of AI.
The new class of AI are just taking in the entire internet and every image and every photo and every
video that's ever been produced and sucking it into one large language model with an understanding
of the world. That's new, right? When you drive
through a toll booth on a bridge in New York, it scans your license plate. That doesn't have
a simultaneous world knowledge of everything that's ever been written and who Lester Holt is.
The new AI is taking in a world understanding and building cumulatively to get smarter and smarter,
and then having the ability to take that world knowledge to synthesize an answer to anyone's question. That is a different class of AI. Is it still under the control of human beings?
Do human beings ultimately control what it can and can't do? Well, what's very surprising about
these new technologies is that they have emergent capabilities that nobody asked for. So it's not
as if you took electricity and increased the voltage
and it gains the ability to answer questions in Persian, even though you only taught it how to
answer questions in English. But that's what this new AI does. That's a real example. They taught it to
answer questions in English. Separately, the AI had trained on some other things in Persian.
It didn't know how to answer questions in Persian, but then they pumped it with more information and
more data. And suddenly it spits out this new emergent capacity that the engineers didn't
predict, which is it can answer questions in Persian. That wasn't true of electricity. That
wasn't true of automobiles. That wasn't true of airplanes. This is a different class of technology.
When you pump more information through it, it gains more capacities that the engineers who built it
couldn't have predicted. Is disinformation right up AI's alley?
Yeah, absolutely.
In fact, one of the biggest problems with AI right now is that it hallucinates, that
it speaks very confidently about any topic, and it's not clear when it is getting it right
and when it is getting it wrong.
We have seen throughout history that technological advances could sometimes eliminate jobs, make jobs obsolete.
Is it a given that AI will take us to a place where people will lose their jobs?
Societies can absorb new technologies and move from being an agrarian or farming-based society to an industrial society.
People move to factories. We've seen those transformations happen.
Maybe that took place over, you know, a couple decades. What happens when suddenly you don't just go after
one class of jobs like farming, but you suddenly go after, you know, 50% of the jobs in your
society, if not way more than that, and you instantly make them automated. And if that
happens all at once in a year or a few days, that is a level of change that our society doesn't know how to
reckon with.
If that change comes too fast, then society gets destabilized.
So we're, again, in this moment where we need to consciously adapt our institutions and
our jobs for a post-AI world.
What do we want AI to do?
What's our expectation?
Well, I think, you know,
obviously what we want is AI that enriches our lives, AI that works for people, that works for human benefit, that is helping us cure cancer, that is helping us find climate solutions. We can
do that. We can have AI in research labs that's applied to specific applications that does advance
those areas. But when we're in an arms race to deploy AI to every
human being on the planet as fast as possible with as little testing as possible, that's not
an equation that's going to end well.