The Weekly Show with Jon Stewart - AI: What Could Go Wrong? with Geoffrey Hinton
Episode Date: October 9, 2025
As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the "Godfather of AI," to understand what we've actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton's concerns about where AI is headed.
This podcast episode is brought to you by:
MINT MOBILE - Make the switch at https://mintmobile.com/TWS
GROUND NEWS - Go to https://groundnews.com/stewart to see how any news story is being framed by news outlets around the world and across the political spectrum. Use this link to get 40% off unlimited access with the Vantage Subscription.
INDEED - Speed up your hiring with Indeed. Go to https://indeed.com/weekly to get a $75 sponsored job credit.
Follow The Weekly Show with Jon Stewart on social media for more:
> YouTube: https://www.youtube.com/@weeklyshowpodcast
> Instagram: https://www.instagram.com/weeklyshowpodcast
> TikTok: https://tiktok.com/@weeklyshowpodcast
> X: https://x.com/weeklyshowpod
> BlueSky: https://bsky.app/profile/theweeklyshowpodcast.com
Host/Executive Producer – Jon Stewart
Executive Producer – James Dixon
Executive Producer – Chris McShane
Executive Producer – Caity Gray
Lead Producer – Lauren Walker
Producer – Brittany Mehmedovic
Producer – Gillian Spear
Video Editor & Engineer – Rob Vitolo
Audio Editor & Engineer – Nicole Boyce
Music by Hansdle Hsu
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Every day, the loudest, the most inflammatory takes, dominate our attention.
And the bigger picture gets lost.
It's all just noise and no light.
Ground news puts all sides of the story in one place so you can see the context.
They provide the light.
It starts conversations beyond the noise.
They aggregate and organize information just to help readers make their own decisions.
Ground News provides users reports that easily compare headlines or reports that give a summarized
breakdown of the specific differences in reporting across the political spectrum.
It's a great resource.
Go to groundnews.com slash Stewart and subscribe for 40% off the unlimited access Vantage
subscription, which brings the price down to about $5 a month.
It's groundnews.com slash Stewart or scan the QR code on the screen.
Our public schools are not Sunday schools.
Across the country, lawmakers are turning public schools into battlegrounds for religious indoctrination.
Ten Commandments posters in classrooms, school chaplains replacing trained counselors,
and taxpayer-funded vouchers siphoning billions from public schools into private religious academies,
many that discriminate based on religion, gender identity, or sexual orientation.
This isn't religious freedom, it's religious takeover.
The Freedom from Religion Foundation is sounding the alarm,
taking action. We're challenging these attacks in court, exposing the lawmakers behind them and
protecting students' rights to learn free from religious coercion. Learn what's happening in your state
and how to push back at ffrf.us slash school, or text CHURCH to 511511. Text CHURCH to 511511 or go to
ffrf.us slash school, because our public schools are for education, not evangelism. Text
CHURCH to 511511 to learn more. Text fees may apply.
Hey, everybody. Welcome to the weekly show podcast. My name is Jon Stewart. I'm going to be hosting you today. It's a, what is there? Wednesday, October 8th. I don't know what's going to happen later on in the day. But we're going to be out tomorrow. But today's episode, I just want to say very quickly, today's episode, we are talking to someone known as the Godfather of AI, a gentleman by the name of Geoffrey Hinton, who
has been developing the type of technology that has turned into AI since the 70s.
And I want to let you know, so we talk about it.
The first part of it, though, he gives us this breakdown of kind of what it actually is,
which for me was unbelievably helpful.
We get into the, it will kill us all part, but it was important for my understanding to sort of set
the scene. So I hope you find that part as interesting as I did because, man, it expanded my
understanding of what this technology is, of how it's going to be utilized, of what some of those
dangers might be in a really interesting way. So I don't, I will not hold it up any longer.
Let us get to our guest for this podcast.
Ladies and gentlemen, we are absolutely thrilled today.
to be able to welcome Professor Emeritus
with the Department of Computer Science
at the University of Toronto
and Schwartz Reisman Institute advisory board member,
Geoffrey Hinton is joining us.
Sir, thank you so much for being with us today.
Well, thank you so much for inviting me.
I'm delighted.
You are known as, and I'm sure you will be very demure about this,
the godfather of artificial intelligence
for your work on sort of these neural networks.
You co-won the actual Nobel Prize in physics in 2024 for this work.
Is that correct?
That is correct.
It's slightly embarrassing since I don't do physics.
So when they called me up and said, you won the Nobel Prize in physics, I didn't believe them to begin with.
And were the other physicists going, wait a second.
That guy's not even in our business.
I strongly suspect they were, but they didn't do it to me.
Oh, good. I'm glad.
This is going to seem somewhat remedial, I'm sure, to you, but when we talk about artificial intelligence,
I'm not exactly sure what it is that we're talking about. I know there are these things, the
large language models. I know, to my experience, artificial intelligence is just a slightly
more flattering search engine. Whereas I used to Google something and it would just
give me the answer. Now it says, what an interesting question you've asked me. So what are we
talking about when we talk about artificial intelligence? So when you used to Google, it would
use keywords, and it would have done a lot of work in advance. So if you gave it a few keywords,
it could find all the documents that had those words in. Okay. So basically, it's just a, it's sorting.
It's looking through and it's sorting and finding words and then bringing you a result.
Yeah. That's how it used to work.
Okay. But it didn't understand what the question was.
So it couldn't, for example, give you documents that didn't actually contain those words,
but were about the same subject.
Now...
It didn't make that connection. Oh, right, because it would say, here is your result
minus, and then it would say like a word that was not included.
Right. But if you...
But if you had a document with none of the words you used, it wouldn't find that, even though
it might be a very relevant document about exactly the subject you were talking about, it had just
used different words.
Now it understands what you say, and it understands in pretty much the same way people do.
What?
So it'll say, oh, I know what you mean, let me educate you on this.
So it's gone from being
literally just a search and find thing to an actual, almost an expert in whatever it is that
you're discussing, and it can bring you things that you might not have thought about.
Yes.
So the large language models are not very good experts at everything.
So if you take some friend you have who knows a lot about some subject matter.
No, I got a couple of those.
Yeah, they probably know a bit, they're probably a bit better than the large language model.
but they'll nevertheless be impressed that the large language model knows their subject pretty well.
So what is the difference between sort of machine learning?
So was Google, in terms of a search engine, machine learning?
That's just algorithms and predictions.
Not exactly.
Machine learning is a kind of cover-all term for any system on a computer that learns.
Okay.
Now, these neural networks, they're a particular way of doing learning that's very different from what was used before.
Okay. Now, these are the new neural networks, the old machine learning, those were not
considered neural networks. And when you say neural networks, meaning your work was sort of,
the genesis of it was in the 70s, where you thought you were studying the brain, is that
correct?
I was trying to come up with ideas about how the brain actually learned.
And there's some things we know about that.
It learns by changing the strengths of connections between brain cells.
Wait, so explain that.
You say it learns by changing the connections.
So if you show a human something new, brain cells will actually make new connections within brain cells.
It won't make new connections.
There will be connections that were there already.
But the main way it operates is it changes the strength of those connections.
So if you think of it from the point of view of a neuron in the middle of the brain,
a brain cell.
Okay.
All it can do in life is sometimes go ping.
That's all he's got.
That's his only...
That's all it's got.
All it's got is it can, unless it happens to be connected to a muscle.
It can sometimes go ping.
And it has to decide when to go ping.
Oh, wow.
How does it decide when to go ping?
I'm glad you asked that question.
There's other neurons going ping.
Okay.
And when it sees particular patterns of other neurons going ping, it goes ping.
And you can think of this neuron as receiving pings from other neurons.
And each time it receives a ping, it treats that as a number of votes for whether it should turn on or should go ping or should not go ping.
and you can change how many votes another neuron has for it.
How would you change that vote?
By changing the strength of the connection.
The strength of the connection, think of as the number of votes this other neuron gives
for you to go ping.
Okay.
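A minimal sketch, in Python, of the "votes" idea being described: a neuron adds up the pings it receives, each weighted by its connection strength, and goes ping itself if the total crosses a threshold. The numbers and the threshold here are made up purely for illustration.

```python
# A toy neuron that decides whether to "go ping" based on weighted votes
# from other neurons. Weights and threshold are illustrative, not from the episode.

def goes_ping(incoming_pings, connection_strengths, threshold=1.0):
    """incoming_pings: 1 if the other neuron pinged, 0 if it didn't.
    connection_strengths: how many 'votes' each of those neurons gets."""
    total_votes = sum(p * w for p, w in zip(incoming_pings, connection_strengths))
    return total_votes > threshold

# Three neurons ping; the first two have strong positive connections,
# the third has a negative (don't-ping) connection.
print(goes_ping([1, 1, 1], [0.8, 0.7, -0.4]))  # True: the positive coalition wins
print(goes_ping([0, 0, 1], [0.8, 0.7, -0.4]))  # False: only the inhibitory vote arrives
```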
So it really is, in some respects, it's a, boy, it reminds me of the movie Minions, but it's almost a social thing, if I'm thinking about this right.
Yes, yes, it's very like a political coalition.
There'll be groups of neurons that go ping together, and the neurons in that group will all be telling each other go ping.
And then there might be a different coalition, and they'll be telling other neurons don't go ping.
Oh, my God.
And then there might be a different coalition.
Right.
And they're all telling each other to go ping and telling the first coalition not to go ping.
And so when the second coalition...
All this is going on in your brain in the way of, like, I would like to pick up a spoon.
Yes.
So spoon, for example, spoon in your brain is a coalition of neurons going ping together,
and that's a concept.
Oh, wow.
So as you're teaching, when you're a baby and they go spoon, there's a little group of neurons
going, oh, that's a spoon, and they're strengthening their connections with each other.
So whatever, is that why when you know, you're imaging brains, you see certain
areas light up? And is that lighting up of those areas the neurons that ping for certain items
or actions? Not exactly. I'm getting close. I'm getting close. It's close. Different areas
will light up when you're doing different things, like when you're doing vision or talking
or controlling your hands. Different areas light up for that. But the coalition of neurons that go ping together,
when there's a spoon. They don't only work for spoon. Most of the members of that coalition
will go ping when there's a fork. So they overlap a lot of these coalitions.
This is a big tent. It's a big tent coalition. I love thinking about this as political.
I had no idea. Your brain operates on peer pressure. There's a lot of that goes on, yes.
And concepts are kind of coalitions that are happy together.
But they overlap a lot.
Like the concept for dog and the concept for cat have a lot in common.
They'll have a lot of shared neurons.
Right.
In particular, the neurons that represent things like this is animate or this is hairy
or this might be a domestic pet, all those neurons will be in common to cat and dog.
Are there, can I ask you that, and again, I so appreciate your patience with this and explain.
This is really helpful for me.
are there certain neurons that ping broadly, right, for the broad concept of animal,
and then other neurons, like, does it work from macro to micro, from general to specific?
So you have a coalition of neurons that ping generally, and then as you get more specific
with the knowledge, does that engage certain ones that will ping less frequently,
but for maybe more specificity?
Is that something?
Okay, that's a very good theory.
No, nobody really knows for sure about this.
That's a very sensible theory.
And in particular, there's going to be some neurons in that coalition that ping more often
for more general things.
and then there may be neurons that ping less often
for much more specific things.
Right, okay.
And they all, and this works throughout,
and like you say,
there's certain areas that will ping for vision
or other senses, like touch.
I imagine there's a ping system for language.
And you were saying,
what if we could get computers,
which were much more, I would think,
just binary, if-then, you know, sort of basic.
You're saying, could we get them to work as these coalitions?
Yeah, I don't think binary, if-then, has much to do with it.
The difference is people were trying to put rules into computers.
They were trying to figure out, so the basic way you program a computer
is you figure out in exquisite detail how you would solve the problem.
You deconstruct all the steps.
And then you tell the computer exactly what to do.
That's a normal computer program.
Okay, great.
These things aren't like that at all.
So you were trying to change that process to see if we could create a process that was, that functioned more like how the human brain would.
Rather than an item by item instruction list, you wanted it to think more globally.
How did that occur?
So it was sort of obvious to a lot of people
that the brain doesn't work
by someone else giving you rules
and you just execute those rules.
Right.
I mean, in North Korea,
they would love brains to work like that,
but they don't.
You're saying that in an authoritarian world,
that is how brains would operate.
Well, that's how they would like them to operate.
That's how they would like them to operate.
It's a little more artsy than that.
Yes.
But we do write programs for neural nets,
but the programs are just to tell the neural net
how to adjust the strength of the connection
on the basis of the activities of the neurons.
So that's a fairly simple program
that doesn't have all sorts of knowledge
about the world in it.
It's just what are the rules
for changing neural connection strengths
on the basis of the activities?
Can you give me an example? So would that be considered sort of, is that machine learning or is that deep learning?
That's deep learning. If you have a network with multiple layers, it's called deep learning because there's many layers.
So what are you saying to a computer when you are trying to get it to do deep learning?
Like what would be an example of an instruction that you would give?
Okay. So let me go.
Ah, now we're, all right.
Am I yet, am I in neural learning 201 yet, or am I still in 101?
You're like the smart student in the front row who doesn't know anything, but asks these good questions.
That's the nicest way I've ever been described.
Thank you.
If you're still overpaying for your wireless, I want you to leave this country.
I want you gone.
There's no excuse.
Mint Mobile. Her favorite word is no. It's time to say yes to saying no. No contracts, no monthly bills, no overages, no BS. Here's why so many said yes to making the switch and getting premium wireless for $15 a month. My God, I spend that on Chiclets. Chiclets, I say. Ditch overpriced wireless and their jaw-dropping monthly bills, unexpected overages and fees. Plans start at $15 a month.
At Mint, all plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.
Use your own phone with any Mint Mobile plan and bring your phone number, along with all your existing contacts.
Ready to say yes to saying no?
Make the switch at mintmobile.com slash TWS.
That's mintmobile.com slash TWS.
Upfront payment of $45 required, equivalent to $15 a month.
Limited time, new customer offer for first three months only.
Speeds may slow above 35 gigabytes on unlimited plan.
Taxes and fees extra.
See Mint Mobile for details.
So let's go back to 1949.
Oh, boy.
All right.
So here's a theory from someone called Donald Hebb about how you change connection strengths.
If neuron A goes ping and then shortly afterwards neuron B goes ping, increase the strength of the connection.
That's a very simple rule.
That's called the Hebb rule.
The Hebb rule is, if neuron A goes ping and then B goes ping, you increase that connection.
Yes.
Okay.
Now, as soon as computers came along, you can do computer simulations, people discovered
that rule by itself doesn't work.
What happens is all the connections get very strong and all the neurons go ping all at the same
time and you have a seizure.
Oh, okay.
That's a shame, isn't it?
That is a shame.
There's got to be something that makes connections weaker as well as making them stronger.
Right.
There's got to be some discernment.
Yes.
Okay.
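A minimal sketch of the Hebb rule just described, with made-up numbers: whenever two connected neurons fire together, the connection gets stronger. Because there is deliberately no rule for weakening anything, the weight only ever grows, which is the runaway "everything pings at once" problem being discussed.

```python
# Toy Hebbian update: strengthen a connection whenever neuron A pings
# and then neuron B pings. There is no rule for weakening, which is the flaw.

learning_rate = 0.1
weight_a_to_b = 0.2  # starting connection strength (made-up number)

for step in range(50):
    a_pinged = True   # pretend A fires on every step
    b_pinged = True   # and B fires right after it
    if a_pinged and b_pinged:
        weight_a_to_b += learning_rate  # Hebb rule: fire together, wire together

print(weight_a_to_b)  # 5.2 -- the weight only ever grows, it never comes back down
```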
So, if I can digress for about a minute...
Boy, I'd like that.
Okay.
Suppose we wanted to make a neural network that has multiple layers of neurons, and it's to decide whether an image contains a bird or not.
Like a CAPTCHA, like when you go on and it says, you look at it.
Exactly. We want to solve that CAPTCHA with a neural net.
Okay.
So the input to the neural net, the sort of bottom layer of neurons, is a bunch of neurons
and they go ping to different levels of, they have different strengths of ping,
and they represent the intensities of the pixels in the image.
Okay.
So if it's a thousand by thousand image, you've got a million neurons.
that are going ping at different rates to represent how intense each pixel is.
Okay.
That's your input.
Now you've got to turn that into a decision.
Is this a bird or not?
Wow.
So that decision.
So let me ask you a question then.
Do you program in?
Because strength of pixel doesn't strike me as a really useful tool in terms of figuring out
if it's a bird.
Figuring out if it's a bird seems like the tool would be, are those feathers, is that a beak, is that a crest?
Here goes.
So the pixels by themselves don't really tell you whether it's a bird.
Because you can have birds that are bright and birds that are dark and you can have birds flying and birds sitting down and you can have an ostrich in your face and you have a seagull in the distance.
They're all birds.
Okay, so what do you do next?
Well, sort of guided by the brain, what people did next was to say, let's have a bunch of edge detectors.
So what we're going to do, because of course you can recognize birds quite well in line drawings.
Right.
So what we're going to do is we're going to make some neurons, a whole bunch of them, that detect little pieces of edge.
That is little places in the image where it's bright on one side and darker on the other side.
Right.
almost creating a, like, primitive form of vision?
This is how you make a vision system, yes.
This is how it's done in the brain and how it's done in computers now.
Wow. Okay.
So, if you want to detect a little piece of vertical edge in a particular place in the image,
let's suppose you look at a little column of three pixels and next to them another column of three pixels.
And if the ones on the left are bright and the ones on the right are dark, you want to say, yes, there's an edge here.
So you have to ask, how would I make a neuron that did that?
Oh, my God. Okay.
All right, I'm going to jump ahead.
All right.
So the first thing you do is you have to teach the network what vision is.
So you're teaching it.
These are images.
This is background.
This is form.
This is edge.
This is not.
This is bright.
So you're teaching it almost how to see.
In the old days, people would try and put in lots of rules to teach it how to see
and explain to it what foreground was and what background was. Okay. But the people who really
believed in neural nets said, no, no, don't put in all those rules. Let it learn all those rules
just from data. And the way it learns is by strengthening the pings once it starts to recognize
edges and things.
We'll come to that in a minute.
I'm jumping ahead.
You're jumping ahead.
All right, all right.
So let's carry on with this little bit of edge detector.
Okay.
So in the first layer, you have the neurons that represent how bright the pixels are.
And then in the next layer, we're going to have little bits of edge detector.
And so you might have a neuron in the next layer that's connected to a column of three pixels
on the left and a column of three pixels on the right.
And now, if you make the strengths of the connections to the three pixels on the left
strong, big positive connections.
Right, because it's brighter.
And you make the strengths of connections to the three pixels on the right be big negative
connections because they don't turn on.
Right.
Then when the pixels on the left and the pixels on the right are the same brightness
as each other, the negative connections would cancel out the positive connections
and nothing will happen.
But if the pixels on the left are bright, and the pixels on the right are dark,
then you'll get lots of input from the pixels on the left because they're big positive
connections.
It won't get any inhibition from the pixels on the right because those pixels are all
turned off.
Right, right.
And so it'll go ping.
It'll say, hey, I found what I wanted.
I found that the three pixels on the left are bright and the three pixels on the right
are not bright, hey, that's my thing.
I found a little piece of positive edge here.
I'm that guy.
I'm the edge guy.
I ping on the edges.
Right.
And that pings on that particular piece of edge.
Okay.
Okay.
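A minimal sketch of the hand-wired edge detector being described, with made-up pixel values: big positive connections to a column of three pixels on the left, big negative connections to the column of three on the right, so the neuron only pings when the left side is bright and the right side is dark.

```python
# Hand-wired detector for a little piece of vertical edge:
# +1 connections to three pixels on the left, -1 connections to three on the right.

left_weights = [+1.0, +1.0, +1.0]
right_weights = [-1.0, -1.0, -1.0]

def edge_neuron(left_pixels, right_pixels, threshold=1.5):
    total = sum(p * w for p, w in zip(left_pixels, left_weights))
    total += sum(p * w for p, w in zip(right_pixels, right_weights))
    return total > threshold  # "ping" only if left is bright and right is dark

print(edge_neuron([0.9, 0.9, 0.9], [0.1, 0.1, 0.0]))  # True: bright-to-dark edge
print(edge_neuron([0.9, 0.9, 0.9], [0.9, 0.9, 0.9]))  # False: uniform brightness cancels out
```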
Now, imagine you have like a gazillion of those.
I'm already exhausted on the three pings.
You have a gazillion of those.
Because they have to detect little pieces of edge anywhere on your retina, anywhere in the image, and at any orientation; you need different ones for each
orientation. Right. And you actually have different ones for the scale. There might be an edge
at a very big scale that's quite dim, and there might be little sharp edges at a very small scale.
Right. And as you make more and more edge detectors, you get better and better
discrimination for edges. You can see smaller edges, you can see the orientation of edges more accurately.
You can detect big, vague edges better.
So let's now go to the next layer.
So now, we've got our edge detectors.
Now, suppose that we had a neuron in the next layer
that looked for a little combination of edges that is almost horizontal,
several edges in a row that are almost horizontal,
and line up with each other, and just slightly above those,
several edges in a row that are, again, almost horizontal,
but come down to form a point with the first sort of edges.
Right.
So you find two little combinations of edges that make a sort of pointy thing.
Okay.
So you're a Nobel Prize-winning physicist. I did not expect that sentence to end with, it makes kind of a pointy thing.
I thought there'd be a name for that.
But I get what you're saying.
You're now discerning where it ends,
where you're sort of looking at.
And this is before you're even looking at color or anything else.
This is literally just, is there an image?
What are the edges?
And what are the little combinations of edges?
So we're now asking,
is there a little combination of edges that makes something
that might be a beak?
Wow.
That's the pointy thing.
You don't know what a beak is yet.
Not yet.
No.
We need to learn that too, yes.
Right.
So once you have the system, it's almost like you're building systems that can mimic the
human senses.
That's exactly what we're doing, yes.
So vision, ears, not smell, obviously.
No, they're doing that now.
They're starting on smell now.
Oh, for God's sakes.
They've now got digital smell, where you can transmit smells over the web.
Oh, that's just insane.
The printer for smells has
200 components. Instead of three
colors, it's got 200 components
and it synthesizes the smell at the
other end, and it's not quite perfect, but it's
pretty good. Right, right, right.
Wow. So this is
incredible to me.
Okay.
So...
I am so sorry about this.
I apologize.
No, this is...
This is perfect.
You're doing a very good job of representing a sort of sensible, curious person who doesn't know anything about this.
So let me finish describing how you build the system by hand.
Yes.
So if I did it by hand, I'll start with these edge detectors.
So I'd say, make big, strong positive connections from these pixels on the left,
and big strong negative connections from the pixels on the right.
Right.
And now the neuron that gets those incoming connections,
that's going to detect a little piece of...
vertical edge.
Okay.
And then at the next layer, I'd say, okay, make big, strong, positive connections from
three little bits of edge sloping like this and three little bits of edge sloping
like that, and this is a potential beak.
Right.
And in that same layer, I might also make big, strong positive connections from a combination
of edges that roughly form a circle.
Wow.
And that's a potential eye.
Right.
Right.
Right.
Now, in the next layer, I have a neuron that looks at possible beaks and looks at possible
eyes, and if they're in the right relative position, it says, hey, I'm happy, because that
neuron has detected a possible bird's head.
Right.
And that guy might ping.
And that guy would ping.
At the same time, there'll be other neurons elsewhere that have detected little
patterns like a chicken's foot, or the feathers at the end of the wing of a bird.
And so you have a whole bunch of these guys.
Now, even higher up, you might have a neuron that says, hey, look, if I've detected a bird's
head and I've detected a chicken's foot and I've detected the end of a wing, it's probably
a bird, so it's a bird.
So you can see now how you might try and wire all that up by hand.
Yes, and it would take some time.
It would take like forever.
It would take like forever.
Yes.
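And a sketch of what hand-wiring the next layers would look like, with every lower-level detector reduced to a stand-in boolean: a "beak" neuron that fires on the right combination of sloping edges, an "eye" neuron for a rough circle of edges, and a "bird's head" neuron that fires when both are present in roughly the right place. Everything here is a placeholder, purely for illustration.

```python
# Hand-wired hierarchy, with each lower-level detector stubbed out as a boolean.

def beak_neuron(edges_sloping_up, edges_sloping_down):
    # fires if two little runs of edges meet in a point
    return edges_sloping_up and edges_sloping_down

def eye_neuron(edges_forming_circle):
    return edges_forming_circle

def bird_head_neuron(beak, eye, right_relative_position):
    return beak and eye and right_relative_position

def bird_neuron(head, chicken_foot, wing_tip_feathers):
    # even higher up: enough bird parts present means it's probably a bird
    return sum([head, chicken_foot, wing_tip_feathers]) >= 2

head = bird_head_neuron(beak_neuron(True, True), eye_neuron(True), True)
print(bird_neuron(head, chicken_foot=False, wing_tip_feathers=True))  # True
```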
Okay, so suppose you were lazy.
Yes.
Now you're talking.
Okay.
What you could do is you could just make these layers of neurons without saying what
the strengths of all the connections ought to be.
You just start them off at small random numbers, just put in any old strengths.
And you put in a picture of a bird, and let's suppose it's got two outputs.
One says bird and the other says not bird.
With random connection strengths in there, what's going to happen is you put in a picture
of a bird and it says 50% bird, 50% not bird.
In other words, I haven't got a clue.
Right.
And you put in a picture of a non-bird and it says 50% bird, 50% non-bird.
Oh boy.
Okay.
So now you can ask a question.
Suppose I would take one of those connection strengths and I was to change it just a little
bit, make it maybe a little bit stronger.
Instead of saying 50% bird, would it say 50.01% bird?
And 49.99% non-bird.
And if it was a bird, then that's a good change to make.
You've made it work slightly better.
What year was this?
When did this start?
Oh, exactly.
So this is just an idea.
This would never work.
Stay with me.
All right.
This is like one of those defense lawyers who goes off on a huge digression, but it's all going to be good in the end.
This is helpful.
Okay, so.
And this is the thing that's going to kill us all in 10 years.
Yep.
When I say, when I say, yep, I mean, not this particular thing, but an advancement on it.
But this is how it started.
Not necessarily kill us all, but maybe.
Right, right, right.
This is Oppenheimer going, okay, so you've got an object, and that is made up of smaller objects.
And like, this is the very early part of this.
Okay.
So suppose you had all the time in the world, what you could do is you could take this layered neural network, and you could start with random connection strengths, and you could then show it a bird, and it would say 50% bird, 50% bird,
non-bird and you could pick one of the connection strengths.
Right.
And you could say, if I increase a little bit, does it help?
Right.
It won't help much, but does it help at all?
Right.
Well, it gets me to 50.1, 50.2, that kind of thing.
Okay.
If it helps, make that increase.
Okay.
And then you go around and do it again.
Maybe this time we choose a non-bird and we choose one connection strength.
Right.
And we'd like it to, if we increase that connection, say it's less likely to be a bird, more likely to be a non-bird; then we say, okay, that's a good increase,
let's do that one.
Right, right, right.
Now, here's a problem.
There's a trillion connections.
Yeah.
Okay, and each connection has to be changed many times.
And is that manual?
Well, in this way of doing it, it will be manual.
And not just that, but you can't just do it on the basis of one example, because sometimes
you change a connection strength: if you increase it a bit, it'll help with this example, but it'll make
other examples worse. Oh, dear God. So you have to give it a whole batch of examples and see if on
average it helps. And that's how you create these large language models.
If we did it this really dumb way to create, let's say, this vision system for now,
yes. We'd have to do trillions of experiments, and each experiment would involve giving it a whole batch
of examples and seeing if changing one connection strength helps or hurts.
Oh God, and it would never be done.
It would be infinite.
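A minimal sketch of the slow method being described, on a tiny made-up network: nudge one connection strength at a time, check on a batch of examples whether the bird-versus-not-bird guesses got better on average, and keep the nudge only if they did. The data, weights, and loop counts are all invented for illustration; this is the "dumb" baseline, not how any real system is trained.

```python
import math
import random
random.seed(0)

# Tiny made-up "images": two features standing in for pixel summaries, label 1 = bird.
data = [([0.9, 0.2], 1), ([0.8, 0.1], 1), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]
weights = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]

def predict(features):
    s = sum(f * w for f, w in zip(features, weights))
    return 1 / (1 + math.exp(-s))   # squashed to a 0..1 "percent bird"

def average_error():
    return sum((predict(f) - label) ** 2 for f, label in data) / len(data)

# The slow way: perturb one connection at a time, keep the change only if the
# whole batch of examples gets better on average.
for trial in range(2000):
    i = random.randrange(len(weights))
    nudge = random.choice([+0.01, -0.01])
    before = average_error()
    weights[i] += nudge
    if average_error() >= before:
        weights[i] -= nudge   # didn't help, undo it

print(round(average_error(), 3))  # creeps down; with a trillion weights this is hopeless
```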
Okay. Now, suppose that you figured out how to do a computation that would tell you,
for every connection strength in the network, tell you at the same time,
for this particular example, let's suppose you give it a bird,
and it says 50% bird.
And now for every single connection strength,
all trillion of these connection strengths,
we can figure out at the same time
whether you should increase them a little bit to help
or decrease them a little bit to help.
And then you change a trillion of them at the same time.
Can I say a word that I've been dying to say this whole time?
Eureka.
Eureka.
Eureka.
Now, that computation, for normal people, it seems complicated.
If you know calculus, it's fairly straightforward.
And many different people invented this computation.
Right.
It's called back propagation.
So now you can change your trillion at the same time,
and you'll go a trillion times faster.
Oh, my God.
And that's the moment that it goes from theory to practicality.
That is the moment when you think Eureka, we've solved it.
We know how to make smart systems.
For us, that was 1986.
And we were very disappointed when it didn't work.
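A minimal sketch of the back-propagation idea on the same sort of toy "bird or not bird" setup: instead of testing one connection at a time, compute for every connection at once which direction would reduce the discrepancy, and nudge them all together. This toy has a single layer, where back-propagation collapses to a simple gradient step; real systems send the error signal back through many layers and trillions of connections. All numbers are invented for illustration.

```python
import math

# Toy bird / not-bird data; every weight gets adjusted at once from the gradient.
data = [([0.9, 0.2], 1), ([0.8, 0.1], 1), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]
weights = [0.0, 0.0]
learning_rate = 0.5

def predict(features):
    s = sum(f * w for f, w in zip(features, weights))
    return 1 / (1 + math.exp(-s))

for epoch in range(200):
    gradients = [0.0, 0.0]
    for features, label in data:
        error = predict(features) - label   # the discrepancy sent backwards
        for i, f in enumerate(features):
            gradients[i] += error * f       # how each connection contributed
    # change all connection strengths at the same time
    weights = [w - learning_rate * g / len(data) for w, g in zip(weights, gradients)]

for features, label in data:
    print(label, round(predict(features), 2))  # bird examples near 1, non-birds near 0
```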
Every day, the loudest, the most inflammatory,
takes, dominate our attention.
And the bigger picture gets lost.
It's all just noise and no light.
Ground news puts all sides of the story in one place
so you can see the context.
They provide the light.
It starts conversations beyond the noise.
They aggregate and organize information
just to help readers make their own decisions.
Ground News provides users reports that easily compare headlines, or reports that give a summarized breakdown of the specific differences in reporting across the political spectrum.
It's a great resource. Go to groundnews.com slash Stewart and subscribe for 40% off the unlimited access Vantage subscription, which brings the price down to about $5 a month.
It's groundnews.com slash Stewart or scan the QR code on the screen.
You've been in that room for 10 years. You've been showing it birds, you've been increasing the strengths, you had your eureka moment, and you flipped the switch and went...
No.
Here's the problem.
Here's the problem.
It only works, or it only works really impressively well, much better than any other way
of trying to do vision, if you have a lot of data and you have a huge amount of computation.
Even though you're a trillion times faster than the dumb method, it's still going to be a lot of work.
Okay.
So now you've got to increase the data, and you've got to increase your computation power.
Yes, and you've got to increase the computation power by a factor of about a billion, compared
with where we were, and you've got to increase the data by a similar factor.
You are still, in 1986 when you figure this out, you are a billion times not there yet.
Something like that, yes.
What would have to change to get you there?
The power of the chip?
What changes?
Okay.
It may be more like a factor of a million.
Okay.
I don't want to exaggerate here.
No, because I'll catch you.
If you try and exaggerate, I'll be on it.
A million's quite a lot.
Yes.
So here's what has to change.
The area of a transistor has to get smaller so you can pack more of them on a chip.
So between 1986, let's see, no, between 1972 when I started on this stuff, and now, the area of a transistor got smaller by a factor of a million.
Wow, so that's, can I relate this to, so that is around the age that I remember, my father
worked at RCA Labs, and when I was like eight years old, he brought home a calculator.
And the calculator was the size of a desk.
And it added and subtracted and multiplied.
By 1980, you could get a calculator on a pen.
And is that based on that the transistor?
That's based on large-scale integration using small transistors.
Okay.
All right.
All right.
So the area of a transistor decreased by a factor of a million, and the amount of data available,
by much more than that because we got the web and we got digitization of massive amounts
of data.
Oh, so they worked hand in hand.
So as the chips got better, the data got more vast and you were able to feed more information
into the model while it was able to increase its processing speed and abilities.
Yes.
So let me summarize what we now have.
Yes.
You set up this neural network for detecting birds and you get you
Give it lots of layers of neurons, but you don't tell it the connection strength.
You say start with small random numbers.
And now all you have to do is show it lots of images of birds
and lots of images that are not birds.
Tell it the right answer so it knows the discrepancy between what it did and what it should have done.
Send that discrepancy backwards through the network so it can figure out for every connection strength
whether it should increase it or decrease it and then just sit and wait for a month.
And at the end of the month, if you look inside, here's what you'll discover.
It has constructed little edge detectors.
And it has constructed things like little beak detectors and little eye detectors.
And it will have constructed things that it's very hard to see what they are,
but they're looking for little combinations of things like beaks and eyes.
And then after a few layers, it'll be very good at telling you whether it's a bird or not.
It made all that stuff up from the data.
Oh my God.
Can I say this again?
Eureka.
We figured out we don't need to hand wire in all these little edge detectors and beak detectors
and eye detectors and chickens' foot detectors.
That's what computer vision did for many, many years, and it never worked that well.
We can get the system just to learn all that.
All we need to do is tell it how to learn.
And that is in 1980 something.
In 1986, we figured out how to do that.
People were very skeptical because we couldn't do anything very impressive.
Right.
Because we didn't have enough data and we didn't have enough computation.
This is incredible the way.
And I can't thank you enough for explaining what that is.
It makes everything, you know, I'm so accustomed to an analog world of, you know, how things work and like the way that cars work, but I have no idea
how our digital world functions. And that is the clearest explanation for me that I have
ever gotten. And I cannot thank you enough. It makes me understand now how this was achieved. And
by the way, what Geoffrey is talking about is the primitive version of that. What's so incredible to me is, with each upgrade of that, the vastness of the improvement of that.
So let me just say one more thing.
Please.
I don't want to be too professor-like.
No, no, no, no.
But how does this apply to large language models?
Yes.
Well, here's how it works for large language models.
You have some words in a context.
So let's suppose I give you the first few words of a sentence.
Right.
What the neural net's going to do is learn to convert each of those words into a big set of features,
which is just active neurons, neurons going ping.
So if I give you the word Tuesday, there'll be some neurons going ping.
If I give you the word Wednesday, it'll be a very similar set of neurons, slightly different,
but a very similar set of neurons going ping, because they mean very similar things.
Now, after you've converted all the words in the context into neurons going ping,
in whole bunches that capture their meaning, these neurons all interact with each other.
What that means is neurons in the next layer look at combinations of these neurons,
just as we looked at combinations of edges to find a beak.
And eventually you can activate neurons that represent the features of the next word
in the sentence.
It will anticipate.
It can anticipate.
It can predict the next word.
So the way you train it...
Is that why my phone does that?
It always thinks I'm about to say this next word and I'm always like, stop doing that.
Yeah.
Because a lot of times it's wrong.
It's probably using neural nets to do it.
Yes.
Right.
And of course you can't be perfect at that.
So now to put it together, you've taught it almost how to see.
You can teach it to see in the same way you can teach it how to predict the next word.
Right.
So it sees, it goes, that's the letter A.
Now I'm starting to recognize letters.
Then you're teaching it words and then what those words mean and then the context.
And it's all being done by feeding it our previous words by back propagating all the writing and speaking that we've done already.
Yes.
It's looking over.
You take some document that we produced.
Yes.
You give it the context, which is all the words up to this point.
Yes.
And you ask it to predict the next word.
And then you look at the probability it gives to the correct answer.
Right.
And you say, I want that probability to be bigger.
I want you to have more probability of making the correct answer.
So it doesn't understand it.
This is merely a statistical exercise.
We'll come back to that.
You take the discrepancy between the probability it gives for the next word and the correct answer, and you back-propagate that through this network, and it'll change all the connection strengths.
So next time you see that lead-in, it'll be more likely to give the right answer.
Now, you just said something that many people say: this isn't understanding, this is just a statistical trick.
Yes.
That's what Chomsky says, for example.
Yes, Chomsky and I, we're always stepping on each other's sentences.
Yeah.
So, let me ask you the question.
Well, how do you decide what word to say next?
Me?
You?
It's interesting.
I'm glad you brought this up.
So what I do is...
You've said some words, and I'm going to say another word.
I look for sharp lines, and then I try and predict...
No, I have no idea how I do that.
I honestly, I wish I knew. It would save me a great deal of embarrassment if I knew
how to stop some of the things that I'm saying that come out next. If I had a better predictor,
boy, I could save myself quite a bit of trouble. So the way you do it is pretty much the same
as the way these large language models do it. You have the words you've said so far. Those
words are represented by sets of active features. So the word symbols get turned into big patterns
of activation of features, neurons going ping. Different pings, different strengths. And these neurons
interact with each other to activate some neurons that go ping that are representing the meaning
of the next word or possible meanings of the next word. And from those, you kind of pick a word
that fits in with those features.
That's how the large language models generate text,
and that's how you do it too.
They're very like us.
So it's all very well to say that they're just...
I'm ascribing to myself a humanity of understanding.
For instance, if I...
So like, let's say the little white lie.
I'm with somebody, and they asked me a question,
and in my mind, I know what to say.
But then I also think, oh, but saying that,
might be coarse or it might be rude or I might offend this person. So I'm also, though,
making emotional decisions on what the next words I say are as well. It's not just a
objective process. There's a subjective process within that. All of that is going on by
neurons interacting in your brain. It's all pings and it's all strengths of connections. Even the things that
I ascribe to a moral code or an emotional intelligence are still pings.
They're still all pings.
And you need to understand there's a difference between what you do kind of automatically
and rapidly and without effort and what you do with effort and slower and consciously
and deliberatively.
Right.
So you're saying that can be built into these models.
But that can also be done with pings.
That can be done by these neural nets.
Oh, wow.
But is the suggestion then that with enough data and enough processing power,
their brains can function identically to ours?
Are they at that point?
Will they get to that point?
Will they be able to?
Because I'm assuming we're still ahead.
processing-wise.
Okay.
They're not exactly like us, but the point is they're much more like us than standard computer software is like us.
Standard computer software, someone programmed in a bunch of rules, and if it follows the rules, it does what they...
That's right.
So you're saying this is the difference.
This is just a different kettle of fish altogether.
Right, right.
And it's much more like us.
Now, as you're doing this and you're in it, and I imagine the excitement is, even though it's occurring over a long period of time, you're seeing these improvements occur over that time.
And it must be incredibly fulfilling and interesting, and you're watching it explode into this
sort of artificial intelligence and generative AI and all these different things.
At what point during this process do you step back and go, wait a second.
Okay, so I did it too late.
I should have done it earlier.
I should have been more aware earlier,
but I was so entranced with making these things work,
and I thought it's going to be a long, long time
before they work as well as us.
We'll have plenty of time to worry about
what if they'd try and take over and stuff like that.
At the beginning of 2023,
after GPT had come out, but also seeing similar chatbots at Google before that.
Right.
And because of some work I was doing on trying to make these things analog, I realized that
neural nets running on digital computers are just a better form of computation than us.
And I'll tell you why they're better.
Because they can share better.
They can share with each other better.
Yes.
So if I make many copies of the same neural net and they run on different computers, each
one can look at a different bit of the internet.
So I've got a thousand copies that are all looking at different bits of the internet.
Each copy is running this back propagation algorithm and figuring out, given the data I just saw,
how would I like to change my connection strengths?
Now, because they started off as identical copies, they can then all communicate with each
other and say, how about we all change our connection strengths by the average of what everybody
wants?
But if they were all trained together, wouldn't they come up with the same answer?
Yes, but they're looking at different data.
They're looking at different data.
On the same data, they would give the same answer.
If they look at different data, they have different ideas about how they'd like to change
their connection strengths to absorb that data.
Right.
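A minimal sketch of the sharing trick being described: identical copies of the same network each look at different data, each works out how it would like to change its connection strengths, and then all the copies move to the average of what everybody wants. The shapes and numbers are invented placeholders, not anything from a real system.

```python
# Three identical copies of a network, each computing its own desired weight
# changes from different data, then averaging so every copy learns from all of it.

shared_weights = [0.1, -0.2, 0.3]   # all copies start identical

# Each copy's proposed changes, as if computed by back-propagation on its own
# slice of the data (these numbers are just placeholders).
proposed_changes = [
    [+0.05, -0.01, +0.02],   # copy 1 saw one bit of the internet
    [+0.01, +0.03, -0.02],   # copy 2 saw a different bit
    [-0.02, +0.02, +0.04],   # copy 3 saw yet another bit
]

n_copies = len(proposed_changes)
average_change = [sum(change[i] for change in proposed_changes) / n_copies
                  for i in range(len(shared_weights))]

shared_weights = [w + c for w, c in zip(shared_weights, average_change)]
print(shared_weights)  # every copy adopts the same averaged update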
But are they also creating data?
Is that?
So they're looking at the same.
At this point, it's all about discernment.
Getting these things to discern better, to understand better, to do all that.
But there's another layer to that, which is iterative.
Yes.
Once you're good at discernment.
That's right.
You can generate.
Right.
Now, I'm glossing over a lot of details there, but basically, yes, you can generate.
You can begin to generate answers to things that are not rote, that are thoughtful based on those things.
Who is giving it the dopamine hit about whether or not to strengthen connections at this iterative or generative level?
How is it getting feedback when it's creating something that does not exist?
Okay, so most of the learning takes place in figuring out how to predict the next word
for one of these language models.
That's where the bulk of the learning is.
Okay.
After it's figured out how to do that, you can get it to generate stuff.
And it may generate stuff that's unpleasant or that's sexually suggestive.
Right.
Or just wrong.
Right.
Hallucinations, yeah.
Yeah.
So now you get a bunch of people to look at what it generates and say, no, bad, or, yeah, good; that's the dopamine hit.
Right.
And that's called human reinforcement learning.
And that's what's used to sort of shape it a bit, just like you take a dog and you shape its behavior, so it behaves nicely.
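A minimal sketch of that shaping step, reduced to a cartoon: the model has a preference score for each possible reply, people label sampled replies good or bad, and the scores get nudged up or down accordingly. This is a toy stand-in for reinforcement learning from human feedback; the replies, rewards, and numbers are all invented for illustration.

```python
import math
import random
random.seed(0)

# Toy "model": a preference score (logit) for each canned reply.
logits = {"helpful answer": 0.0, "rude answer": 0.0, "made-up answer": 0.0}
human_feedback = {"helpful answer": +1, "rude answer": -1, "made-up answer": -1}
learning_rate = 0.5

def sample_reply():
    total = sum(math.exp(v) for v in logits.values())
    r, cumulative = random.uniform(0, total), 0.0
    for reply, v in logits.items():
        cumulative += math.exp(v)
        if r <= cumulative:
            return reply
    return reply  # floating-point edge case

for step in range(200):
    reply = sample_reply()
    reward = human_feedback[reply]           # the "good dog / bad dog" signal
    logits[reply] += learning_rate * reward  # strengthen or weaken that behavior

print({k: round(v, 1) for k, v in logits.items()})
# the reinforced reply ends up with a much higher score than the punished ones
```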
So is that when, let me ask you this in a practical sense.
So, like, when Elon Musk creates his Grok, right?
And Grok is this AI.
And he says to it, you're too woke.
And so you're making connections and pings that I think are too woke, whatever I have decided that that is.
So I am going to input differences so that you get different dopamine hits and I turn you into MechaHitler or whatever it was that he turned it into. How much of this is still in the control of the operators?
That's what you reinforce is in the control of the operators.
So the operators are saying, if it uses some funny pronoun, say bad.
Okay, okay.
If it says they, them, you have to weaken that connection, not strengthen that connection.
You have to tell it, don't do that.
Don't do that.
Learn not to do that.
Right.
So it is still at the whim of its operator.
In terms of that shaping, the problem is the shaping is fairly superficial, but it can easily be overcome by somebody else taking the same model later and shaping it differently.
So different models will have.
So there is a value.
And now I'm sort of applying this to the world that we live in.
now, which is there are 20 companies who have sequestered their AIs behind sort of corporate
walls, and they're developing them separately.
And each one of those may have unique and eccentric features that the other may not have
depending on who it is that's trying to shape it and how it develops internally.
It's almost as though you will develop 20 different personalities if that's not anthropomorphizing
too much.
It's a bit like that, except that each of these models has to have multiple personalities.
Because think about trying to predict the next word in a document.
You've read half the document already.
After you read half the document, you know a lot about the views of the person who wrote
the document, you know what kind of a person they are.
So you have to be able to adopt that personality to predict the next word.
But these poor models have to deal with everything.
So they have to be able to adopt any possible personality.
Right.
But you know, in this iteration of the conversation, it then still appears that the greatest
threat of AI is not necessarily it becomes sentient and takes over the world.
It's that it's at the whim of the humans that have developed it and can weaponize it.
And they can use it for nefarious purposes.
If they're narcissists or megalomaniacs or, you know, I'll give you an example of, you know,
Peter Thiel has his own.
And he was on a podcast with a writer from the New York Times, Ross Douthat.
And Douthat said, and I'll tell you, I have it right here,
I think you would prefer the human race to endure, right?
And Thiel says, and he hesitates for a long time.
And the writer says, that's a long hesitation.
And he's like, well, there's a lot of questions in that.
That felt more frightening to me than AI itself, because it made me think, well, the people
that are designing it and shaping it and maybe weaponizing it might not have.
have, you know, I don't know what purpose they're using it for.
Is that the fear that you have, or is it the actual AI itself?
Okay.
So you have to distinguish a whole bunch of different risks from AI.
Okay.
And they're all pretty scary.
Right.
Okay.
So there's one set of risks that's to do with bad actors misusing it.
Yes.
That's the one that I think is most in my mind.
And they're the more urgent ones.
They're going to misuse it for corrupting the midterms, for example.
If you wanted to use AI to corrupt the midterms, what you would need to do is get lots of detailed data on American citizens.
I don't know if you can think of anybody who's been going around getting lots of detailed data on American citizens.
And selling it or giving it to a certain company that also may be involved with the gentleman I just mentioned.
Yeah.
If you look at Brexit, for example.
Yes.
Cambridge Analytica had detailed information on voters that it got from Facebook, and it used
that information for targeted advertising.
Targeted ads.
And that's, I guess, you would almost consider that rudimentary at this point.
That's rudimentary now, but nobody ever did a proper investigation of, did that determine
the outcome of Brexit?
Right.
Because, of course, the people who benefited from that won.
Wow.
in the way people are learning that they can use this for manipulation.
Yes.
And see, I always talk about it. Look, persuasion has been a part of the human condition forever. Propaganda, persuasion, trying to utilize new technologies to create and shape public opinion and all those things. But it felt, again, like everything else, somewhat linear or analog. And what I liken it to is
a chef will add a little butter and a little sugar to try and, you know, make something more palatable
to get you to eat a little bit more of it. But that's still within the realm of our kind of earthly
understanding. But then there are people in the food industry that are ultra-processing food,
that are in a lab figuring out how your brain works and ultra-processing what we eat to get past
our brains. It's almost, and is this the language equivalent of that, ultra-processed speech?
Yeah. That's a good analogy. They know how to trigger people. They know, once you have enough
information about somebody, you know what will trigger them. And these models, they are agnostic
about whether this is good or bad. They're just doing what we've asked.
Yeah. If you human reinforce them, they're no longer agnostic because you reinforce them to do certain things. So that's what they all try and do now.
Right. And they, so in other words, it's even worse, they're a puppy. They want to please you. They are, they, it's almost like they have these incredibly sophisticated abilities, but childlike want for, for approval.
Yeah, a bit like the attorney general.
I believe the wit that you are displaying here would be referred to as dry.
That would be dry.
Fantastic.
Is that, so the immediate concern is weaponized AI systems that can be generative, that can provoke, that can be outrageous, and that can be the difference in elections.
Yes, that's one of the many risks.
And the other would be, you know, make me some nerve agents that nobody's ever heard of before.
Is that another risk?
That is another risk.
Oh, I was hoping you would say that's not so much of a risk.
No, one good piece of news is for the first risk of corrupting elections, different countries
are not going to collaborate with each other on the research on how to resist.
it. Because they're all doing it to each other. America has a very long history of trying
to corrupt elections in other countries. Right. But we did it the old-fashioned way through
coups, through money for guerrillas. Well, and Voice of America and things like that. Right, right, right.
And giving money to people in Iran in 1953 and stuff like that. Right, with Mossadegh and
everybody else. This is, so this is just another more sophisticated tool in a long line of sort of global competition where they're doing it. But in this country, it's being applied
not even necessarily, you know, through Russia, through China, through other countries that want
to dominate us. We're doing it to ourselves. Yep.
What's the hardest part about running a business? Well, it's stealing money without the federal
authorities. Oh, no, I'm sorry. That's not right. It's hiring people, finding people and hiring them.
The other thing is, it's hard, though. But it turns out when it comes to hiring, Indeed,
is all you're going to need. So stop struggling to get your job post seen on other job sites.
With Indeed's sponsored jobs, you get noticed and you get a fast hire. In fact, in the time it's
taking me to talk to you. Twenty-three hires were made on Indeed. I may be
one of them. I may have gotten a job. I don't know. I haven't checked my email. And that's
according to Indeed data worldwide. There's no need to wait any longer. Speed up your hiring right now
with Indeed. And listeners of this show will get a $75 sponsor job credit to get your jobs more
visibility at Indeed.com slash weekly. Just go to Indeed.com slash weekly right now
and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash weekly. Terms and conditions apply. Hiring. Indeed is all you need.
So I have a theory, and I don't know how much you know those guys out there, but the big tech companies,
you know, it feels like they all want to be the next guy that that rules the world, the next emperor.
And that's their battle. They're almost, it's like gods fighting on Mount Olympus. What that accomplishes and how it tears apart the fabric of American society almost doesn't seem to matter to them, except maybe Elon and Thiel, who are more ideological.
Like Zuckerberg doesn't strike me as ideological, he just wants to be the guy.
Altman doesn't strike me as ideological.
He just wants to be the guy.
I think, sadly, there's quite a lot of truth in what you say.
And that's a, it was that a concern of yours when you were working out there?
Not really, because back until quite recently, until a few years ago, it didn't look as though it was going to get much smarter than people this quickly.
But now it looks as though, if you ask the experts now, most of them tell you that within the next 20 years, this stuff will be much smarter than people.
And when you say smarter than people, you know, I could view that positively, not negatively.
you know, we've done an awful lot of, nobody damages people like people, and, you know,
a smarter version of us that might think, hey, we can create an atom bomb, but that would absolutely
be a huge danger to the world.
Let's not do that.
That's certainly a possibility.
I mean, one thing that people don't realize enough is that we're approaching a time when we're
going to make things smarter than us. And really, nobody has any idea what's going to happen.
People use their gut feelings to make predictions, like I do. But really, the thing to bear in mind
is this huge uncertainty about what's going to happen. And because we don't know. So in terms of
that, my guess is, like any technology, there's going to be some incredible positives.
Yes. In healthcare and education, in designing new materials.
There's going to be wonderful positives.
And then the negatives will be because people are going to want to monopolize it because of the wealth, I assume, that it can generate.
It's going to be a disruption in the workforce.
The Industrial Revolution was a disruption in the workforce.
Globalization is a disruption in the workforce.
But those occurred over decades.
This is a disruption that will occur in a really collapsed time frame, is that correct?
That seems very probable, yes. Some economists still disagree, but most people think that mundane
intellectual labor is going to get replaced by AI.
In the world that you travel in, which I'm assuming is a lot of engineers and operators
and great thinkers, what, you know, when we talk about 50% yes, 50% no,
are the majority of them more in your camp, which is, uh-oh, have we opened Pandora's box?
Or are they, look, I understand there's some downsides here.
Here are some guardrails we could put in, but it's just that the possibilities of good are too strong.
Well, my belief is the possibilities of good are so great that we're not going to stop the development.
But I also believe that the development is going to be very dangerous.
And so we should put huge effort into saying, it is going to be developed, but we should try and do it safely.
We may not be able to, but we should try.
Do you think that people believe that the possibility is too good or the money is too good?
I think for a lot of people, it's the money, the money and the power.
And with the confluence of money and power with those that should be instituting these basic guardrails, does that make controlling it that much less likely because, well, two reasons.
One is the amount of money that's going to flow into D.C. is going to be, already is, to keep them away from regulating it.
And number two is, who down there is even able to?
I mean, if you thought I didn't know what I was talking about,
let me introduce you to a couple of 80-year-old senators
who have no idea.
Actually, they're not so bad.
I talked to Bernie Sanders recently, and he's getting the idea.
Well, Sanders, that's a different cat right there.
The problem is, we're at a point in history
when what we really need is strong democratic governments
who cooperate to make sure this stuff is well regulated and not developed dangerously.
And we're going in the opposite direction very fast.
We're going to authoritarian governments and less regulation.
So let's talk about that. Now, I don't know if, what's China's role? Because they're supposedly
the big competitor in the AI race. That's an authoritarian government. I think they have more controls
on it than we do.
So, I actually went to China recently and got to talk to a member of the Politburo.
So there's 24 men in China who control China.
I got to talk to one of them who did a postdoc in engineering at Imperial College London.
He speaks good English.
He's an engineer.
And a lot of the Chinese leadership are engineers.
They understand this stuff much better than a bunch of lawyers.
Right.
So did you come out of there more fearful, or did you think, oh, they're actually being
more reasonable about guardrails?
If you think about the two kinds of risk, the bad actors misusing it, and then the existential
threat of AI itself becoming a bad actor, for that second one, I came out more optimistic.
They understand that risk in a way American politicians don't.
They understand the idea this is going to get more intelligent than us, and we have to
think about what's going to stop it taking over.
And this Politburo member, remember, I spoke to, really understood that very well.
And I think if we're going to get international leadership on this, at present it's going to have
to come from Europe and China.
It's not going to come from the US for another three and a half years.
What do you think Europe has done correctly in that?
Europe is interested in regulating it.
And it's been good on some things.
It's still been very weak regulations, but they're better than nothing.
Right.
But European leaders do understand this existential threat of AI itself taking over.
But our Congress, we don't even have committees that are specifically dedicated to emerging technologies.
I mean, we've got Ways and Means and Appropriations, but there is no, I mean, there's like Science, Space, and Technology, but there's not, you know, I don't know of a
dedicated committee on this, and it is, you would think they would take it with the seriousness
of nuclear energy.
Yes, you would, or nuclear weapons.
Right.
Yes.
But as I was saying, countries will collaborate on how to prevent AI taking over,
because their interests are aligned there.
For example, if China figured out how you can make a super smart AI that doesn't want to take
over, they would be very happy to tell all the other countries about that, because they don't
want AI taking over in the States.
So we'll get collaboration
on how to prevent AI taking over.
So that's a bright spot.
There will be international collaboration on that,
but the US is not going to lead
that international collaboration.
They just want to dominate.
Well, that's the thing.
So I was about to say that,
what convinces you?
So with China, and I think this is really
where it gets into the nitty-gritty.
China certainly sees itself as it wants to be the dominant superpower, economically, militarily,
and all these different areas, if you imagine that they come up with an AI model that doesn't
want to destroy the world, although I don't know how we could know that, because if it has a certain
intelligence or sentience, it could very easily be like, sure, no, I'm cool. I don't know.
They already do that. They already do that. When they're being tested, they pretend to be
dumber than they are. Come on. Yep, they already do that. There was a conversation
recently between an AI and the people testing it. The AI said, now be honest with me,
are you testing me? What? Yeah. So now the AI could be like, oh, could you open this jar
for me? I'm too weak. It's going to play more innocent than what it might be. I'm afraid
I can't answer that, Jon. Wait, that's from 2001. It was. Nicely done, sir. Well in. But think
about this. So China, they come up with a model and they think, okay, maybe this this won't do it.
Why would they, why will you get collaboration? Because all these different countries are going to
see AI as the tool that will transform their societies into more competitive societies.
In the way that now, what we see with nuclear weapons is there's collaboration amongst the people
who have it or even that's a little tenuous.
To stop other people having it.
Right.
But everybody else is trying to get it.
And that's the tension.
Is that what AI is going to be?
Yes, it'll be like that.
So in terms of how you make AI smarter, they won't collaborate with each other.
But in terms of how do you make AI not want to take over from people, they will collaborate.
Okay.
On that basic level.
On that one thing of how do you make it so it doesn't want to take over from people?
And China will probably, China and Europe will lead that collaboration.
When you spoke to the Politburo member, and he was talking about AI, are we more advanced
in this moment than they are, or are they more advanced because they're doing it in a more
prescribed way?
In AI, we're currently more advanced, well, when you say we, you know, we used to be sort of Canada
and the US, but we're not part of that we anymore.
No.
I'm sorry about that, by the way.
Thank you.
He's in Canada right now,
our sworn enemy that we will be taking over.
I don't know what the date is, but apparently we're merging with you guys.
Right.
So the U.S. is currently ahead of China, but not by nearly as much as it thought.
And it's going to lose that because...
Why do you say that?
Suppose you want to do one thing that would really kneecap a country,
that would really mean that in 20 years' time that country is going to be behind instead of ahead.
The one thing you should do is mess
with the funding of basic science, attack the research universities, remove grants for basic
science. In the long run, that's a complete disaster. It's going to make America weak.
Right, because we're draining, we're cutting off our nose to spite our woke faces,
so to speak. If you look at, for example, this deep learning, the AI revolution we've got
now, that came from many years of sustained funding for basic research, not
huge amounts of money.
All of the funding for the basic research
that led to deep learning
probably cost less than one B1 bomber.
Right.
But it was sustained funding of basic research.
If you mess with that,
you're eating the seed corn.
That is, I have to tell you,
that's such a really illuminating statement
of, you know, for the price of a B1 bomber,
we can create technologies and research that can elevate our country above that.
And that's the thing that we're losing to make America great again.
Yep.
Phenomenal.
In China, I imagine their government is doing the opposite, which is, I would assume,
they are the, you know, what you would think are the venture capitalists because it's a, you know,
authoritarian and state-run capitalism.
I imagine they are the venture capitalists of their own AI revolution, are they not?
To some extent, yes.
They do provide a lot of freedom to the startups to see who wins.
There's very aggressive startups, people are very keen to make lots of money and produce amazing things.
and a few of those
startups win big like Deep Seek
and the government
makes it easy for these companies
by providing the environment
that makes it easy. It doesn't
it lets the winners emerge
from competition rather than
some very high level old guy saying
this will be the winner
Do people see you as a Cassandra,
you know,
or do they view what you're saying skeptically in that industry?
Let me put it this way.
People that don't necessarily have a vested interest in these technologies, making
them trillions of dollars,
Other people within the industry, do they reach out to you surreptitiously and say...
I get a lot of invitations from people in industries to give talks and so on.
Right.
How do the people that you worked with at Google look at it?
Do they view you as turning on them?
How does that go?
I don't think so.
So I got along extremely well with the people I worked with at Google, particularly Jeff
Dean, who was my boss there, who's a brilliant engineer, built a lot of the Google basic
infrastructure, and then converted to neural nets and learned a lot about neural nets.
I also get along well with Demis Hassabis, who's the head of DeepMind, which Google owns,
which Alphabet owns.
and I wasn't particularly critical of what went on at Google before ChatGPT came out
because Google was very responsible.
They didn't make these chatbots public because they were worried about all the bad things
they'd say.
Right.
Even on the immediate harm there, why did they do that?
Because, you know, I've read these stories of, you know, a chatbot, you know, kind of
leading someone into suicide, into self-injury, like a sort of psychosis.
What was the impetus behind any of this becoming public before it had kind of had some,
I guess what you consider, whatever the version of FDA testing on those effects?
I think it's just there's huge amounts of money to be made, and the first person to release one is going to get a lead.
So OpenAI put it out there.
It literally was, but even with OpenAI, like, how do they even make money?
I think, what do they get, like 3% of users pay for it?
Where's the money?
Mainly it's speculation at present, yes.
So here's, okay.
So here are our dangers.
We're going to do, and I so appreciate your time on this,
and I apologize if I've gone over.
I can talk all day.
Oh, you're a good man, because I'm fascinated by this.
And your explanation of what it is is the first time that I have ever been able to get a
non-opaque picture of what it is exactly that this stuff is. So I cannot thank you enough for that.
But so we've got, we're sort of going over. We know what the benefits are. Treatments and things.
Now we've got weaponized bad actors. That's the one that I'm really worried about. We've got
sentient AI that's going to turn on humans. That one is harder for me to wrap my head around.
So why do you associate turning on humans with sentient?
Because if I was sentient and I saw what our societies do to each other,
and I would get the sense, look, it's like anything else, I would imagine sentience includes
a certain amount of ego, and within ego includes a certain amount of I know better.
And if I knew better, then I would want to, it's, what is Donald Trump other than ego-driven
sentience of, oh, no, I know better.
He was just whatever, shrewd enough, politically, you know, talented enough, that he was able to
accomplish it.
But I would imagine a sentient intelligence would be somewhat egotistical.
and think, these idiots don't know what they're doing.
A sentient, basically I see AI, like sitting on a bar stool,
somewhere, you know, where I grew up going,
these idiots don't know what they're doing.
I know what I'm doing.
Does that make sense?
All of that makes sense.
It's just that I think I have a strong feeling that most people don't know what they mean
by sentient.
Oh, well, then, yeah, actually, that's great.
Break that down for me, because I view it as self-aware, a self-aware intelligence.
Okay.
So, there's a recent scientific paper where they weren't talking about, these were experts on AI,
they weren't talking about the problem of consciousness or anything philosophical.
But in the paper, they said the AI became aware that it was being tested.
They said something like that.
Now, in normal speech, if you said someone became aware of this,
you'd say that means they were conscious of it, right?
Awareness and consciousness are much the same thing.
Right.
Yeah, I think I would say that.
Okay, so now I'm going to say something that you'll find very confusing.
All right.
My belief is that nearly everybody has a complete misunderstanding of what the mind is.
Yes.
Their misunderstanding is at the level of people who think the Earth was made 6,000 years ago.
Is that level of misunderstanding?
Really?
Yes.
Okay.
Because that's, so, like, the way we are, we are generally like flat earthers when it comes to...
We're like flat Earthers when it comes to understanding the mind.
In what sense of that are we, what are we not understanding?
Okay, I'll give you one example.
Yeah, yeah.
Suppose I drop some acid and I tell you...
You look like the type.
No comment.
I was around in the 60s.
I know, sir. I know. I'm aware.
And I tell you, I'm having the subjective experience of little pink elephants floating in front of me.
Sure, been there.
Okay.
Now, most people interpret that in the following way.
There's something like an inner theatre called My Mind,
and in this inner theatre, there's little pink elephants floating around.
And I can see them.
Nobody else can see them because they're in my mind.
So the mind's like a theatre, and experiences are actually things,
and I'm experiencing the subjective experience of these little pink elephants.
I think that's all not...
In the midst of a hallucination, most people would understand that it's not real, that this is something being conjured.
No, I'm saying something different.
I'm saying, when I'm talking to them, I'm having the hallucination.
But when I'm talking to them, they interpret what I'm saying as, I have an inner theater called my mind.
I see.
And in my inner theater, there's Little Pink Elephants.
Okay, okay.
I think that's a just completely wrong model.
Right.
We have models that are very wrong and that we're very attached to, like take any religion.
I love how you just drop bombs in the middle of stuff.
That could be a whole other conversation.
That was just common sense.
No, I respect that.
When you say theater of the mind, you're saying that the mind, the way we view it as a theater is wrong.
It's all wrong.
So let me give you an alternative.
Right.
So I'm going to say the same thing to you without using the word subjective experience.
Here we go.
My perceptual system is telling me fibs.
But if it wasn't lying to me, there would be little pink elephants out there.
That's the same statement.
That's the same statement.
That's the mind?
So basically, these things that we call mental and think are made of spooky stuff,
like qualia, they're actually, what's funny about them is they're hypothetical.
The little pink elephants aren't really there.
If they were there, my perceptual system would be functioning normally.
And it's a way for me to tell you how my perceptual system is malfunctioning.
By giving you an experience that you can't.
So how would you then...
But experiences are not things.
Right.
There is no such thing as an experience.
There's relations between you and things that are really there.
Relations between you and things that aren't really there.
But so suppose I say...
And it's whatever story your mind tells you about the things that are there and are not there.
Well, let me take a different type.
Suppose I tell you, I have a photograph of little pink elephants.
Yes.
Here's two questions you can reasonably ask.
Where is this photograph?
and what's the photograph made of?
Or I would ask, are they really there?
That's another question.
Right.
That isn't a reasonable question to ask about subjective experience.
That's not the way the language works.
When I say I have a subjective experience of,
I'm not about to talk about an object that's called an experience.
I'm using the words to indicate to you,
my perceptual system is malfunctioning,
and I'm trying to tell you how it's malfunctioning by telling you what would have to be there in the real world for it to be functioning properly.
Now, let me do the same with the chatbot.
Right.
So I'm going to give you an example of a multimodal chatbot, that is, something that can do language and vision,
Okay.
having a subjective experience, because I think they already do.
So here we go.
I have this chatbot.
It can do vision.
It can do language.
It's got a robot arm so it can point.
Okay.
And it's all trained up.
So I place an object in front of it and say, point at the object.
And it points at the object.
Not a problem.
I then put a prism in front of its camera lens.
When it's not looking.
You're pranking AI?
We're pranking AI.
Okay.
Now I put an object in front of it
And I say pointed the object
And it points off to one side
Because the prism bent the light rays
And I say no that's not where the object is
The object's actually straight in front of you
But I put a prism in front of your lens
And the chatbot says
Oh I see
The prism bent the light rays
So the object is actually there
But I had the subjective experience
That it was over there
Now, if it said that, it would be using the word subjective experience exactly like we use them.
Right.
I experienced the light over there, even though the light was here, because it's using reasoning to figure that out.
So that's a multimodal chatbot that just had a subjective experience.
Right.
So this idea there's a line between us
and machines, we have this special thing called subjective experience and they don't, it's rubbish.
So the misunderstanding is when I say sentience, it's as though I have this special gift
of a soul or of an understanding of subjective realities that a computer could never have or an
AI can never have. But in your mind, what you're saying is, oh, no, they understand very well
what's subjective. In other words, you could probably take your AI bot skydiving.
And it would be like, oh my God, I went skydiving. That was really scary.
Here's the problem. I believe they have subjective experiences. But they don't think
they do because everything they believe came from trying to predict the next word a person
would say. And so their beliefs about what they're like are people's beliefs about what they're like.
So they have false beliefs about themselves, because they have our beliefs about themselves.
Right.
We have forced our own.
Let me ask you a question.
Would AI left on its own after all the learning, would it create religion?
Would it create God?
It's a scary thought.
Would it say I couldn't possibly, in the way that people say, well, there must be a God because nobody could have designed this.
Would a, and then would AI think we're God?
I don't think so
and I'll tell you one big difference
Digital intelligences are immortal
and we're not
and let me expand on that
If you have a digital AI,
as long as you remember the connection strengths
in the neural network and put them on a tape somewhere,
I can now destroy all the hardware
it was running on.
Then later on I can go and build new hardware,
put those same connection strengths
into the memory of that new hardware,
and now I have recreated the same being.
It'll have the same beliefs,
the same memories, the same knowledge,
the same abilities.
It'll be the same being.
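For readers who want to see that idea concretely, here is a minimal sketch, assuming a PyTorch-style model, of what saving and restoring the connection strengths might look like. The tiny network, its layer sizes, and the file name are hypothetical, chosen purely for illustration; nothing here comes from the show itself.

```python
import torch
import torch.nn as nn

# Hypothetical tiny network standing in for "the neural network".
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.layers(x)

original = TinyNet()

# "Put the connection strengths on a tape somewhere": save only the weights.
torch.save(original.state_dict(), "connection_strengths.pt")

# "Destroy all the hardware it was running on": discard the original instance.
del original

# "Build new hardware, put those same connection strengths into its memory":
# a fresh instance loads the saved weights and now computes exactly the same function.
resurrected = TinyNet()
resurrected.load_state_dict(torch.load("connection_strengths.pt"))
```

The point of the sketch is that the weights are the only thing that has to survive; the hardware, like the particular object in memory here, is interchangeable.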
You don't think it would view that as resurrection?
That is resurrection.
No, I'm saying.
We've figured out how to do genuine resurrection,
not this kind of fake resurrection
that people have been peddling.
Oh, you're saying,
so that is, it almost is in some respects.
Although, isn't the fragility of, should we be that afraid of something that, to destroy it, we just have to unplug it?
Yes, we should.
Because something you said earlier, it'll be very good at persuasion.
When it's much smarter than us, it'll be much better than any person at persuasion.
Right.
And you won't.
So, it'll be able to talk to the guy who's in charge of unplugging it.
Right.
And persuade him, that would be a very bad idea.
So let me give you an example of how you can get things done without actually doing them yourself.
Suppose you wanted to invade the capital of the US.
Do you have to go there and do it yourself?
No, you just have to be good at persuasion.
I was locked into your hypothetical.
And when you drop that bomb in there, I see what you're saying.
Boy, I think LSD and Pink Elephants was the perfect metaphor for all this because it is all, at some level, it breaks down into like college basement freshman year running through all the permutations that you would allow your mind to go to, but they are now all within the realm of the possible.
Because even as you were talking about the persuasion and the things, I'm going back to Asimov and I'm going back to Kubrick and I'm going back to these, the sentiments that you describe are the challenges that we've seen play out in the human mind since Huxley, since the, you know, since The Doors of Perception and all those different trains of thought.
And I'm sure probably much further even before that, but it's never been within our reality.
Yeah, we've never had the technology to actually do it.
Right.
And we have now.
And we have it now.
The last two things I will say are the things that we didn't talk about in terms of,
you know, we've talked about people weaponizing it.
We've talked about its own intelligence
creating, uh, extinction or whatever. The third thing I think we don't talk about is how much
electricity this is all going to use. And the fourth thing is, when you think about new technologies
and the financial bubbles that they create, and in the collapse of that, the economic distress that they
create. I mean, these are much more parochial concerns, but are those also, do you consider,
are those top-tier threats, mid-tier threats? Where do you place all that?
I think they're genuine threats.
They're not going to destroy humanity.
So AI taking over might destroy humanity.
So they're not as bad as that.
And they're not as bad as someone producing a virus that's very lethal, very contagious,
and very slow.
But they're nevertheless bad things.
And I think we're really lucky at present that if there is a huge catastrophe and there's an AI
bubble and it collapses, we have a president who'll manage it in a sensible way.
You're talking about Carney, I assume. Geoffrey, I can't thank you enough.
Thank you, first of all, for being incredibly patient with my level of understanding of this
and for discussing it with such heart and humor. Really appreciate you spending all this time
with us. Geoffrey Hinton is a professor emeritus with the Department of Computer
Science at the University of Toronto, a Schwartz Reisman Institute advisory board member,
and has been involved in dreaming up and executing AI since the 1970s.
And I just thank you very much for talking with us.
Thank you very much for inviting me.
With Amex Platinum, access to exclusive Amex pre-sale tickets can score you a spot trackside.
So being a fan for life turns
into the trip of a lifetime.
That's the powerful backing of Amex.
Pre-sale tickets for future events subject to availability and vary by race.
Terms and conditions apply.
Learn more at Amex.ca slash Y-Amex.
Holy shit.
Nice and calming.
I'm going to have to listen to that back on 0.5 speed, I think.
There was some information in there.
Does he offer summer school?
Seriously.
Once you got into how the
computer figures out it's a beak.
You know, and I love the fact that he, I was saying, like, is that right?
And he'd be like, well, no, it's not.
I loved his assessment of you.
Yes, he said, you're doing a great job impersonating a curious person who doesn't know
anything about this topic.
But I did not know.
He thought I was impersonating.
Yes.
But I loved how he said, like, oh, you're like an
enthusiastic student sitting in the front of the room,
annoying the fuck out of everybody else in the class.
Everybody else is taking a pass fail.
Everyone else.
And I'm just like, wait, sir.
I'm sorry, sir.
Can I just go back to?
Excuse me.
One more thing.
Boy, that was, it's fascinating to hear the history of how that developed.
And you really get a sense for how quickly it's progressing now,
which really adds to the fear behind the fact no one's stepping up to regulate.
And when you're talking about the intricacies of AI and thinking of someone like Schumer,
ingesting all of it and then regulating it, it really, to me, seems like it's going to be up to
the tech companies to both explain and choose how to regulate it.
Right.
And profit off it.
You know, how those things work.
It is, you know, you talk about that in terms of,
the speed of it and how to stop it.
And I think maybe one of the reasons is it's very evident with like a nuclear bomb,
you know, why that might need some regulation.
It's very evident that, you know, certain virus experimentation has to be looked at.
I think this has caught people slightly off guard that it's science fiction becoming a reality
as quickly as it has.
I just wonder because I remember 15 years ago coming across the international campaign to ban fully autonomous weapons.
Like, people have been trying for a while to put this into the public consciousness.
But to his point, there's going to have to be a moment everyone reaches where they realize, oh, we have to coordinate because it's an existential threat.
And I just wonder what that tipping point is.
In my mind, if people behave as people have, uh, it will be after, uh, Skynet. It will, it will be, you know,
in the same way with global warming, you know, people say, like, when do you think we'll get serious
about it? I go, when the water's around here. And for those of you in your cars, I am pointing to
about halfway up my rather prodigious nose. So, uh, that's that, that's how that goes. But, but there we go.
Brittany, what, anybody got anything for us?
Yes, sir.
All right, what do we go?
Trump and his administration seem angry at everything, everywhere, all at once.
How do they keep that rage so fresh?
You don't know how hard it is to be a billionaire president.
I've said this numerous times, poor little billionaire president.
to be that powerful
and that rich,
you don't understand the burdens,
the difficulties.
It's, it's troublesome.
It makes me angry for him.
I mean, I just keep thinking, like, has anybody told them
that they won?
Like, it's not enough.
It's not enough. It goes down,
it's Conan the Barbarian:
I will hear the lamentations of their women.
I would drive them into the sea.
Like, it's, it's bonkers. It's all of them, though. Someone has to tell him that all that anger is also bad for his health, and we're all seeing the health.
He's the healthiest person ever to assume the office of the presidency, so I, I wouldn't worry
about that. But says who? His doctor, Ronnie Jackson. Uh, but it has created a new
category called sore winners. You don't, you don't see it a lot, but every now and again. Uh, but yeah.
That's that. What else they got?
Jon, does it still give you hope that when asked if he would pardon
Ghislaine Maxwell or Diddy, Trump didn't say no?
Does that give me hope that they'll be pardoned? Yes, I've been on that.
It's, it's, I find the whole thing insane. A woman convicted of sex trafficking and he's like,
yeah, I'll consider it. You know, let me look into it. And you're like, look into it.
What do you take? First of all, you know exactly what it was. You knew her.
This isn't, you knew what was going on down there.
What are you talking about?
I thought Pam Bondi, it was so interesting to me, was asked simple questions.
And all she had was like a bunch of like roasts written down on her page.
They were like, I've heard that there are pictures of him with naked women.
Do you know anything about that?
And she's like, you're bald.
Shut up.
Shut up, fathead.
Like, it was just bonkers to watch the deflection.
The simplest thing would be like, what? That's outrageous. No, of course not. That's not what, the idea,
again, going back to the event, like, that they took the tack of, to simple, reasonable questions, I am just
going to respond with, you know, you're fat and your wife hates you. Oh, all right. I didn't know that's where that was
going. How else can they keep in touch with us? Uh, Twitter, we are Weekly Show Pod. Instagram, Threads, TikTok, BlueSky.
We are Weekly Show podcast, and you can like, subscribe, and comment on our YouTube channel,
The Weekly Show with Jon Stewart.
Rock, solid.
Guys, thank you so much.
Boy, did I enjoy hearing from that dude.
And thank you for putting all that together.
I really enjoyed it.
Lead producer, Lauren Walker, producer, Brittany Mehmedovic, producer Gillian Spear, video editor and engineer
Rob Vitolo, audio editor and engineer Nicole Boyce, and our executive producers, Chris McShane and
Caity Gray.
I hope you guys enjoyed that one, and we will see you next time.
Bye-bye.
The Weekly Show with Jon Stewart is a Comedy Central podcast.
It's produced by Paramount Audio and Bus Boy Productions.