The Highwire with Del Bigtree - RISE OF THE MACHINES
Episode Date: May 11, 2023. The debate surrounding artificial intelligence has taken a turn for the worse as the Godfather of AI, Geoffrey Hinton, and Elon Musk urgently warn that out-of-control development of AI has the potential to end humanity. Jefferey Jaxen and Del get into this terrifying topic. Become a supporter of this podcast: https://www.spreaker.com/podcast/the-highwire-with-del-bigtree--3620606/support.
Transcript
We all need to have a big conversation about where artificial intelligence is going.
You're right about that.
Because let's face it, it's been fun up until now.
You have your maps.
They give you some alternative routes when there's an accident.
You have automated text message responses, even Siri, the voice-operated virtual assistant for
your iPhone.
But we're at a demarcation line now where this safe, helpful fun is rapidly showing something
different as it moves into the future.
And the latest person to give one of these warnings is Geoffrey Hinton.
He was at the head of Google's Brain research and development.
He received something called the Turing Award.
It's like the Nobel Prize for Computing.
He retired from Google so he can speak and warn people about these things.
And obviously, he's very qualified for this.
So let's look at the headlines here.
This was just this past week.
Godfather of AI, that's what we're calling him.
Geoffrey Hinton quits Google over killer robot fears and regrets inventing tech that could destroy
us all.
And he says in there, I console myself with the normal excuse.
And where have we heard this before?
If I hadn't done it, someone else would have.
He told that to the New York Times.
So right now, he's saying as far as he can tell,
artificial intelligence is not more intelligent than people,
but it will be very soon.
And he warns of a bad actor getting control of this,
directing it, dictating where it goes in parts of society,
and leading society to places we don't want to see.
And remember, just a month ago,
there was a letter signed by real luminaries in tech and AI
from a lot of places, warning about this.
They're calling for a six-month pause in the development of artificial intelligence.
This was the headline here.
Tech leaders urge a pause in the out-of-control artificial intelligence race.
They're warning about AI systems reaching human-competitive intelligence, obviously, and beyond.
And they're saying that just in the last few months they've seen AI labs racing out of control
to develop these technologies, ever more powerful,
without really understanding whether they can control them or predict where they're going.
And, you know, with that letter, it's amazing, just as I sit here,
and we've all been watching this, that we are not watching immediate, gigantic
hearings inside our government, the Senate, Congress, whoever, saying,
wait a minute, these aren't just, you know, a couple of nobodies. The heads,
the designers, the ones that built these things are coming to us and saying, my God, I think
we made a mistake. We made a mistake. You've got to stop
us. You can't believe what we just built back here, and it could kill us all. I mean, it's really,
really shocking to have, in some ways, you know, leadership that's
just like, whatever. If we're not going to take the people that invented this technology
seriously, then who are we going to take seriously? It's imperative that guardrails be put on this.
Now, Geoffrey Hinton wasn't on that letter, because he was still
working at Google at the time.
But given, you know, the ideological overlap there, you would assume that he would be on that letter.
But someone else has emerged once again, tech entrepreneur Elon Musk.
And he's obviously warned about this for about a decade.
And he's been in the mix as well, obviously with Tesla, but also with Neuralink and the satellites in space and all of these things.
He's had his finger in a lot of this stuff.
But he's been doing the circuit recently.
He was on Bill Maher, and he had some things to say about this future of AI development.
Take a listen.
All right.
With respect to AI, I just think we should,
we should have some sort of regulatory oversight.
So, you know, for anything where there's a danger to the public,
whether it's aircraft, cars, food and drugs, and whatnot,
we've got some regulatory oversight, like a referee,
essentially, making sure that companies don't cut corners.
So I think that if one agrees that AI is a potential risk
to the public, then there should be some regulatory body
that oversees what companies are doing so they don't cut corners and potentially do something
very dangerous.
And if we don't do something, lay out a scenario for me in the next two, five, ten years,
if nothing is done, because we're very good at doing nothing, especially when it comes
in the way of profit, and this is a big profit engine now for companies, they're going to want
to just compete with each other.
I mean, there are people like Ray Kurzweil who don't think it's a problem at all.
Actually, Ray Kurzweil's prediction for artificial superintelligence is 2029.
He's not far wrong.
Right.
But he doesn't think it's a problem, whereas people like you and Bill Gates and Stephen Hawking
do think it's a problem.
Yeah, it depends.
Some people want to live forever, or for a much longer period of time, and they see
digital superintelligence as the only thing that can figure out how to get them
to live forever. I think Kurzweil is in that category.
He would prefer to have artificial general intelligence than not, because it can figure out longevity.
I mean, there's a lot to unpack there that we've talked about.
Some of it, a lot of it, driven by these people that somehow have this dream of uploading their consciousness into a computer
so that they can go on to live forever.
And then, with my sort of being brought up in church and my dad being a minister,
when you think of people that are trying to just circumvent God and nature,
what could possibly go wrong? And, you know, I just keep thinking,
lest ye shall surely die. It feels like that's what these people are warning us about.
Like, we must stop this or we could all die.
Yeah, and you have this, like, biblical overlay on top of, you know,
what we all know from the vaccine conversation.
Huge profit margins, really little oversight, if any, can't sue the manufacturer.
And then you put that on top of it. You can see the dire need.
So he's giving some common sense warnings there, if you will.
But listen to him just last month on Tucker Carlson.
And this is probably one of the most, you know, important interviews I've ever seen on artificial intelligence as far as a warning is concerned.
It's very direct.
Take a listen.
All regulations start with a perceived danger. Planes fall out of the sky, or food causes botulism.
Yes.
I don't think the average person playing with AI on his iPhone perceives any danger.
Can you just roughly explain what you think the dangers might be?
Yeah.
So the danger, really, AI is perhaps more dangerous than, say, mismanaged aircraft design or production maintenance, or bad car production, in the sense that it has the potential,
however one may regard that probability, and it is non-trivial,
it has the potential of civilizational destruction.
There's movies like Terminator, but it wouldn't quite happen like Terminator, because the intelligence would be in the data centers.
Right.
The robot's just the end effector.
But I think perhaps what you may be alluding to here is that regulations are really only put into effect after something terrible has happened.
That's correct.
If that's the case for AI and we're only put in regulations after something terrible has happened, it may be too late to actually put the regulations in place.
The AI may be in control at that point.
You think that's real?
It is conceivable that AI could take control and reach a point where you couldn't turn it off and it would be making the decisions for people.
Yeah, absolutely.
Absolutely.
No, that's definitely where things are headed.
For sure.
I mean, that's an uncomfortable interview.
I don't know about you.
I just, you know, I was just speaking at an event in Mexico, and coming back across the border now,
you step up and there's just a camera that looks at your face.
I don't have to show my passport anymore.
I don't have to show my ticket or anything.
Thank you, Mr. Bigtree, you know, maybe a couple of questions and you're on your way.
And then I just think, my God.
I mean, you know, we're talking about like tracking systems.
They've already got facial recognition.
All of this already exists.
So that means now every camera I'm walking past and there's been movies about this.
We're now in that space.
Imagine the government gets paranoid and says,
hey, let's just have AI monitor everybody.
Now every camera is tracking everywhere you go, every decision you make,
and then what are you programming the AI to do with that information?
And I remember, you know, there was a quote
by Yuval Noah Harari, who we've talked a lot about.
I'm just paraphrasing here, it just came to mind,
but he said, you know, this moment, the singularity,
this moment in which the computer apparently is going to recognize itself.
This guy, Ray Kurzweil, has been at the head of this discussion
for a long time. I think they were saying 2040.
It sounds like now Elon Musk is saying it's more like 2029, and he's really close to it.
And then have a consciousness.
Once it recognizes itself, now it's a conscious entity.
And Yuval Noah Harari made the point that, you know, I'm not sure that I agree that intelligence marks consciousness.
But he said, imagine an intelligence beyond anything that we would be able to compete with, but that has no consciousness and no conscience.
That perhaps is even more horrifying.
And that thought has stuck with me.
And this conversation has moved very quickly.
It wasn't, you know, 40 or 50 years ago, really, that people were warning about this.
In 2013, the University of Oxford did a research study, a white paper
called The Future of Employment. 2013, so 10 years ago.
And it says, according to our estimates, around 47% of total U.S. employment is in the high-risk category.
We refer to these as jobs at risk, i.e., jobs we expect could be automated relatively soon, perhaps over the next decade or two.
So we're here, Del. It's a decade. And now let's see what the headlines look like. This is just over the past week. This is Goldman Sachs predicts 300 million jobs will be lost or degraded by artificial intelligence.
Now, IBM, another headline, IBM is to stop hiring for roles that can be replaced by AI, nearly 80,000 workers to be replaced by automation.
And even writers in Hollywood.
8,000, just to be clear on that.
Oh, I'm sorry.
Yes.
Thank you.
8,000.
Writers in Hollywood facing the same thing.
We have the Hollywood writer strike right now.
Hollywood screenwriters don't want robots taking their jobs either.
So, you know, Jimmy Kimmel is crying himself to sleep tonight because he can't create his own content.
And he doesn't want AI doing it.
And his writer sure don't want to be out of work.
So that's going to be an issue there as well.
But there is one person who loves AI.
There is one person who's pushing it on the kids, so to speak.
This is Bill Gates, our old friend Bill Gates, from the pandemic fame.
Here's the headline for him.
Bill Gates says AI chatbots will teach kids to read within 18 months.
You'll be stunned by how it helps.
He's at a conference, and this is some of the things he said.
He says, quote, at first, we'll be most stunned by how it helps with reading, being a reading
research assistant and giving you feedback on writing, said Gates.
Then he says this.
If you look, if you just took the next 18 months, the AIs will come in as a teacher
and give feedback on writing, said Gates.
And then they will amp up what we're able to do in math,
which is really kind of a golden achievement for AI, solving problems through math.
The writing is easy because they have this database of vocabulary.
But the math thing is kind of the missing piece.
But he said, yeah, in 18 months it'll probably be solved at that point.
So it should be good.
Just let the kids deal with that; put it on them.
And we've seen all the studies with Facebook and increased social media use,
how it really hurts kids development, leads to depression. So how is this going to work when you just
unleash AI on children for their learning development? Good question. Totally. And it goes to, I mean,
you've got to hand it to this guy. He really knows how to make a difference in this world. And that
difference always being the children. Get to the children, and they change the world. And
let's just have all of our children belly up to AI. I think the question Elon Musk would probably
follow up with is, okay, great. It teaches us to read. It starts advancing our math.
and then it realizes to itself, man, these humans really suck at this when you think about it.
We're far better. And what is the use of teaching these people this anymore?
We might as well just handle it ourselves.
And in one of the quotes from, I believe, that Tucker Carlson interview,
Elon Musk goes on to say, well, you know, I hope they, being the AI, find us interesting.
Because if they don't, then we're just a nuisance.
And boy, is that a horrifying thought.
