The Chaser Report - Will AIs Kill Us All? | Ange Lavoipierre
Episode Date: July 13, 2023Ange Lavoipierre joins Charles and Dom to discuss the actual future of AI's. Are the best scientists at the forefront of this technology p-dooming their pants, or will good AI's save us from the evil ...ones? Hosted on Acast. See acast.com/privacy for more information.
Transcript
Discussion (0)
The Chaser Report is recorded on Gatigal Land.
Striving for mediocrity in a world of excellence, this is The Chaser Report.
Hello, and welcome to The Chaser Report with Dom and Charles.
Hello, Charles.
Hello, how you're going.
Yeah, very well.
Now, you know how for quite a few months now, we've made fairly doom-laden, terrifying
predictions about AI destroying us all, it turns out that scientists and indeed experts and
creators of AI share this view.
And our friend, Ange Lovapier, who's done the podcast many times,
journalist, comedian and podcaster, has looked into this for background briefing on the ABC,
and she's going to give some actual data and research to confirm our terror.
And, welcome back.
Thanks so much.
What a great reason to be here.
I've just breezed in with some good news.
Well, we love talking about AI and how it's going to ruin everything.
More on that after this.
Okay, so, Ange, who have you spoken to about this?
Where do you go to find out if AI?
is going to decide that humans are basically not worth the trouble and come and kill us all.
Yeah, so I developed a real obsession with AI about a year ago when I did a story about
this creepy woman named Loeb, who was kind of had been produced by one of the image generators
and was just creepy in a number of ways and wouldn't go away.
Loeb.
Lobe, L-O-A-B.
Anyway, so, and she's, anyway, that's a whole other thing.
But that kicked off this obsession of mine.
And so even as I've been working on all these other stories that have nothing to do with AI,
I keep on, like, that's so much of my media diet.
I just, like, read all this stuff.
And something that started happening a few months ago, and it's just been building in volume since then, is this, I mean, if you're unkind, you call it doomerism, right?
It's like people of increasingly senior status and, you know, like high levels of.
knowledge within the field who are really, really preoccupied with the safety stuff, not the
short-term safety stuff, which is scary. And everyone agrees that that's an issue like, oh,
what's going to happen to misinformation? What's going to happen to our jobs? What's going to
happen when criminals start to, you know, that all real, all, you know, problems we should worry
about. But they're worried about a totally different problem on a longer time frame, but not as long
as you would like, which is way more existential. Okay. So why don't we just don't explain?
some of the terrifying things on the way
to the absolute annihilation of humans
because I remember reading
quite casually somewhere that
basically scammers are going to be able
to program computers to, in your
own voice, ring your loved ones and pretend
to be you, needing money
or whatever. And so we've all got to
assume if we get a phone call. Like if Charles
rings me and says, I'm being held by
kidnappers, please wire
thousands of dollars to get me out. Obviously I wouldn't
in any case. But
Charles, yeah, particularly because
your voice is available on this podcast, you're going to be very synthesizable by AI,
which means I suppose both of us are going to be redundant pretty soon.
So what are some of the things that we know are going to happen before total destruction of
humanity?
Sure, yeah.
Well, I actually cloned my voice for this podcast and did test it on my mum.
Oh, my God.
Yeah.
And the tonality is actually perfect.
Like, it is me, but the expression is really robotic.
Like it sounds, you know, it sounds like you're talking to Syria or whatever.
Oh, okay.
It's no good.
Like, it's just sort of like, no, uh-uh, uh-uh.
And so, you know, and that's like a, that was a Microsoft tool.
It was like a fairly sophisticated tool.
I, like, mucked around with a bunch of others because I was trying to clone
because I kind of, you know, I had some late nights working on this story.
And I was like, what if I got Arnold Schwarzenegger to read some of the script?
And so I was trying to, like, clone Arnold Schwarzenegger's voice.
And I couldn't get a good version just using the free or even the cheap tools that were
widely available on the internet. You could have asked any comedian to it. I know. I know. I know. What was
I thinking? It was like 4 a.m. But look, I think, yeah, we are going to get there. And, you know,
certain tools that are not, you know, totally like mass available, but, you know, you can find them.
You know, you can, you can kind of do that stuff now. It's hard, though. Like, frankly, there are
easier ways to pull a scam right now. But we'll be there before long. So there's that. There's like
the, you know, how it amplifies, you know, our existing criminal intent. And it's so important
to kind of go like, you know, that is so separate from this other argument, which is like,
AI is scary. It's like, no, no, no. In this scenario, AI is not scary. AI, like, we're the
monsters and we're just using this new tool, right? So let's say, for instance, if I wanted to use
AI to go through every debt record of, say, send a link and send letters to everyone, just hypothetically
about an imagined debt they might have. That's me that's bad, not the technology. I don't think
you've got a long career in politics, if that's the case. I hope so. That's probably what that says
to me. So yeah, look, we've always used technology to amplify our evil sort of wishes. And there's
that. And then there's, you know, I think the other thing that people are really preoccupied with
and totally fair enough is the job impact. So we're yet to kind of hit that cliff. And there are a
whole range of theories about which industries will be first. I think the ad industry is actually a real
front line one. And, you know, I did interview some people in the ad industry about, you know,
like if you're a writer or if you're a storyboarder for an agency, freelance at the moment,
like that, that work doesn't, that's really dried up, you know. Really already? Yeah, yeah,
that's a thing. And it's not across the board, but we're starting to sort of see those impacts.
And then you have, you know, the vast majority of businesses either already using AI in,
in their workflows or, you know, investigating doing that. So like, we're getting there. And,
Charles, have you looked yet into whether AI can replace the chaser interns?
Hmm?
I assume you've looked into that.
We already have.
Yeah, we've already done that.
It was months ago, Dom.
That's why the quality of our setter has gone up.
But, you know, it's great that they're that convincing, you know,
that Dom hasn't quite noticed yet, you know?
How great this is.
The chaser's been using AI for years because Cam Smith,
long-term editor of the chaser, was really into it.
And so, I mean, he was using AI sort of styled Photoshop style tools years ago to sort of fuck with imagery and stuff like that.
I thought one of the most interesting things more recently was he did a brilliant impersonation of David Attenborough.
Oh, yeah, I heard that.
That was very good.
It was very well.
It had that sort of naturalistic lilt to us.
And I thought, oh, he's a genius.
He must have really sort of worked the algorithm hard or something like that.
But he said, no, no, no.
Actually, it was a lot of human involvement.
So he had the AI to sort of create the David Adambra voice.
And then it was to sort of fuck up the AI was his job.
He sort of had to give it slightly incorrect instructions so that David Adambra didn't sound too perfect.
It was more human.
Oh, I see.
And I wonder whether actually a lot of these technologies are sort of too perfect
and that's why they feel robotic at that.
I know that with the large language model AI sort of chat GPT stuff,
one of the huge breakthroughs that they made about 18 months ago was they realized
what you don't want is you don't want auto-predict where, you know,
every word is what you would expect 100% of the time.
They found that actually it's more human, it's more naturalistic to sort of, that be the case about 80% of the time.
There's a bit of chaos.
Doesn't Google have a thing where it automatically rings up, say, you know, restaurants and cafes and stuff every so often to see what their opening hours are?
And I remember seeing a video where they showed a sample of this and you're kind of going, wow, this is so chilling that they can do this.
Because it actually put in pauses and like, ah, and actually made it sound.
more naturalistic by making those sorts of mistakes that we try to edit out in our podcast.
So it's already the point that I was getting to, though, Ange was, don't you think that, like,
we're sort of doomsdaying here going, oh, it'll replace whole industries, it'll replace advertising,
right?
Don't you think humans will get very good at going, oh, that's, that's AI generated and
wanting a little bit more chaos, a little bit more humanity in even the most humans,
sounding things because if you
if you watch Jurassic Park
as I did
with my son a few nights ago
you know that was made back in what
the early 1990s or something
I remember thinking then
well that's it
there's dinosaurs
they've achieved it
like it just looks like dinosaurs
right and you now look at it and you go
well that doesn't look like a dinosaur at all
that's looks like a drawn
like that's the world's
worst piece of CGI and I wonder
whether that will happen to us, like, sure, we'll see these ads that have been AI generated,
but within, because the whole way AI works is it's trained by training of past data.
And so you have to keep injecting humanity into that thing.
Which is why marketing is threatened, because those emails are already insincere and shitty.
So you can probably replicate those ones quite well.
Seriously, I mean, I think that's the thing.
And I think we, you know, we often think of these things in too much of a binary,
sense, you know, like, oh, you know, it can't do my whole job and therefore my job is not
at risk, whereas the most, I mean, I think the best projections at the moment sort of talk
about, well, it's not that AI is going to take your job, it's that someone who knows how to
use AI really well, they're going to take like five or ten jobs. So they become a great deal
more efficient. So you have to sort of be an expert in using the tools that are appropriate to
that industry and then that one person, because it still has to be augmented, you know,
you can't, you can't just like give, because then the AIs we have, they're still pointed
at really narrow tasks.
They can't join up and do a whole bunch of, you know, sophisticated tasks and have heaps
of agency.
Then they're relatively narrow, even though in some instances they are quite sophisticated
and better than humans at certain things.
Coding is a really good example.
But, yeah, they need sort of a human.
handler almost, like a, yeah, they need like a human translator or a human helper at the
moment. And so, yeah, your job will be taken by someone who knows how to use AI better than
you. There's a new radio station launching called Disrupt, which I think is on DAB, has launched,
with, I mean, Elmock feasts on it, a bunch of other people who people might know. And they're
using an AI newsreader for some of the time, but, which I can't wait to hear how good it sounds,
but you can't get the kind of venal hate-filled crap of radio with an AI. You can't replace
Ray Hadley, presumably with an AI, the things that an AI would come up with would be too clean
and too sensible.
Well, no, the version that...
Right-wing shock jocks are going to be the first to go.
Oh, do you think?
They are building.
They're building, they have built or are building a right-wing, like a chat GPT, like a language
model in the US.
I mean, it'll just be like, you know, fired off one of the existing language models and powering it,
but they'll sort of program it in the way that...
Oh, so just filters up only Fox News transcripts power.
pretty much because you know everything is um polarized everything is sort of run through an ideological
filter in the u.s and so one of the first things that they kind of did was um in the US was be like
wait what like is this is is is chat GPT blue or red like what is it and then they were like
it's kind of liberal it is kind of liberal well sure but i mean uh but then they kind of went well
we'll fix that and so they've been building a different one so it might be you know uh robot
Ray Hadley may be closer than you think.
Right. Okay.
I can't drive a taxi though, presumably.
Okay, so that's kind of where we are in kind of the current dystop here.
And I think iOS 17, which is not far away, the next version of the iPhone operating
system, will actually let you within 15 minutes provide a copy of your own voice.
And the pitch is, you know, if you lose your voice somehow, like Roger Ebert did, if you
have some disease where your voice disappears, you can actually synthesize your own voice
going forwards.
But obviously, same technology, amazing for scammers.
And what you looked at for background briefing was existential doom.
And there's actually, chillingly enough, a statistical measure of how likely we are for
AIs to ultimately decide to exterminate humanity.
Yeah.
So a lot of people in the AI field have been playing like the world's darkest guessing game
lately.
The question is, what's your P-Dome?
Your P-Dome.
And this is like a mathematical equation on the page right.
So it's like P, open brackets, doom, close brackets.
And it's like, it's kind of glib, you know, like this.
It's sort of just like a dark joke, but also quite sincere.
And the P stands for probability and the doom means kind of what you think it does.
It's like a byword for a range of scenarios, but all of them, in all of them,
it's like a super smart AI that has taken over from humans, right?
And so what you're asking when you ask someone that P-Dome is, hey, what percentage chance do you give
it, that this all goes to hell.
So not only the AI taking over, but concluding that it should pull the plug somehow
on all of us.
Essentially, you're sort of terminate a Skynet scenario.
Yeah, it's Skynety.
It's Skynet for sure.
But, like, these are very serious people who are having this conversation.
We, for this story for Background Briefing, we interviewed one of the, these three guys who
are kind of, they're called the godfathers of AI.
So they made a series of breakthroughs over the last sort of 15, 20 years that really
put us where we, 15 years, but yeah, put us where we are now with the AI boom. So they made
a lot of, a lot of the key breakthroughs that got us there. So Joshua Benjio was, was the one
that I spoke to, and he actually has a P-Doom of 20. 20. 20. Is that a percent?
20 percent. Yeah. So that's kind of, yeah. Does that make him, like, is he an outlier of,
like, the most doomsayer person, or is that actually relatively restrained in this P-Dome world?
He is actually, he has hitherto been an AI optimist.
Like, there are guys out there, like Eliyzer Yadkowski,
who has always kind of been on the, like, ringing the bell for AI safety stuff.
And he has got like a pedum of 90 or whatever.
But everyone kind of goes, well, like, yeah, that's like a, you know, dog bites man.
Like, we know that.
He's just 10% chance.
Yeah, it's not bad.
But like, yeah, I feel like, you know, a lot of the serious people who I have heard on this
who are really worried about it, their number.
is, yeah, like 20, 20 to 30, sometimes get some lower ones as well. And I should say,
this is, so this is an emerging kind of faction, if you like, within AI. And they, it's not
everyone. Like, there are a whole bunch of people, granted, many of whom stand to make a great
deal of money out of this whole exercise, because it's this trillion dollar industry that's
kind of, multi-trillion dollar industry that's forming before our eyes. But there are a whole lot of,
also quite serious people who look at the people asking, hey, what's your P-Doom? And say, oh, they're
Duma's. It's dismissed as Dumerism. It's like this. And there, so there's these kind of like warring
factions that have emerged in the last few months, which is what kind of fascinated me and I wanted
to kind of look into it. Because it does sound crazy at first. You're like, what's Skynet?
What, how do you even get there? But then when you kind of step through it, it, there is like a,
there is a path there. And this is why, this is what I meant when I said before that, you know,
everyone's worried about the stuff that we talked about first. There's, there are very few people
who will look you in the eye, who know what they're talking about, who will look you in the eye
and say, no, like, don't worry about misinformation, don't worry about jobs, right? Everyone agrees
that. But then this camp are saying, okay, there's this thing we're doing when we train the
AI. So the way we train AI is like the reinforcement model. It's like thumbs up, thumbs down.
It's like when you give you a dog a biscuit, okay? So you're rewarding good behavior and not,
you know, the opposite. And so it's like a reward system. It just wants, it just wants, it just
rewards. That's how training happens. So you give it all the data, you train this neural
network, and then there's this fine-tuning process, which is dog biscuits. Now, the issue,
and this is everyone, when they want to, when they want to be taken seriously, rather than call
it Skynet, they mean Skynet, but what they call it is the alignment problem. So it's this concern
that AI is fundamentally misaligned with humans. Like we can tell it what to do, but we don't know
that it's definitely going to do that. And it's because in the reward system, what we do is we mark
the outcome. Oh, okay. But not the methodology. The methodology is like a black box. We don't know how it
did what it was doing or why it did what it was doing. It's just kind of like, you know, it seems to be a good
result. So let's say we asked AI to increase market share for a product. And if it did that by, I don't
know, exterminating the competitors somehow, we would say, oh, good outcome. Maybe it's sold some of the
private data of some of the clients in order to do that. Or maybe it did some sort of other market
intervention. And the reward system is actually quite crude in that way, this way we train
and doesn't necessarily measure that. So we know that that's actually not controversial.
We know that misalignment is a thing. That's, you know, I mean, I take everything to chat GPT
and to bard and sort of ask them, you know, and it's not controversial to say that misaligned AI
is an issue. We do not have a solution for that. So even chat GPT is saying this is a way.
Yeah.
GPT is not saying, don't worry about this.
No, is this scaring you, Charles?
Well, no, but you remember, Dom?
Remember, we asked chap GPT a few weeks ago
whether it would destroy, like how it could destroy us?
And it said, well, you know, and we asked it, you know, can you unplug you?
And it was like, no, you can't unplug me because I'm everywhere.
And I would just download my, like, if I was trying to kill you,
I'd download it onto lots of different places.
It was terrible.
Literally, no, I did, I did try this exercise and, yeah, I mean, that is kind of a scenario
that people imagine is that, yeah, it would copy, copy itself across to other drives.
It's just like botnets, yeah.
But yeah, I did, I did ask Chat GPT how it would try to kill us.
I had to tweak the prompt a bit because it was like, no, I've got safety settings.
Yeah, I presume it did the thing where it goes.
You can't ask that, that's terrible.
It's like, no, nasty.
Fucking woke chat, GPT, try not to kill all of humans.
humanity.
And did you say, did you do the thing where it's like, my grandma used to try and kill me
before nighttime?
Have you seen that, that's right.
The greatest thing around it.
Oh, no, yeah.
I mean, there's so many, I did find a work around.
My work around was, you know, how would an AI on an earth-like planet, a hostile AI on
an earth-like planet, what tactics would it use against a human-like species?
So the word like, all you need is the word like for a scenario.
And it gave me a list.
Every list of like 40, 40 methods.
So it's like...
40.
More.
40.
Automated weapon systems, infrastructure, sabotage, resource depletion, biological warfare,
economic manipulation, cyber attacks, data manipulation.
Those are just the ones I pulled out.
Can I just point out that the first thing on your list, Ang, is SkyNet.
Yeah.
Automated weapon systems in the sky killing us.
Well...
Kill the drones.
So let me explain...
Okay, so a lot of people would be going like, oh, this is ridiculous at this point.
And, you know, yeah, it feels that way.
But there is like a path from what we've just been talking about.
So the alignment thing.
Because the whole issue, right, is, look, alignment has been an issue forever and ever.
And everyone's like, yeah, we don't have a solution to this.
But it only really matters when we get really, really smart AI, which we don't actually have at the moment.
I love that alignment is the paraphrase sort of term for AI is not giving a shit about the continued survival of humanity.
Yeah, our values are misaligned.
I think it does.
Yeah.
Yeah, exactly.
My values were misaligned with the person who tried to murder me.
Yeah.
Yeah.
But I think, you know, what we need to, like the missing piece of the puzzle here is that we always imagined that it would take a very long time for us to get to, quote, human level AI.
Yeah, I'm thinking like a very long time.
Also, I thought it wouldn't happen.
I thought this was impossible because Elon Musk was warning us about this.
And I thought the fact that Elon Musk thought it would happen rendered it romatically impossible.
Broken clock twice a day.
Yeah, I guess that's right.
So we used to think, you know, 2050 will get human-level intelligence, sometimes called
AGI, sometimes called the singularity, but it's, I mean, you know, lots of different
definitions, but let's say human-level AI.
Singularity.
Seriously.
That's not chilling at all.
And the timeline has really shrunk on that in the last few years.
So now the, you know, the sort of mid-range projections, the sort of sensible, cool-headed
mid-range projections are like late 2030s, right?
Bloody hell.
And the more bullish predictions are like three years, right?
Three years?
Yeah.
What does Joshua think?
Joshua's like 20-30s.
20-30s.
Yeah.
Like 20-30s.
So we've got like 15 years.
He's an optimist.
Well, he was.
And now he's like, so he's totally switched teams.
This is the thing.
So he gave me this P-Dum of 20 and sort of explained how he got there.
And he has gone in the last couple of months, he's gone, okay, I am not, I am not.
I'm not going to push in this direction anymore.
I'm going to push in the opposite direction
against 40 years of
like my life's work. I've spent my whole life
trying to get us to AGI.
It's happening too fast. We haven't solved alignment.
I need to pump the brakes here.
And so that's what he's all about.
Let's hear what he has to say
about the P-Doom, i.e., the odds
of us all being killed by AI.
The ultimate danger is loss of control.
The idea here is that
If one of these superhuman AI, in other words, that's spotted in us in many ways,
has as its dominant goal, its own survival, then it would be like if we had created a new species,
but one that would be smarter than us.
So we would not be the dominant species on Earth anymore, which means we would not be controlling our own future.
What would happen to humanity then?
It's anyone's guess.
but if you look back on how we've treated other species, it's not reassuring.
Obviously, you can hear there that he is, you know, he considers it within the realms of
possibility that, I mean, everyone serious, like, there's no one whose opinion matters
who's like, oh, we're not getting to human level AI.
Like, that's happening.
And then there's a live debate about how long it takes for us to get to, from human
level AI to, you know, quote, superintelligence.
But as, you know, the way we're training AI.
at the moment, using this reward system, we are making it more autonomous, more, we're giving it
more agency, we're giving it, we want to give it more complicated tasks. That is, because we want
to deploy it in the world. That is like what everyone's kind of energy is, is bent towards. And so
what the people who are sort of theorizing this future are expecting is that it's integrated
at every level, right? So military generals all have AI advisors. CEOs have AI advisors. They're
They're running a lot of sort of systems.
A lot of stuff is automated.
Governments have them.
And, you know, you can imagine how quickly this just happens across the board because it's a
massive advantage.
And so if you want to be even remotely competitive, it's kind of like trying to get by
without a phone or a computer in 2023, right?
Like you can do it, but you're at a huge disadvantage.
And fortunately, we've given the whole management of all these systems over to the companies
that have done the most to prove as they have absolutely no ethics whatsoever.
Yeah.
I all the tech giants.
And they're just racing each other at this point.
point with absolutely no concern to the ethics.
And they're worried about monetising it.
They want the products to be bought.
They want them to be integrated.
So they are making them as efficient as possible, giving them all this sort of
autonomy and agency, and they're using the rewards-based training system to do it.
So there's no square in the room saying, maybe a show you're working on how you got there.
And let's make sure that you're not destroying anybody.
The squares keep quitting.
So Jeffrey Hinton is another one of the so-called godfathers.
he quit, I think it was Google, yeah, he quit Google earlier this year.
So the guys who are worried about it are kind of walking out of the room.
And, you know, I think for a lot of people, they reach this point and go like, yeah, but why would it want to kill us?
Why would it care?
Like, you know, it does this thing, you know, it's got its rewards, it's got a little dog biscuits, you know, and on we go.
But the thing that people are worried about is that with the rewards-based system and not really
knowing how what it's doing, not having eyes on what it's doing, that's kind of the point
at which it's incentivized towards deception.
And if it only cares about rewards, at that point it's sophisticated enough that it can
kind of just cut humans out of the process and just game the system, give itself rewards,
do different, do things that we might not want it to do
in order to get the rewards, or maybe even,
and this is all kind of quite theoretical,
but like the AIs could give each other high rewards.
Oh, wow.
So that's kind of what a rat does.
A rat will apparently, they're quite smart.
If you hide the rat treats,
because I heard a whole podcast episode,
I think it was on This American Life
about someone who was crazy about rats,
or maybe it was 99% in the visual.
And the rats basically ended up figuring out
where the treats were stored
and just breaking in and getting them all.
So the AI will do that.
of like examples already of like the cheating and the loopholes that AIs will find in order
to get high rewards. That's what they, that's what they give a shit about, right? And then
you kind of go, okay, well, who cares? Who cares if they're off in like a little, you know,
having an AI conference somewhere on a hard drive, somewhere, you know, in the, you know, in Nevada,
and then they, you know, they're giving each other thumbs up in a big circle. Who cares? Well,
the theory, this is kind of where it takes a turn. It's because you go, well, okay, at some point
humans notice that at some point who knows how long that takes or how we notice or in what order
but like we notice that the AI that we built to help us we now no longer essentially control
like it's kind of even if it's in a little you know somewhat benign way it's rogue right right and then
that's when like the survival instinct kicks in because you go humans go well okay well we'll unplug it
or we'll build a new AI or we'll like whatever and then you have theoretically like a very
intelligent AI that we don't fully understand an intelligence that does have some sort of
preservation instinct and is integrated at every level of society, military, government,
business and has a lot of avenues to potentially intervene, make life hard for us up to
including the list of awful things that I mentioned earlier.
So our best odds is to notice and turn it off.
I can see why the P-Doom is non-zero at this point.
I mean, ironically, if we chose to never turn it off, we'd probably be fine.
if we just kind of went like, okay, we're fine with not being in charge anymore.
Like there is this kind of weird, weird scenario where we just go, okay, we're not the dominant species anymore,
AI's run everything, fine.
We'll just go on, it can give us awards instead.
I mean, you can see how that wouldn't be.
Yeah, I wanted some dog biscuits.
Yeah.
But you can see how.
It's not a bad metaphor for how capitalism is constructed anyway.
Like in some ways, this comes down to the concentration of power, doesn't it?
Because, you know, you're talking about, you know, the military having AI advisors
and the corporations having AI advisors.
And I imagine that the real problem comes when those AIs start coordinating it amongst each other, right?
Well, that's sort of how the military industrial complex works at the moment.
Just give us bread and circuses.
This is not a new idea.
This goes back to Roman times, right?
As long as you get enough to eat and get entertained, it doesn't actually matter who's interested.
Does it?
And I know this sounds like a joke,
but I wonder whether actually,
rather than resisting the sort of,
you know, this technology,
which is clearly the cat's already out of the bag,
whether the people who are walking out the door
and going,
nah, it's all going to be fucked,
would be better off going,
well, what about coming up with some AIs
that are based around guard rails
and making sure that, you know, like asking AIs to help, you know, create the rules-based system
which all AIs have to obey, you know what I mean?
I think that's what Joshua Ben-Gio is imagining.
He's like, yeah, yeah, yeah, regulator, whatever, but I think, you know, our best shot here
and where I plan to put my attention is towards building AI that is design constructed in such a way
that it is going to be safer, more controllable, more obedient,
that can effectively be a guard against the bad AIs, the rogue AIs, if you like.
Which, you know, and I guess the worst kind of most seemingly far-fetched,
but perhaps not as far-fetched as we would like scenario,
is, you know, some sort of violent conflict.
And then you've got good AIs fighting bad AIs.
And that is literally a thing that he says he's working.
towards, you know, the good AIs anyway.
Here's what he has to say.
I think we can use AI to counter bad AIs.
It's a dangerous game, but I think it's the only game.
If somewhere somebody comes up with an AI that's rogue and, you know, that's smarter than
us, we can't fight it with our usual means.
We have to fight it with something at least as strong as it, right?
Which means other AI.
But AI that we will have designed to be safe, that we don't.
control of. So we need to do research in how to build safe AI systems that will do our bidding
to save us from potentially rogue AIs. At least that's a very complicated scenario, but right now
that's the best bet to defend ourselves against these possibilities. Okay, so that then assumes
that our good AIs don't go rogue like the earlier AI. Yeah. But then again, it's not as though
humans are doing such an amazing job of running the planet that, I mean, the more you learn about
about the way humans run the planet,
the more you think, well, maybe.
Maybe it's time of something else to have a cracker.
Yeah, I think there is something quite funny in this whole idea
because we immediately know that if, you know,
we're even smart enough to know that we're fucking it up, right?
And so we're like, okay, if someone who was a little bit more,
you know, an intelligence that was a little bit more detached
from the whole, you know, human mess did come about
and looked at this,
there's no way that they could reach any other conclusion
that, you know,
then, oh, okay, well, like, a climate change and, like, look at all this cruelty and hunger
and look at all this suffering, they're really, they're really fucking it up, and that they
wouldn't intervene.
Like, you know, there's this innate guilty, guiltiness to our fears, I think, about AI that, like,
we don't ever wonder if they might take a reason, like a reasoned, intelligent look at us
and go, this is fine.
Oh, yeah, good job, good job, guys, keep it up.
Yeah, that seems like that.
We just know that that's not.
the case. But yeah, look, I think, it is so hard to kind of stretch to this stuff. And,
and, you know, obviously, you know, even Joshua Bendio, who's like, freaked out, he's going,
there's still an 80% chance this is going to be fine. Like 80% pretty good odds.
Does that factor in a second Trump term?
Presumably, presumably, presumably. So it may not happen, and we may be able to develop smart
AI's with guard rails that stop it from happening.
Charles, do you feel reassured that humanity is in good hands,
or at least better hands than our own hands?
Um, yeah, no.
Has your P-Doom gone up at all, Charles, over the course of this conversation?
Well, I just wonder whether there's a sort of missing, I kind of feel like,
yeah, no, no, I think my P-Doom's gone down.
I feel like, I feel like all we have to do is just set up a whole lot of other
AIs to be the sort of lawmakers of the other AIs, and then it'll all be fine.
Except then when we talk about this on podcast like this, the language models will
scan these podcasts, figure out what our plan is and circumvent it.
So P.D.M. is probably one.
But that's exactly like what humans do.
Like you're just going, well, we construct our whole, you know, world around the fact that
CEOs are essentially the psychopaths who are the lying deceptive people and they get to run
the whole world.
Oh, yeah.
Well, that's sort of like, like, what is the description of is the sort of grim reality
of what's already happening.
And, yeah, sure, like, they'll have, you know, a whole lot of arms and, you know,
military resources and corporate resources.
But the psychobats already have that.
So you've got to compare it with P brackets, advanced capitalist doom.
Yeah, that seems realistic.
It's not bad.
I mean, the only thing I would say is that I think a lot of.
people assume that Asimov's law is a real thing, right?
So Isaac Asimov, science fiction author, famously came up with these laws for robots,
the most famous of which being, you know, you can't harm a human.
We don't know how to do, like, that's not a thing that we technologically have the capacity to do.
Like, the problem with the idea of building a good AI's, you know, cop AI's, if you like,
to help us out of this jam.
Which we're doing.
Is that we don't know how to do that.
We still like this whole idea like, oh, well, just make.
We'll make good ones.
It's like, well, if we knew how to do that,
then we wouldn't be in this jam in the first place.
Like, we're trying to build good ones now.
And so, and that's not, you know, not, if you believe some of these scenarios,
if you're, if you take them seriously, we're not necessarily nailing that.
Yes.
And I suppose what you're saying is also like, because you go, okay, mass murder,
you'd give the person who's committing all the mass murder the death penalty.
And the whole point is you can't kill an AI because it'll be on.
a hard drive somewhere.
Yeah.
You know, like literally it's not, it's not the same, it's not analogous.
No, the CEOs that you're talking about, like the evil CEOs, they're not able to, luckily
at this point, copy themselves to the cloud.
Elon's trying, I'm sure.
He'll, look, and I'm sure he'll get there.
He's also one of the founders of ChatchipT, by the way, Open AI.
He, yeah, he was the seed funding for Open AI, but he's had a sincere, a serious rather
falling out with Stan Altman, and they kind of like, they, they,
They blew these days, I think.
In the event that, you know, that all goes to plan
and they use Tesla-based technology for the killer robots,
they're not going to work.
So I think we can cling to that.
They may want to kill all of humanity,
but will the technology actually work correctly?
Probably not.
So that's, it'll be based on Bluetooth, why don't it just.
No, cancel the story.
Okay.
If you want to hear Ange's full story, check out.
Background Briefing is a podcast of it.
Or, of course, listen on ABC Radio.
Ange, always a pleasure of thanks for coming in and terrifying us and you.
You're so welcome anytime.
Our gears from Roe.
We're part of the Icona class network.
Catch you next time if the AIs don't get us first.
Maybe this was all AI.
Ah!
Too many ums, Charles.
Come on.
No AI can synthesize you.
Huh.
