The Diary Of A CEO with Steven Bartlett - Google DeepMind Co-founder: AI Could Release A Deadly Virus - It’s Getting More Threatening! Mustafa Suleyman
Episode Date: September 4, 2023. AI is going to change everything, but can we control AI, or will AI control us? In this new episode Steven sits down with AI pioneer and co-founder of DeepMind, Mustafa Suleyman CBE. In 2010, Mustafa co-founded the British artificial intelligence company DeepMind. Four years later DeepMind was bought by Google, and Mustafa became its Head of Applied AI. In 2019, Mustafa left DeepMind to work for Google, and in 2022 he left Google to co-found the startup Inflection AI. The goal of Inflection AI is to use AI to help humans speak with computers, and in 2023 they released 'Pi', a personal AI chatbot. In this conversation Mustafa and Steven discuss topics such as: his strict religious childhood; dropping out of Oxford University; helping to found DeepMind; the core goal of DeepMind; how intelligence can be defined as information processing and prediction; the history of previous groundbreaking technologies; how technological changes come in waves; AI's potential to advance civilisation; what the world will look like in 2050; how AI will develop alongside other technologies such as biotechnology; the race to create more powerful AI; making AI a tool that works for humans; his aim of creating personal AI; the chances of AI getting out of control; the limitations of AI; how AI could be exploited by criminals; why we cannot have a short-term mentality about AI; the responsibility of AI creators; his thoughts on the chances of containing AI; the problems of containing or slowing down AI; the need for a global containment policy on AI; and lessons from WW2 we can use in tackling AI. You can purchase Mustafa's book, 'The Coming Wave', here: https://bit.ly/3Laizip Follow Mustafa: Twitter: https://bit.ly/45uFz3T Follow me: https://beacons.ai/diaryofaceo
Transcript
Quick one. Just wanted to say a big thank you to three people very quickly. First people I want
to say thank you to is all of you that listen to the show. Never in my wildest dreams is all I can
say. Never in my wildest dreams did I think I'd start a podcast in my kitchen and that it would
expand all over the world as it has done. And we've now opened our first studio in America,
thanks to my very helpful team led by Jack on the production side of things. So thank you to Jack
and the team for building out the new American studio. And thirdly to Amazon Music, who when they heard that we were expanding to the United
States, and I'd be recording a lot more over in the States, they put a massive billboard
in Times Square for the show. So thank you so much, Amazon Music. Thank you to our team. And
thank you to all of you that listened to this show. Let's continue. Are you uncomfortable
talking about this? Yeah, I mean, it's pretty wild, right? Mustafa Suleyman, the billionaire founder of Google's AI technology.
He's played a key role in the development of AI from its first critical steps.
In 2020, I moved to work on Google's chatbot.
It was the ultimate technology.
We can use them to turbocharge our knowledge unlike anything else.
Why didn't they release it?
We were nervous. We were nervous.
Every organization is going to race
to get their hands on intelligence.
And that's going to be incredibly destructive.
This technology can be used to identify cancerous tumors
as it can to identify a target on the battlefield.
A tiny group of people who wish to cause harm
are going to have access to tools
that can instantly destabilize our world.
That's the challenge, how to stop something that can cause harm or potentially kill.
That's where we need containment.
Do you think that it is containable?
It has to be possible.
Why?
It must be possible.
Why must it be?
Because otherwise it contains us.
Yet you chose to build a company in this space.
Why did you do that? Because I want
to design an AI that's on your side. I honestly think that if we succeed, everything is a lot
cheaper. It's going to power new forms of transportation, reduce the cost of healthcare.
But what if we fail? The really painful answer to that question is that...
Do you ever get sad about it? Yeah, it's intense, everything that's going on with artificial intelligence now, and this new wave, and all these terms like AGI. And I saw another term in your book called ACI,
which is the first time I'd heard that term.
How do you feel about it emotionally?
If you had to encapsulate how you feel emotionally
about what's going on in this moment,
what words would you use?
I would say in the past, it would have been petrified.
And I think that over time,
as you really think through the consequences
and the pros and cons and the trajectory that we're on,
you adapt and you understand
that actually there is something incredibly inevitable
about this trajectory
and that we have to wrap our arms around it and guide
it and control it as a collective species as a as humanity and i think the more you realize
how much influence we collectively can have over this outcome the more empowering it is. Because on the face of it, this is really going to be the
tool that helps us tackle all the challenges that we're facing as a species, right? We need to
fix water desalination. We need to grow food 100x cheaper than we currently do. We need
renewable energy to be ubiquitous and everywhere in our lives. We need to adapt to
climate change. Everywhere you look in the next 50 years, we have to do more with less.
And there are very, very few proposals, let alone practical solutions for how we get there.
Training machines to help us as aides, scientific research partners, inventors,
creators is absolutely essential. And so the upside is phenomenal. It's enormous.
But AI isn't just a thing. It's not an inevitable whole. Its form isn't inevitable, right? Its form, the exact way that it manifests and appears in our
everyday lives and the way that it's governed and who it's owned by and how it's trained,
that is a question that is up to us collectively as a species to figure out over the next decade.
Because if we don't embrace that challenge, then it happens to us. And that's really what I have been wrestling with for 15 years of my career: how to intervene in a way that this really does benefit everybody, and that those benefits far, far outweigh the potential risks.
At what stage were you petrified? So I founded DeepMind in 2010. And, you know, over the course of the first few years,
our progress was fairly modest. But quite quickly in sort of 2013, as the deep learning revolution began to take off, I could see glimmers of very early
versions of AIs learning to do really clever things. So for example, one of our big initial
achievements was to teach an AI to play the Atari games. So remember Space Invaders and Pong,
where you bat a ball from left to right. And we trained this initial AI to purely look at the raw pixels, screen by screen,
flickering or moving in front of the AI, and then control the actions up, down, left, right,
shoot or not. And it got so good at learning to play this simple game, simply through attaching a value between the reward it was getting, the score, and the action it took, that it learned some really clever strategies to play the game really well that us games players and humans hadn't really even noticed.
At least people in the office hadn't noticed it.
Some professionals did.
And that was amazing to me, because I was like, wow, this simple system that learns through a set of stimuli plus a reward to take some actions can actually discover many strategies, clever tricks to play the game well, that hadn't even occurred to us humans, right? And that to me is both thrilling, because it presents the opportunity to invent new knowledge and advance our civilization, but of course, in the same measure, is also petrifying.
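To make the learning loop he describes concrete, here is a minimal tabular Q-learning sketch in Python. It is a toy illustration of the reward-to-action idea, not DeepMind's actual Atari system, which learned from raw pixels with a deep neural network; the actions and constants here are assumptions for illustration.

```python
import random

# The agent sees a state, picks an action, receives a score (the reward),
# and learns a value for each (state, action) pair.
ACTIONS = ["up", "down", "left", "right", "shoot"]
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

q_values = {}  # (state, action) -> estimated long-run score

def choose_action(state):
    """Mostly exploit the best-known action, occasionally explore a random one."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Nudge the value of (state, action) toward reward + discounted future value."""
    best_next = max(q_values.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_values.get((state, action), 0.0)
    q_values[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

Run over many episodes of play, those value estimates are what let an agent discover strategies no one programmed in.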
Was there a particular moment when you were at DeepMind where you had that kind of eureka moment, like a day when something happened and it caused that epiphany, I guess? Yeah, it was actually a moment even before 2013 where I remember standing in the office and watching a very early prototype of one of these image recognition, image generation models that was trained to generate new handwritten black and white digits. So imagine zero, one, two, three, four, five, six, seven, eight, nine,
all in different styles of handwriting on a tiny grid of like 300 pixels by 300 pixels in black and white. And we were trying to train the AI to generate a new version of one of those digits,
a number seven in a new handwriting. Sounds so simplistic today, given the incredible photorealistic images that are being generated, right? And I just remember it so clearly: it took sort of 10 or 15 seconds and it just resolved, the number appeared. It went from complete black to slowly gray, and then suddenly these white pixels appeared out of the black darkness and it revealed a number seven. And that sounds so simplistic in hindsight, but
it was amazing. I was like, wow, the model kind of understands the representation of a seven
well enough to generate a new example of a number seven, an image of a number seven,
you know, and you roll forward 10 years
and our predictions were correct.
In fact, it was quite predictable in hindsight,
the trajectory that we were on.
More compute plus vast amounts of data
has enabled us within a decade
to go from predicting black and white digits,
generating new versions of those images,
to now generating unbelievable, photorealistic,
not just images, but videos, novel videos,
with a simple natural language instruction or a prompt.
What has surprised you? You said you referred to that as predictable, but what has surprised you about what's happened over the last decade?
So I think what was predictable to me back then was the generation of images and of audio.
Um, because the structure of an image is locally contained. So pixels that are near one another
create straight lines and edges and corners. And then eventually they create eyebrows and noses
and eyes and faces and entire scenes. And I could just intuitively, in a very simplistic way, get my head around the fact that, okay, well, we're predicting these number sevens; you can imagine how you then can expand that out to entire images, maybe even to videos, maybe, you know, to audio too. You know, what I said a couple of seconds ago is connected in phoneme space, in the spectrogram.
But what was much more surprising to me was that those same methods for generation applied in the space of language. You know,
language seems like such a different, abstract space of ideas. When I say, like, 'the cat sat on the...', most people would probably predict 'mat', right? But it could be table, car, chair, tree. It could be mountain, cloud. I mean, there's a gazillion
possible next word predictions. And so the space is so much larger. The ideas are so much more
abstract. I just couldn't wrap my intuition around the idea that we would be able to create
the incredible large language models that you see today.
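To make the 'cat sat on the...' intuition concrete, here is a minimal sketch of what a language model does at each step: score every candidate next word and sample one. The probabilities below are invented for illustration; a real model scores tens of thousands of tokens with a neural network.

```python
import random

# Hypothetical next-word distribution for the prompt "the cat sat on the ...".
next_word_probs = {
    "mat": 0.55, "chair": 0.12, "table": 0.10, "car": 0.08,
    "tree": 0.06, "mountain": 0.05, "cloud": 0.04,
}

def sample_next_word(probs):
    """Draw one word, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))  # "mat" most of the time, but not always
```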
Your ChatGPTs.
ChatGPT, Google's Bard.
Inflection, my new company, has an AI called Pi, at pi.ai, which stands for personal intelligence. And it's as good as ChatGPT, but much more emotional and empathetic and kind.
So it's just super surprising to me that
just growing the size of these large language models, as we have done, by 10x every single year
for the last 10 years, we've been able to produce this. And that's just an amazingly large number.
If you just kind of pause for a moment to grapple with the numbers
here. In 2013, when we trained the Atari AI that I mentioned to you at DeepMind, that used two
petaflops of computation. So peta, P-E-T-A, stands for a million billion calculations. A flop is a calculation.
So two million billion, right?
Which is already an insane number of calculations.
Lost me at two.
It's totally crazy.
Yeah, just two of these units that are already really large.
And every year since then,
we've 10x the number of calculations that can be done
such that today,
the biggest language model that we train at Inflection uses 10 billion petaflops. So 10 billion, million, billion calculations. I
mean, it's just an unfathomably large number. And what we've really observed is that scaling these models by 10x every single year produces this magical experience of talking to an AI that feels like you're talking to a human that is super knowledgeable and super smart.
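As a quick sanity check on those figures, taking the numbers exactly as stated in the conversation:

```python
# ~2 petaflops of training compute for the 2013 Atari work,
# growing roughly 10x per year for a decade.
PETA = 10 ** 15                # "peta" = a million billion; a flop is one calculation
atari_2013 = 2 * PETA          # two million billion calculations
today = atari_2013 * 10 ** 10  # ten years of 10x-per-year growth
print(today / PETA)            # 2e10 petaflops -- the order of the quoted "10 billion petaflops"
```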
There's so much that's happened in public conversation around AI, and there's so many questions that I have. I've been speaking to a few people about artificial intelligence,
trying to understand it.
And I think where I am right now
is I feel quite scared.
But when I get scared, I don't get,
it's not the type of scared that makes me anxious.
It's not like an emotional scared.
It's a very logical scared.
It's my very logical brain
hasn't been able to figure out
how the inevitable outcome that I've arrived at, which is that humans become the less dominant species on this planet, how that is to be avoided in any way. The first chapter of your book, The Coming Wave, is titled, appropriately to how I feel, 'Containment Is Not Possible'. You say in that chapter,
the widespread emotional reaction I was observing is something I've come to call the pessimism
aversion trap. Correct. What is the pessimism aversion trap? Well, so all of us, me included,
feel what you just described when you first get to grips with the idea of this new coming wave. It's scary, it's petrifying, it's threatening. Is it going to take my job? Is my daughter or son going to fall in love with it? You know, what does this mean? What does it mean to be human in a world where there's these other human-like things that aren't human? How do I make sense of that? It's super scary.
And a lot of people over the last few years, I think things have changed in the last six months,
I have to say, but over the last few years, I would say the default reaction has been to
avoid the pessimism and the fear, right? To just kind of recoil from it and pretend that it's like
either not happening or that it's all going to work out to be rosy. It's going to be fine. We
don't have to worry about it. People often say, well, we've always created new jobs. We've never
permanently displaced jobs. We've only ever seen new jobs be created. Unemployment is at an all
time low, right? So there's this default optimism bias that we have
and I think it's less about a need for optimism and more about a fear of pessimism. And so that trap, particularly in elite circles, means that often we aren't having the tough conversations that we need to have in order to respond to the coming wave.
Are you scared in part about having those tough conversations because of how it might be received?
Not so much anymore.
So I've spent most of my career trying to put those tough questions on the policy table,
right? I've been raising these questions, the ethics of AI,
safety and questions of containment, for as long as I can remember with governments and civil
societies and all the rest of it. And so I've become used to talking about that. And, you know,
I think it's essential that we have the honest conversation because we can't let it happen to us.
We have to openly talk about it. I mean, this is a big question, but as you sit here now, do you think that it is containable? Because I can't see how it can be contained.
Chapter three is the containment problem,
where you give the example of how technologies
are often invented for good reasons
and for certain use cases, like the hammer,
which is used maybe to build something,
but it also can be used to kill people.
And you say in history, we haven't been able to ban a technology ever, really; it has always found a way into society, because other societies have an incentive to have it even if we don't, and then we need it, like the nuclear bomb, because if they have it and we don't, then we're at a disadvantage. So are you optimistic? Honestly, I don't think an optimism or a pessimism frame is the right one, because both are equally biased in ways that I think distract us. As I say in the book, on the face of it, it does look like containment
isn't possible. We haven't contained or permanently banned a technology of this type
in the past. There are some that we have done, right? So we banned CFCs, for example, because
they were producing a hole in the ozone layer. We've banned certain weapons, chemical and biological weapons, for
example, or blinding lasers, believe it or not, there are such things as lasers that will instantly
blind you. So we have stepped back from the frontier in some cases, but that's largely where
there's either cheaper or equally effective alternatives that are quickly adopted. In this case,
these technologies are omni-use. So the same core technology can be used to identify, you know,
cancerous tumors in chest x-rays as it can to identify a target on the battlefield for an aerial
strike. So that mixed use or omni use is going to drive the
proliferation because there's huge commercial incentives because it's going to deliver a huge
benefit and do a lot of good. And that's the challenge that we have to figure out is how to
stop something which on the face of it is so good, but at the same time can be used in really bad ways too. Do you think we will?
I do think we will. So I think that nation states remain the backbone of our civilization.
We have chosen to concentrate power in a single authority, the nation state; we pay our taxes, and we've given the nation state a monopoly over the use of violence. And now the nation state is going to have to update itself quickly to be able to contain this technology, because without that kind of, essentially, oversight,
both of those of us who are making it,
but also crucially of the open source,
then it will proliferate and it will spread.
But regulation is still a real tool and we can use it and we must.
What does the world look like in, let's say, 30 years
if that doesn't happen in your view?
Because the average person can't really grapple their head around artificial intelligence. When they think of it, they think of these large language models that you can chat to and ask about your homework. That's the average person's understanding of artificial intelligence, because that's all they've ever been exposed to of it. You have a different view because of the work you spent the last decade doing. So to try and give Dave, who's, I don't know, an Uber driver in Birmingham, who's listening to this right now, an idea of what artificial intelligence is and its potential capabilities if, you know, there's no
containment. What does the world look like in 30 years? So I think it's going to feel largely like another human. So think about the things that you can do, not in the physical world but in the digital world. 2050, I'm thinking of. I'm in 2050. 2050, we will have robots. 2050, we will definitely have robots. I mean, more than that, 2050, we will have new biological beings as well.
Because the same trajectory that we've been on with hardware and software
is also going to apply to the platform of biology.
Are you uncomfortable talking about this?
Yeah, I mean, it's pretty wild, right?
I noticed you crossed your arms.
No, I always use that as a cue for someone
when a subject matter is uncomfortable.
And it's interesting
because I know you know so much more than me about this.
And I know you've spent way more hours
thinking off into the future
about the consequences of this. I mean, you've written a book about it, so bloody hell, like, you spent 10 years at DeepMind, one of the pinnacle companies, the pioneers, in this whole space. So you know some stuff. And it's funny, because I watched an interview with Elon Musk, and he was asked a question similar to this. I know he speaks in a certain tone of voice, but he said that he's gotten to the point where he thinks he's living in suspended disbelief, where he thinks that if he spent too long thinking about it, he wouldn't understand the purpose of what he's doing right now. And he says that it's more dangerous than nuclear weapons, and that it's too late to stop it. There's one interview that's chilling. And I was filming Dragons' Den the other day, and I showed the Dragons the clip. I was like, look what Elon Musk said when he was asked about what advice he should give to his children in an inevitable world of artificial intelligence. It's the first time I've seen Elon Musk stop for like 20 seconds and not know what to say, stumble, stumble, stumble, and then conclude that he's living in suspended disbelief. Yeah, I mean, I think it's a great phrase.
That is the moment we're in.
We have to,
it's what I said to you about the pessimism aversion trap,
we have to confront the probability
of seriously dark outcomes.
And we have to spend time
really thinking about those consequences
because the competitive nature of companies
and of nation states is going to mean that every organization is going to race to get their hands
on intelligence. Intelligence is going to be a new form of capital, right? Just as there was a
grab for land, or there's a grab for oil, there's a grab for
anything that enables you to do more with less, faster, better, smarter, right? And we can clearly
see the predictable trajectory of the exponential improvements in these technologies. And so we
should expect that wherever there is power, there's now a new tool to amplify that power, accelerate that power, turbocharge it,
right? And, you know, in 2050, if you asked me to look out there, I mean, of course, it makes me grimace. That's why I was like, oh my God, it really does feel like a new species. And that has to be brought under control.
We cannot allow ourselves to be dislodged from our position
as the dominant species on this planet.
We cannot allow that.
You mentioned robots. So these are sort of adjacent technologies that are arising with artificial intelligence: robots, and you mentioned new biological species. Give me some light on what you mean by that.
Well, so far, the dream of robotics hasn't really come to fruition, right? I mean, the most we have now are sort of drones and a little bit of self-driving cars. But that is broadly on the same trajectory as these other technologies. And I think that over the next 30 years, you know, we are going to have physical tools within our everyday system that we can rely on, that will be pretty good at doing many of the physical tasks. And that's a little bit further out, because I think, you know, there's a lot of tough problems there, but it's still coming in the same way. And likewise, you know, we can now sequence a genome for a
millionth of the cost of the first genome, which took place in 2000. So 20-ish years ago,
the cost has come down by a million times. And we can now increasingly synthesize, that is,
create or manufacture new bits of DNA,
which obviously give rise to life in every possible form.
And we're starting to engineer that DNA
to either remove traits or capabilities that we don't like
or indeed to add new things that we want it to do.
We want fruit to last longer, or we want synthetic meat to have higher protein levels, etc., etc. And what's the implications of that? Well, potential implications. I think that the darkest scenario there is that people will experiment with pathogens,
engineered, you know, synthetic pathogens
that might end up accidentally or intentionally being more transmissible,
i.e. they can spread faster, or more lethal,
i.e., you know, they cause more harm or potentially kill.
Like a pandemic. Like a pandemic. And that's where we need containment, right? We have to limit access to the tools and the know-how to carry out that kind of experimentation. So one framework of thinking about this, with respect to making containment possible, is that we really are experimenting with dangerous materials. And anthrax is not something that can be bought over the internet, that can be freely experimented with. And the very best of these tools in a few years' time are going to be capable of creating, you know,
new synthetic pandemic pathogens.
And so we have to restrict access to those things.
That means restricting access to the compute.
It means restricting access to the software that runs the models,
to the cloud environments that provide APIs,
provide you access to experiment with those things.
And of course, on the biology side, it means restricting access to some of the substances.
And people aren't going to like this. People are not going to like that claim, because it means
that those who want to do good with those tools, those who want to create a startup, the small guy, the little developer that struggles
to comply with all the regulations, they're going to be pissed off, understandably, right?
But that is the age we're in. Deal with it. We have to confront that reality. That means that
we have to approach this with the precautionary principle, right? Never before in the invention of a
technology or in the creation of a regulation have we proactively said, we need to go slowly.
We need to make sure that this first does no harm, the precautionary principle. And that
is just an unprecedented moment. No other technology has done that, right? Because I think
we collectively in the industry,
those of us who are closest to the work can see a place in five years or 10 years where it could
get out of control, and we have to get on top of it now. And it's better to forego, that is, give up, some of those potential upsides or benefits until we can be more sure that it can be contained, that it can be controlled, that it always serves our collective interests. And I think about what you've just said there, about being able to create these pathogens, these diseases and viruses, etc., that could become weapons or whatever else. But with artificial intelligence and the power of that intelligence with these pathogens, you could theoretically ask one of these systems to create a very deadly virus that has certain properties.
Maybe even that mutates over time in a certain way,
so it only kills a certain amount of people,
kind of like a nuclear bomb of viruses
that you could just pop, hit an enemy with.
Now, if I hear that and I go, okay, that's powerful,
I would like one of those.
There might be an adversary out there that goes,
I would like one of those just in case America gets out of hand. And America's thinking, you know, I want one of those in case Russia gets out of hand.
And so, okay, you might take a precautionary approach
in the United States,
but that's only going to put you on the back foot
when China or Russia or one of your adversaries
accelerates forward in that path.
And this was the same with the nuclear bomb.
And, you know.
You nailed it.
I mean, that is the race condition. We refer to that as the race condition: the idea that if I don't do it, the other party is going to do it, and therefore I must do it. But the problem with that is that it creates a self-fulfilling prophecy. So the default there is that we all end up doing it. And that can't be right, because there is an opportunity for massive cooperation here. There's a shared interest between us and China and every other quote-unquote 'them' or 'they' or 'enemy' that we want to create.
We've all got a shared interest in advancing the collective health and
well-being of humans and humanity. How well have we done at promoting shared interest?
Well, in the development of technologies over the years, even at, like, a corporate level. Even, you know, the Nuclear Non-Proliferation Treaty has been reasonably successful. There are only nine nuclear states in the world today.
We've stopped many, like three countries actually gave up nuclear weapons
because we incentivized them with sanctions and threats and economic rewards.
Small groups have tried to get access to nuclear weapons and so far have largely failed.
It's expensive, though, right? And hard to, like, uranium, as a substance, to keep it stable and to buy it and to house it. I mean, I couldn't just put it in the shed. You certainly couldn't put it in a shed. You can't download uranium-235 off the internet; it's not available open source. That is totally true. So it's got different characteristics, for sure. But a kid in Russia could, you know, in his bedroom, download something onto his computer that's incredibly harmful in the artificial intelligence department, right? On the one hand, you've got the cutting-edge AI models that are built by Google and OpenAI and my company Inflection,
and they cost hundreds of millions of dollars,
and there's only a few of them.
But on the other hand, what was cutting edge a few years ago
is now open source today.
So GPT-3, which came out in the summer of 2020, is now reproduced as an open source model.
So the code and the weights of the model, the design of the model and the actual implementation
code is completely freely available on the web. And it's tiny: it's like 60, 70 times smaller than the original model, which means that it's cheaper to use and cheaper to run. And as we've said earlier, that's the natural
trajectory of technologies that become useful, they get more efficient, they get cheaper,
and they spread further. And so that's the containment challenge. That's really the essence of what I'm sort of trying to raise in my book
is to frame the challenge of the next 30 to 50 years
as around containment and around confronting proliferation.
Do you believe, because we're both going to be alive
unless some robot kills us,
but we're both going to be alive in 30 years' time.
I hope so.
Maybe the podcast will still be going, unless AI has taken my job. It's very possible. So I'm going to sit you here, and, I mean, you'll be, what, 68 years old? I'll be 60. And I'll say, at that point, when we have that conversation: do you think we would have
been successful in containment on a global level? I think we have to be. I can't even think that we're not. Why? Because I'm fundamentally a humanist, and I think that we have to make a choice to put our species first. And I think that that's what we have to be defending for the next 50 years. That's what we have to defend. Because, look, it's certainly possible
that we invent these AGIs in such a way
that they are always going to be provably subservient to humans
and take instructions from their human controller every single time.
But enough of us think that we can't be sure about that, that I don't think we should take
the gamble, basically. So that's why I think that we should focus on containment and non-proliferation, because some people, if they do have access to the technology, will want to take those risks. And they will just want to see, like, what's on the
other side of the door, you know, and they might end up opening Pandora's box. And that's a decision
that affects all of us. And that's the challenge of the networked age. You know, we live in this
globalized world. And we use these words like globalization, and you sort of forget what globalization means. This is what globalization is. This is what a networked world is: it means that someone taking one small action can suddenly spread everywhere instantly, regardless of their intentions when they took the action. It may be, you know, unintentional, like you say; maybe they weren't ever meaning to do harm.
Well, I think I asked you, when I said, you know, 30 years' time, you said that there will be, like, human-level intelligence, you'll be interacting with this new species. But for me to think the species will want to interact with me feels like wishful thinking, because what will I be to them? You know, like, I've got a French bulldog, Pablo, and I can't imagine our IQs are that far apart, like, you know, in relative terms; the IQ between me and my dog Pablo, I can't imagine that's that far apart. Even when I think about, is it, like, the orangutan, where we only have like one percent difference in DNA or something crazy? And yet they throw their poop around and I'm sat here broadcasting around the world. There's quite a difference in that one percent. You know, and then I think about this new species where, as you write in your book in chapter four,
there seems to be no upper limit to AI's potential intelligence.
Why would such an intelligence want to interact with me?
Well, it depends how you design it.
So I think that our goal, one of the challenges of containment,
is to design AIs that we want to interact with, that want to interact with us, right?
If you set an objective function for an AI, a goal for an AI, by its design, which, you know, inherently disregards or disrespects you as a human and your goals, then it's going to wander off and do a lot of strange things.
What if it has kids, and the kids, you know what I mean? What if it replicates in a way where, because I've heard this conversation around, like, it depends how we design it. But, you know, I think about it: it's kind of like, if I have a kid and the kid grows up to be a thousand times more intelligent than me, to think that I could have any influence on it when it's a thinking, sentient, developing species again feels like I'm overestimating my version of intelligence, importance and significance in the face of something that is incomprehensibly, like, even a hundred times more intelligent than me, and the speed of its computation is a thousand times what the meat in my skull can do. Yeah. Like, how do I know it's going to
respect me or care about me or understand, you know, that I may... You know, I think that comes back down to the containment challenge. I think that if we can't be confident that it's going to respect you and understand you and work for you and us as a species overall, then that's where we have to adopt the precautionary principle. I don't think we should be taking those kinds of risks in experimentation and design. And now, I'm not saying it's possible to design an AI that doesn't have those self-improvement capabilities in the limit, in like 30 or 50 years.
I think, you know, that's kind of what I was saying is like, it seems likely that if you have one like that, it's going to take advantage of infinite amounts of data and infinite amounts of computation.
And it's going to kind of outstrip our ability to act. And so I think we have to step back from that precipice. That's what the containment problem is: it's actually saying no. Sometimes it's saying no. And that's a different sort of muscle that we've never really exercised as a civilization, and that's obviously why containment appears not to be possible, because we've never done it before. We've never done it before. And every inch of our, you know, commerce and politics and our war, all of our instincts are just like: clash, compete, clash, compete, profit, profit, grow, beat. Exactly, dominate. You know, fear them, be paranoid. Like now, all this nonsense about, like, China being this new evil. How does that slip into our culture? How have we suddenly all shifted from thinking it's the Muslim terrorists about to blow us all up, to now it's the Chinese who are about to, you know, blow up Kansas? It's just like, what are we talking about?
Like, we really have to pare back the paranoia and the fear and the othering.
Because those are the incentive dynamics that are going to drive us to,
you know, cause self-harm to humanity.
Thinking the worst of each other.
There's a couple of key moments in my understanding of artificial intelligence that have been kind of paradigm shifts for me. Because I think, like many people, I thought of artificial intelligence as, you know, like a child I was raising, and I would program it, I'd code it to do certain things. So I'd code it to play chess, and I would tell it the moves that are conducive to being successful in chess. And then I remember watching that AlphaGo documentary, right, which I think was DeepMind. That was us, yeah. You guys. So you programmed this artificial intelligence to play the game Go, which is kind of like, just think of it kind of like chess or, you know, whatever. And it eventually just beats the best player in the world of all time. And the way it learned how to beat the best player in the world of all time, the world champion, who was, by the way, depressed when he got beat, was just by playing itself, right? And then there's this moment, I think in, is it game four or something, right, it does this move that no one could have predicted, right, a move that seemingly makes absolutely no sense, right? In those moments, where no one trained it to do that, and it did something unexpected, beyond where humans are, trying to figure it out in hindsight, this is where I go: how do you train it if it's doing things we didn't anticipate, right? Like, how do you control it when it's doing things that humans couldn't anticipate it doing? Where we're looking at that move, it's called, like, move 37 or something.
Correct, yeah.
Is it move 37?
It is, yeah.
Look at my intelligence.
Nice work.
I'm going to survive a bit longer than I thought.
It's like move 37.
You've got at least another decade in you.
Move 37 does this crazy thing
and you see everybody like lean in and go,
why has it done that?
And it turns out to be brilliance
that humans couldn't have forecasted.
The commentator actually thought it was a mistake. Yeah, he was a pro, and he was like, this is definitely a mistake, you know, AlphaGo's lost the game. But it was so far ahead of us that it knew something we didn't. Right, right. That's when I lost hope in this whole idea of, like, oh, train it to do what we want, like a dog: like, sit, paw, roll over. Right. Well, the real challenge is that
we actually want it to do those things.
Like when it discovers a new strategy
or it invents a new idea
or it helps us find like, you know,
a cure for some disease, right?
That's why we're building it, right?
Because we're reaching the limits
of what we as, you know, humans can invent and solve, right? Especially with what we're facing, you know, in terms of population growth over the next 30 years and how climate change is going to affect that, and so on. Like, we really want these tools to turbocharge us, right? And yet it's that creativity and that invention which obviously makes us also feel, well, maybe it really is going to do things that we don't like. For sure, right. So interesting.
How do you contend with all of this? How do you contend with the clear upside, and then, you must, like Elon, be completely aware of the horrifying existential risk at the same time? And you're building a big company in this space, which I think is valued at four billion now, Inflection AI, which has got its own model called Pi. So you're building in this space. You understand the incentives at both a nation-state level and a corporate level: that we're going to keep pressing forward. Even if the US stops, there's going to be some other country that sees that as a huge advantage, their economy will swell because they did. If this company stops, then this one's going to get a huge advantage, and their shareholders, you know, everyone's investing in AI, full steam ahead. But you feel, you can see, this huge existential risk. Is that the path towards suspended disbelief? I mean, just to kind of, like, just know, it's like, I feel like I know that's gonna happen, no one's been able to tell me otherwise, but just don't think too much about it and you'll be okay.
I think you can't give up, right?
I think that in some ways your realization, exactly what you've just described, like weighing up two conflicting and horrible truths about what is likely to happen.
Those contradictions, that is a kind of honesty and a wisdom, I think, that we need all collectively to realize.
Because the only path through this is to be straight up and embrace, you know, the risks, and embrace the default trajectory of all these competing incentives driving forward to kind of make this feel inevitable. And if you put the blinkers on and you kind of just
ignore it, or if you just be super rosy and it's all going to be all right. And if you say that
we've always figured it out anyway, then we're not going to get the energy and the dynamism and
engagement from everybody to try to figure this out. And that's what gives me like reason to be
hopeful. Because I think that we make progress by getting everybody paying attention to this.
It isn't going to be about those who are currently the AI scientists or those who are the technologists,
you know, like me or the venture capitalists or just the politicians, like all of those people,
no one's got answers. So that's what we have to confront. There are no obvious answers to this
profound question. And I've basically written the book to say, prove that I'm wrong. You know,
containment must be possible. It must be possible. It has to be possible. It has to be. You want it to be? I desperately want it to be, yeah. Why must it be? Because otherwise, I think you're in the camp of believing that this is the inevitable evolution of humans, the transhuman kind of view.
You know, some people would argue like,
what is, okay, let's stretch the timelines out.
Okay.
So let's not talk about 30 years.
Let's talk about 200 years.
Like, what is this going to look like in 2200?
You tell me, you're smarter than me.
I mean, it's mind blowing.
It's mind blowing.
What is the answer?
We'll have quantum computers by then.
What's a quantum computer?
A quantum computer is a completely different type of computing architecture which, in simple terms, basically allows those calculations that I described at the beginning, billions and billions of flops, to be done in a single computation. So everything that you see in the
digital world today relies on computers processing information. And the speed of
that processing is a friction. It kind of slows things down, right? You remember back in the day,
old school modems, 56k modem, the dial up sound and the image pixel loading, like pixel by pixel.
That was because the computers were slow. And we're getting to a point now where the
computers are getting faster and faster and faster and quantum computing is like a whole new leap like
way, way, way beyond where we currently are. And so, by analogy, how would I understand that? So, like, I've got my dial-up modem over here and then quantum computing over here, right? What's the, how do I, what's the difference? Well, I don't know, it's really difficult to explain. Is it like a billion times faster? Oh, it's like billions of billions of times faster. It's much more than that.
I mean, one way of thinking about it is like a floppy disk, which I guess most people remember 1.4 megabytes,
a physical thing back in the day.
In 1960 or so, that was basically an entire pallet's worth of computer that was moved around by a forklift truck, right?
Which is insane. Today, you know, you have billions and billions of times that floppy disk in your smartphone, in your pocket. Tomorrow, you're going to have billions and billions of
smartphones in minuscule wearable devices. There'll be cheap fridge magnets that, you know,
are constantly on everywhere, sensing all the time, monitoring, processing, analyzing, improving, optimizing, and they'll be super cheap.
So it's super unclear what you do with all of that knowledge and information.
I mean, ultimately, knowledge creates value.
When you know the relationship between things, you can improve them, you know, make it more efficient.
And so more data is what has enabled us to build all the value of, you know, online in the last 25 years.
And so what does that look like in 150 years?
I can't really even imagine, to be honest with you. It's very hard to say.
I don't think everybody is going to be working.
Why would we?
We wouldn't be working in that kind of environment.
I mean, the other trajectory to add to this
is the cost of energy production.
You know, AI, if it really helps us solve battery storage,
which is the missing piece, I think, to really tackle climate change,
then we will be able to source, basically source and store infinite energy from the sun.
And I think in 20 or so years time, 20, 30 years time,
that is going to be a cheap and widely available, if not completely freely available resource.
And if you think about it, everything in life has the cost of energy built into its production
value. And so if you strip that out, everything is likely to get a lot cheaper. We'll be able to
desalinate water. We'll be able to grow crops much, much cheaper. We'll be able to grow much higher quality food, right? It's going to power new forms of transportation. It's going to reduce
the cost of drug production and healthcare, right? So all of those gains, obviously there'll be a huge commercial incentive to drive the production of those gains, but the cost of producing them is going to go through the floor. I think that's one key thing that a lot of people don't realize, that is a reason to be
hugely hopeful and optimistic about the future. Everything is going to get radically cheaper
in 30 to 50 years. So, 200 years' time, we have no idea what the world looks like. This goes back to the point about being, is it, did you say, transhumanist, right? What does that mean, transhumanism? I mean, it's a group of people who basically believe that humans and our soul and our being
will one day transcend or move beyond our biological substrate.
So our physical body, our brain, our biology
is just an enabler for your intelligence
and who you are as a person. And there's a group of kind of
crackpots, basically, I think, who think that we're going to be able to upload ourselves to a silicon substrate, right, a computer that can hold the essence of what it means to be Steven. So you, in 2200, could well still be you, by their reasoning, but you'll live on a server somewhere.
Why are they wrong?
I think about all these adjacent technologies, like biological advancements. Did you call it, like, biosynthesis or something? Yeah, synthetic biology. Synthetic biology. I think about the nanotechnology development, right, about quantum computing, the progress in artificial intelligence, everything becoming cheaper. And I
think, why are they wrong? It's hard to say precisely, but broadly speaking, I haven't seen any evidence yet that we're able to extract the essence of a being from a brain, right?
It's that kind of dualism that, you know, there is a mind and a body and a spirit.
I don't see much evidence for that, even in neuroscience; actually, it's much more one and the same. So I don't think, you know, you're going to be able
to emulate the entire brain. So their thesis is that, well, some of them cryogenically store their brain after death. So they wear these, like, you know how you have, like, an organ donor tag or whatever? So they have a 'cryogenically freeze me when I die' tag. And so there's, like, special ambulance services that will come pick you up, because obviously you need to do it really quickly: the moment you die, you need to get put into a cryogenic freezer to preserve your brain forever. I personally think this is nuts, but, you know, their belief is that you'll then be able to reboot that biological brain and then transfer you over. It
doesn't seem plausible to me. When you said, at the start of this little topic here, that it must be possible to contain it, 'so it must be possible': the reason why I struggle with that is because in chapter seven you say a line in your book, that AI is more autonomous than any other technology in history. 'For centuries, the idea that technology is somehow running out of control, a self-directed and self-propelling force beyond the realms of human agency, remained a fiction. Not any more.' And this idea of autonomous technology that is acting uninstructed and is intelligent, and then you say we must be able to contain it; it's kind of like a massive dog, like a big Rottweiler, yeah, that is, you know, a thousand times bigger than me, and me looking up at it and going, I'm gonna take you for a walk. Yeah, yeah. And then it's just looking down at me and just stepping over me, or stepping on me. Well, that's actually a good example, because
we have actually contained rottweilers before. We've contained gorillas and, you know, tigers and crocodiles and pandemic pathogens and nuclear weapons. And so, you know, it's easy to be, you know, a hater on what we've achieved. But this is the most peaceful moment in the history of our species. This is a moment when our biggest problem is that people eat too much. Think about that. We've spent our entire evolutionary period running around looking
for food and trying to stop, you know, our enemies throwing rocks at us. And we've had this incredible
period of 500 years where, you know, each year, maybe each century, let's say, there's been a few ups and downs, but things have broadly got better. And we're on a trajectory for, you know, lifespans to increase and quality of life to increase and health and well-being to improve. And I think that's because in many ways we have
succeeded in containing forces that appear to be more powerful than ourselves. It just requires
unbelievable creativity and adaptation. It requires compromise and it requires a new
tone, right? A much more humble tone to governance and politics and how we run
our world. Not this kind of like hyper-aggressive adversarial paranoia tone that we talked about
previously, but one that is like much more wise than that, much more accepting that we are
unleashing this force that does have that potential to be the Rottweiler that you described, but that we must contain that as our number one priority. That has to
be the thing that we focus on, because otherwise it contains us. I've been thinking a lot recently
about cybersecurity as well, just broadly on an individual level. In a world where there are
these kinds of tools, which seem to be quite close, large language models, it brings up this whole new question about cybersecurity and cyber safety. And, you know, in a world where there's this ability to generate audio and language and videos that seem to be real, what can we trust? And, you know, I was watching a video of a young girl
whose grandmother was called up by a voice
that was made to sound like her son
saying he'd been in a car accident and asking for money
and her nearly sending the money.
Or this whole, you know,
because this really brings into focus that we,
our lives are built on trust,
trusting the things we see, hear and watch. And now, where it feels like a moment where we're no longer going to be able to trust what we see on the internet, on the phone, what advice do you have for people who are worried about this? So, skepticism, I think, is healthy and necessary. And I think that we're going to need it
even more than we ever did, right? And so if you think about how we've adapted to
the first wave of this, which was spammy email scams; everybody got them. And over time, people learned to identify them, and be skeptical of them, and reject them. Likewise, you know, I'm sure many of us get, like, text messages; I certainly get loads of text messages trying to phish me and ask me to meet up or do this, that and the other. And we've adapted. Right now, I think we should all know and expect that criminals will use these tools to manipulate us, just as you've described. I mean, you know, the voice is going to be human-like. The deepfake is going to be super convincing. And there are actually ways around those things. So, for example, the reason why the
banks invented OTP, one-time passwords, where they send you a text message with a special code, is precisely for this reason: so that you have 2FA, a two-factor authentication. Increasingly, we will have three- or four-factor authentication, where you have to triangulate between multiple separate independent sources, and it won't just be, like, call your bank manager and release the funds, right?
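For the curious, the one-time passwords he mentions typically work roughly like this: a shared secret plus the current time window is hashed into a short code, so a stolen code expires within seconds. A minimal sketch in the style of RFC 4226/6238 (real deployments add secret provisioning, clock-drift windows and rate limiting):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226 style)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: pick 4 bytes out of the HMAC
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret: bytes, step_seconds: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step_seconds)

print(totp(b"example-shared-secret"))  # changes every 30 seconds
```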
So this is where we need the creativity and energy and attention of everybody, because the kind of defensive measures have to evolve as quickly as the potential offensive measures, the attacks that are coming. I heard you say this: that you think, for many of these problems, we're going to need to develop AIs to defend us from the AIs. Right, we kind of already have that, right? So we have automated ways of
detecting spam online these days. You know, most of the time there are machine learning systems which are trying to identify when your credit card is used in a fraudulent way. That's not a human sitting there looking at patterns of spending traffic in real time. That's an AI that is flagging that something looks off.
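A minimal sketch of that kind of flagging, using an off-the-shelf anomaly detector on made-up spending data; real bank systems are far more elaborate, but the shape is similar: learn what normal looks like, then flag what does not fit.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up transaction features for one customer: [amount, hour of day].
history = np.array([
    [12.5, 9], [30.0, 13], [8.2, 19], [45.0, 12],
    [22.0, 18], [15.7, 10], [60.0, 20], [27.3, 14],
])

# Learn the customer's normal spending pattern.
detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A large 3am transaction doesn't fit the pattern: predict() returns -1 for anomalies.
print(detector.predict([[900.0, 3]]))
```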
Likewise, with data centers or security cameras: a lot of those security cameras these days have tracking algorithms that look for, you know, surprising sounds, or, like, if a glass window is smashed, that will be detected by an AI, often one that is listening on the security camera. So, you know, that's kind of what I mean by that: increasingly, those AIs will get more capable, and we'll want to use them for defensive purposes. And that's exactly what it looks like to have good, healthy, well-functioning, controlled AIs that serve us.
I went on one of these large language models, and I said to the large language model, give me an example where an artificial intelligence takes over the world or whatever and results in the destruction of humanity, and then tell me what we'd need to do to prevent it. And it gave me this wonderful example of this AI called Cynthia that threatens to destroy the world, and it said the way to defend against that would be a different AI, which had a different name, and it said that this one would be acting in human interests, and we'd basically be fighting one AI with another AI. Of course, of course. At that level, if Cynthia started to wreak havoc on the world and take control of the nuclear weapons and infrastructure and all that, we would need an equally intelligent weapon to fight it.
Although one of the interesting things that we've found over the last few decades is that it has so far tended to be the AI plus the human that is still dominating.
That's the case in chess, in Go, in other games.
In Go, it's still...
Yeah, so there was a paper that came out a few months ago,
two months ago, that showed that a human was actually able
to beat the cutting-edge Go program,
even one that was better than AlphaGo,
with a new strategy that they had discovered.
So obviously it's not just a sort of game over environment
where the AI just arrives and it gets better.
Humans also adapt.
They get super smart.
They, like I say, get more cynical, get more skeptical,
ask good questions, invent their own
things, use their own AIs to adapt. And that's the evolutionary nature of what it means to
have a technology, right? I mean, everything is a technology. Your pair of glasses made you smarter, in a way. Before there were glasses, people with bad eyesight weren't able to read; those who adopted that technology were able to read for longer in their lives, or under low-light conditions, and they were able to consume more information and got smarter. And so that is the trajectory of technology. It's this iterative interplay between human and machine that makes us better over time.
You know the potential consequences if we don't reach a point of containment,
yet you chose to build a company in this space.
Yeah.
Why that? Why did you do that?
Because I believe that the best way to demonstrate how to build safe and contained AI is to actually experiment with it in practice. If we don't, then we give up that opportunity to shape outcomes to, you know, all of those other actors that we referred to, whether it's China and the US going at each other's throats, or other big companies that are purely pursuing profit at all costs. And so it doesn't solve all the problems. Of course, it's super hard. And again, it's full of contradictions.
But I honestly think it's the
right way for everybody to proceed. You know, if you're at the front... yeah. If you're afraid of Russia, of Putin: understand, right? What reduces fear is deep understanding. Spend time playing with these models; look at their weaknesses. They're not superhuman yet; they make tons of mistakes. They're crappy in lots of ways. They're actually not that hard to make.
The more you've experimented, has that correlated with a reduction in fear? Cheeky question.
Yes and no. You're totally right: yes, it has, in the sense that... you know, the problem is the more you learn, the more you realize.
Yeah, that's what I'm saying.
I was fine before I started talking about AI.
Now the more I've talked about it.
It's true.
It's true.
It's sort of pulling on a thread.
This is a crazy spiral.
Yeah.
I mean, like, I think in the short term it's made me way less afraid, because I don't see that kind of existential harm that we've been talking about in the next decade or two. But longer term, that's where I struggle to wrap my head around how things play out in 30 years.
Some people say government regulation will sort it out. You discuss this in chapter 13 of your book, which is titled Containment Must Be Possible. I love how you didn't say 'is'.
Yeah. Containment must be possible.
What do you say to people that say government regulation will sort it out? Rishi Sunak did some announcement, and he's got a COBRA committee coming together. They'll handle it.
That's right. And the EU have a huge piece of regulation called the EU AI Act. President Joe Biden has got his own set of proposals. We've been working with both, you know, Rishi Sunak and Biden, trying to contribute and shape it in the best way that we can. Look, it isn't going to happen without regulation. So regulation is essential; it's critical. Again, going back to the precautionary principle. But at the same time, regulation isn't enough.
You know, I often hear people say, well, we'll just regulate it. We'll just stop.
We'll just stop. We'll just stop. We'll slow down. And the problem with that is that it kind of
ignores the fact that the people who are putting together the regulation don't really understand enough about
the detail today. You know, in their defense, they're rapidly trying to wrap their heads around it, especially in the last six months, and that's a great relief to me, because I feel the burden is now increasingly shared. And, you know, just from a personal perspective, I feel like I've been saying this for about a decade, and just in the last six months everyone's coming at me and saying, you know, what's going on? I'm like, great, this is the conversation we need to be having, because everybody can start to see the glimmers of the future, like what will happen if a ChatGPT-like product or a Pi-like product really does improve over the next 10 years. And so when I say, you know, regulation is not enough, what I mean is, it needs movements, it needs culture, it needs people who are actually building and making, you know, in modern, creative, critical ways, not just giving it up to companies or small groups of people, right? We need lots of different people experimenting with strategies for containment.
Isn't it predicted that this industry is a $15 trillion
industry or something like that?
Yeah, I've heard that. It is a lot.
So if I'm Rishi (Rishi's the Prime Minister of the UK), and I know that I'm going to be chucked out of office in two years unless this goes well, I don't want to do anything to slow down that $15 trillion bag that I could be on the receiving end of. I would definitely not want to slow that $15 trillion bag down and give it to, like, America or Canada or some other country. I'd want that $15 trillion windfall to land on my country, right? So, other than the long-term health and success of humanity, in my four-year election window I've got to do everything I can to boost these numbers, right, and get us looking good. So I could give you lip service, but listen, I'm not going to be here unless these numbers look good.
Right. Exactly. That's another one of the problems. Short-termism is everywhere.
Who is responsible for thinking about the 20-year future?
Who is it?
I mean, that's a deep question, right? The world is happening to us on a decade-by-decade timescale. It's also happening hour by hour. So change is just ripping through us.
And this arbitrary window of governance, like a four-year election cycle, where actually it's not even four years, because by the time you've got in, you do some stuff for six months, and then by month 12 or 18 you're starting to think about the next cycle. The short-termism is killing us, right? And we don't have an institutional body whose responsibility is stability. You could think of it as, like, a global technology stability function. What is the global strategy for containment, that has the ability to introduce friction when necessary, to implement the precautionary principle, and to basically keep the peace? That, I think, is the missing governance piece, which we have to invent in the next 20 years.
And it's insane, because I'm basically describing the UN Security Council,
plus the World Trade Organization, all these huge, you know, global institutions,
which formed after, you know, the horrors of the Second World War, have actually been incredible. They've
created interdependence and alignment and stability, right? Obviously, there's been a lot
of bumps along the way in the last 70 years, but broadly speaking, it's an unprecedented period
of peace. And when there's peace, we can create prosperity. And that's actually what we're lacking
at the moment is that we don't have an
international mechanism for coordinating among competing nations, competing corporations
to drive the peace. In fact, we're actually going kind of in the opposite direction. We're resorting
to the old school language of a clash of civilizations with like, China is the new enemy,
they're going to come to dominate us, we have to dominate them. It's a battle between two poles: China's taken over Africa, China's taken over the Middle East, we have to counter them. I mean, that can only lead to conflict. It just assumes that conflict is inevitable. And so when I say regulation is not
enough, no amount of good regulation in the UK or in Europe or in the US
is going to deal with that clash of civilizations language,
which we seem to have become addicted to.
If we need that global collaboration to be successful here,
are you optimistic now that we'll get it?
Because the same incentives are at play with climate change and AI.
You know, why would I want to reduce my carbon emissions when it's making me loads of money? Or why, you
know, why would I want to reduce my AI development when it's going to make us 15 trillion?
Yeah. So the really painful answer to that question is that we've only really ever driven extreme compromise and consensus in two scenarios. One: off the back of unimaginable catastrophe and suffering, you know, Hiroshima, Nagasaki and the Holocaust and World War II, which drove 10 years of consensus and new political structures, right? And then the second is...
We did fire the bullet, though, didn't we? We fired a couple of those nuclear bombs.
Exactly. And that's why I'm saying the brutal truth of that is that it takes a catastrophe to trigger the need for alignment, right? So that's one. The second is where there is an obvious mutually assured destruction dynamic, where both parties are afraid that this would trigger nuclear meltdown, right? And that means suicide.
And when there were few parties.
Exactly.
When there were just nine people.
Exactly.
You could get all nine.
But when we're talking about AI technology, there's going to be more than nine people, right, that have access to the full power of that technology for nefarious reasons.
I don't think it has to be like that.
I think that's the challenge of containment,
is to reduce the number of actors that have access to the existential threat technologies to an absolute minimum,
and then use the existing military and economic incentives which have driven world order and peace so far to prevent the proliferation of access to these superintelligences or these AGIs.
In your book you set out ten areas of focus for containment. You're the first person I've met that's really laid out a blueprint for the things that need to be done, cohesively, to try and reach this point of containment. So I'm super excited to talk to you about these. The first one is about safety. And you mentioned there... that's kind of what we talked about a little bit, about there being AIs that are currently being developed to help contain other AIs. Two is audits, which is, from what I understand, being able to audit what's being built in these open-source models. Three, choke points. What's that?
Yeah, so choke points refers to points in the supply chain where you can throttle who has access to what.
So on the internet today, everyone thinks of the internet as an idea,
this kind of abstract cloud thing that hovers around above our heads.
But really the internet is a bunch of cables.
Those cables are physical things that transmit information under the sea.
And those points, the endpoints, can be stopped.
And you can monitor traffic.
You can control basically what traffic moves back and forth.
And then the second choke point is access to chips. So the GPUs, graphics processing units,
which are used to train these super large clusters.
I mean, we now have the second largest supercomputer
in the world today.
At least just for this next six months, we will.
Other people will catch up soon,
but we're ahead of the curve.
We're very lucky.
Cost a billion dollars.
And those chips are really the raw commodity that we use to build these large language models.
And access to those chips is something that governments can, should, and are, you know,
restricting. That's a choke point. You spent a billion dollars on a computer.
We did, yeah. A bit more than that, actually. About 1.3.
In a couple of years' time, that'll be the price of an iPhone.
That's the problem. Everyone's going to have it.
Number six is quite curious. You say that there's a need for governments to put increased taxation on AI companies to be able to fund the massive changes in society, such as paying for reskilling and education.
Yeah.
If you put massive tax on it over here, I'm going to go over there. If I'm an AI company and you're taxing me heavily over here, I'm going to Dubai.
Yep.
Or Portugal.
Yep.
So if it's that much of a competitive disadvantage, I will not build my company where the taxation is high.
Right, right.
So the way to think about this is,
what are the strategies for containment?
If we are agreed that long-term we want to contain,
that is close down, slow down, control,
both the proliferation of these technologies
and the way the really big AIs are used,
then the way to do that is to tax things.
Taxing things slows them down.
And that's what you're looking for,
provided you can coordinate internationally.
So you're totally right that some people will move to Singapore
or to Abu Dhabi or Dubai or
whatever. The reality is that, at least for the next sort of period, I would say 10 years or so, the concentrations of intellectual horsepower will remain in the big megacities, right? You know, I moved from London in 2020 to go to Silicon Valley, and I started my new company
in Silicon Valley, because the concentration of talent there is overwhelming. All the very best
people are there in AI and software engineering. So I think it's quite likely that that's going to
remain the case for the foreseeable future. But in the long term, you're totally right.
How do you... it's another coordination problem. How do we get nation states to collectively agree that we want to try and contain, that we want to slow down? Because, as we've discussed, with the proliferation of dangerous materials or on the military side, there's no use one person doing it, or one country doing it, if others race ahead. And that's the conundrum that we face.
I don't consider myself to be a pessimist in my life. I consider myself to be an optimist,
generally, I think. And, as you've said, I think we have no choice but to be optimistic. And I have faith in humanity. We've done so much, so many incredible things, and overcome so many things. And I also think I'm really logical, as in, I'm the type of person that needs evidence to change my beliefs, either way. When I look at the whole picture, having spoken to you and several others on this subject matter,
I see more reasons why we won't be able to contain than reasons why we will, especially when I dig
into those incentives. You talk about incentives at length in your book, at different points,
and it's clear that all the incentives
are pushing towards a lack of containment,
especially in the short and midterm,
which tends to happen with new technologies.
In the short and midterm, it's like a land grab.
The gold is in the stream.
We all rush to get the shovels and the sieves and stuff.
And then we realize the unintended consequences of that.
Hopefully not before it's too late.
In chapter eight, you talk about unstoppable incentives at play here.
The coming wave represents the greatest economic prize in history.
And scientists and technologists are all too human.
They crave status, success, and legacy.
And they want to be recognized as the first and the best.
They're competitive and clever with a carefully nurtured sense of their place
in the world and in history.
Right.
I look at you.
I look at people like Sam from OpenAI,
Elon.
You're all humans with the same understanding of your place in history, and status and success. You all want that, right?
Right.
There's a lot of people that maybe don't have as good a track record as you at doing the right thing, which you certainly have, that will just want the status and the success and the money. Incredibly strong incentives. I always think about incentives as being the thing that you look at...
Exactly.
...if you want to understand how people will behave. All of the incentives, on a geopolitical level, like on a global level, suggest that containment won't happen. Am I right in that assumption, that all the incentives suggest containment won't happen in the short or midterm, until there is a tragic event that makes us, forces us, towards that idea of containment?
Or if there is a threat of mutually assured destruction, right? And that's the case that I'm trying to make: let's not wait for something catastrophic to happen; it's self-evident that we all have to work towards containment, right? I mean, you would have thought
that the potential threat, the potential idea that COVID-19 was a side effect, let's call it, of a laboratory in Wuhan that was exploring gain-of-function
research, where it was deliberately trying to basically make the pathogen more transmissible.
You would have thought that warning to all of us, let's not even debate whether it was or wasn't,
but just the fact that it's conceivable that it could be, that should really, in my
opinion, have forced all of us to instantly agree that this kind of research should just
be shut down.
We should just not be doing gain-of-function research.
On what planet could we possibly persuade ourselves that we can overcome the containment
problem in biology?
Because we've proven that we can't, because it could have potentially got out.
And there's a number of other examples of where it did get out,
of other diseases like foot and mouth disease,
back in the 90s in the UK.
But that didn't change our behavior.
Right.
Well, foot and mouth disease clearly didn't cause enough harm, because it only killed a bunch of cattle, right? And the COVID-19 pandemic... we can't seem to agree, you know, that it really was from a lab and not from a bunch of bats, right? And so that's where I struggle. You know, now you catch me in a moment where I feel angry and sad and pessimistic, because to me, that's a straightforwardly obvious conclusion: you know, this is a type of research that we should be closing down. And I think we should be using these moments to give us insight and wisdom about how we handle other technology trajectories in the next few decades.
Should we? Should we?
Should. That's what I'm advocating for. Must. That's the best I can do.
I want to know will. I think the odds are low.
I can only do my best. I'm doing my best to advocate for it. I mean, you know, I'll give you an example. I think autonomy is a type of AI capability that we should not be pursuing.
Really? Like autonomous cars and stuff?
Well, autonomous cars I think are slightly different, because autonomous cars operate within a much more constrained physical domain, right? The containment strategies for autonomous cars are actually quite reassuring.
OK.
Right. They have, you know, GPS control.
You know, we know exactly all the telemetry
and how exactly all of those, you know,
components on board a car operate.
And we can observe repeatedly
that it behaves exactly as intended, right?
Whereas I think with other forms of autonomy
that people might be pursuing, like online,
you know know where you
have an an ai that is like designed to self-improve without any human oversight or a battlefield
weapon which you know like unlike a car hasn't been you know over that particular moment in the
battlefield millions of times but is actually facing a new enemy every time you know every
single time and we're just going to go
and you know allow these autonomous weapons to have you know these autonomous military robots
to have lethal force i think that's something that we should really resist i don't think we
want to have autonomous robots that have lethal force you're a super smart guy and i i struggle
to believe... because you demonstrate such a clear understanding of the incentives in your book, I struggle to believe that you don't think the incentives will win out, especially in the short and near term. And then the problem is, in the short and near term, as is the case with most of these waves, we wake up in 10 years' time going, how the hell did we get here, right? And, as you say, with this precautionary approach, we should have rung the bell earlier, we should have sounded the alarm earlier, but we waltzed in with optimism, with that kind of aversion to confronting the realities of it, and then we woke up in 30 years and we're on a leash, right? There's a big Rottweiler, and we've lost control. I would love to know... I don't believe someone as smart as you can believe that containment is possible. And that's me just being completely honest. I'm not saying you're lying to me, but I just can't see how someone as smart as you, and as in the know as you, can believe that containment is going to happen.
Well, I didn't say it is possible. I said it must be, right? Which is what we keep discussing.
That's an important distinction.
You know, on the face of it, look: what I care about... I care about science, I care about facts, I care about describing the world as I see it.
And what I've set out to do in the book
is describe a set of interlocking incentives
which drive a technology production process
which produces potentially really dangerous outcomes.
And what I'm trying to do is frame those outcomes
in the context of the containment
problem and say, this is the big challenge of the 21st century. Containment is the challenge.
And if it isn't possible, then we have serious issues. And on the face of it, like I've said in the book, I mean, the first chapter is called Containment Is Not Possible, right? The last chapter is called Containment Must Be Possible. For all our sakes, it must be possible. But I agree with you that I'm not saying it is.
I'm saying this is what we have to be working on.
We have no choice.
We have no choice but to work on this problem.
This is a critical problem.
How much of your time are you focusing on this problem?
Basically all my time.
I mean, building and creating is about understanding
how these models work,
what their limitations are, how to build it safely and ethically. I mean, we have designed
the structure of the company to focus on the safety and ethics aspects. So for example,
we are a public benefit corporation, right, which is a new type of corporation, which gives us a legal obligation to balance profit making with the
consequences of our actions as a company on the rest of the world: the way that we affect the environment, the way that we affect people, the way that we affect users and people who aren't users of our products. And that's a really interesting,
I think, an important new direction. It's a new evolution in corporate structure,
because it says we have a responsibility to proactively do our best to do the right thing.
Right. And I think that if you were a tobacco company back in the day, or an oil company back in the day,
and your legal charter said that your directors are liable if they don't meet the criteria of
stewarding your work in a way that doesn't just optimize profit, which is what all companies are
incentivized to do at the moment, talking about incentives, but actually in equal measure attend to the importance of doing
good in the world. To me, that's an incremental but important innovation in how we organize society
and how we incentivize our work. So it doesn't solve everything. It's not a panacea. But that's
my effort to try and take a small step in the right direction.
Do you ever get sad about it? About what's happening?
Yeah, for sure. For sure. It's intense. It's a lot to take in. This is a very real reality.
Does that weigh on you?
Yeah, it does.
I mean, every day, every day.
I mean, I've been working on this for many years now and it's emotionally a lot to take in.
It's hard to think about the far out future
and how your actions today, our actions collectively, our weaknesses,
our failures, that, you know, that irritation that I have that we can't learn the lessons from the
pandemic, right? Like all of those moments where you feel the frustration of governments not working
properly or corporations not listening or
some of the obsessions that we have in culture, where we're debating, like, small things, you know, and you're just like, whoa, we need to focus on the big picture here.
You must feel a certain sense of responsibility as well, one that most people won't carry, because you've spent so much of your life at the very cutting edge of this technology, and you understand it better than most, you can speak to it better than most. So you have a greater chance than many at steering. That's a responsibility.
Yeah, I embrace that. I try to treat that as a privilege. I feel lucky to have the
opportunity to
try and do that. There's this wonderful thing
in my favourite
theatrical play called Hamilton where he says
history has its eyes on you.
Do you feel that?
Yeah.
I feel that.
It's a good way of putting it.
I do feel that.
You're happy, right?
Well, what is happiness?
I don't know.
What's the range of emotions that you contend with
on a frequent basis, if you're being honest?
I think it's kind of exhausting
and exhilarating in equal measure
because for me, it is beautiful
to see people interact with AIs
and get huge benefit out of it.
I mean, you know, every day now,
millions of people have a super smart tool in their pocket
that is making them wiser and healthier and happier,
providing emotional support,
answering questions of every type,
making you more intelligent.
And so on the face of it, in the short term,
that feels
incredible. It's amazing what we're all building. But in the longer term, it is exhausting to keep
making this argument and, you know, have been doing it for a long time. And in a weird way,
I feel a bit of a sense of relief in the last six months because after ChatGPT and, you know,
this wave feels like it started to arrive
and everybody gets it.
So I feel like it's a shared problem now.
And that feels nice.
And it's not just bouncing around in your head.
A little bit.
It's not just in my head and a few other people
at DeepMind and OpenAI and other places
that have been talking about it for a long time ultimately human beings may no longer be the
primary planetary drivers as we have become accustomed to being we are going to live in an
epoch where the majority of our daily interactions are not with other people but with, page 284 of your book.
The last page.
Yeah.
Think about how much of your day you spend looking at a screen.
12 hours.
Pretty much, right?
Whether it's a phone or an iPad or a desktop versus how much time you spend looking into the eyes of your friends and your loved ones.
And so to me, it's like we're already there in a way.
You know, what I meant by that was, you know, this is a world that we're kind of already in.
You know, the last three years,
people have been talking about metaverse,
metaverse, metaverse.
And the mischaracterization of the metaverse
was that it's over there.
It was this like virtual world
that we would all bop around in
and talk to each other as these little characters.
But that was totally wrong.
That was a complete misframing.
The metaverse is already here.
It's the digital space that exists in parallel time to our everyday life. It's the conversation
that you will have on Twitter or, you know, the video that you'll post on YouTube or this podcast
that will go out and connect with other people. It's that meta space
of interaction, you know, and I use meta to mean beyond this space, not just that weird
other over there space that people seem to point to. And that's really what is emerging here. It's
this parallel digital space that is going to live alongside
with and in relation to our physical world.
Your kids come to you... you got kids?
No, I don't have kids.
Your future kids, if you ever have kids. A young child walks up to you and asks that question that Elon was asked: what should I do with my future? What should I pursue, in the light of everything you know about how artificial intelligence is going to change the world, and computational power, and all of these things? It feels scary.
Do everything you can to understand and participate and shape it, because it is coming.
And if someone's listening to this and they want to do something to help this battle, for which I think you present containment as the solution, what can the individual do?
Read, listen, use the tools, try to make the tools, understand the current state of regulation, see which organizations are organizing around it, like, you know, campaign groups, activism groups. Find solidarity, connect with other people, spend time online, ask these questions, mention it at the pub, ask your parents, ask your mom how she's reacting to talking to Alexa or whatever it is that she might do. Pay attention. I think that's already enough. And
there's no need to be more prescriptive than that, because I think people are
creative and independent and it will be obvious to you what you as an individual
feel you need to contribute in this moment, provided you're paying attention.
Last question. What if we fail and what if we succeed?
What if we fail in containment
and what if we succeed in containment
of artificial intelligence?
I honestly think that if we succeed,
this is going to be the most productive
and the most meritocratic moment in the history of our species.
We are about to make intelligence widely available to hundreds of millions, if not billions of people.
And that is all going to make us smarter and much more creative and much more productive.
And I think over the next few decades, we will solve many of our biggest social challenges.
I really believe that. I really believe we're going to reduce the cost of energy production,
storage and distribution to zero marginal cost. We're going to reduce the cost of producing
healthy food and make that widely available to everybody. And I think the same trajectory applies with healthcare, with transportation, with education. I think that ends up producing radical abundance over a 30-year period.
And in a world of radical abundance, what do I do with my day?
I think that's another profound question. And believe me, that is a good problem to have.
If we can, absolutely.
But do we not need meaning and purpose?
Oh man, that is a better problem to have than what we've just been talking about for the last, like, 90 minutes. And I think that's wonderful. Isn't that amazing?
I don't know. The reason I'm unsure is because everything that seems wonderful has an unintended consequence.
I'm sure it does. We live in a world of food abundance in the West, and our biggest problem is obesity, right? So I'll take that problem, in the grand scheme of everything.
Do humans not need struggle? Do we not need that kind of meaningful, voluntary struggle?
I think we'll create new opportunities to quest. You know, I think that's an easier problem to solve, and I think it's an amazing problem.
Like, many people really don't want to work, right? They want to pursue their passion and their hobby and, you know, all the things that you talk about and so on. And absolutely, we're now, I think, going to be heading towards a world where we can liberate people from the shackles of work, unless you really want to work.
Universal basic income?
I've long been an advocate of UBI.
Very long time.
Everyone gets a check every month?
I don't think it's going to quite take that form.
I actually think it's going to be
that we basically reduce the cost of producing basic goods
so that you're not as dependent on income.
Like imagine if you did have basically free energy.
And food.
You could use that free energy to grow your own food.
You could grow it in a desert
because you would have adapted seeds and so on.
You would have desalination and so on.
That really changes the structure of cities.
It changes the structure of nations.
It means that you really can live in quite different ways
for very extended periods without contact with the kind of center. I mean, I'm actually not a huge advocate of that kind of libertarian, you know, wet dream, but I think if you think about it in theory, it's kind of a really interesting dynamic. That's what proliferation of power means. Power isn't just about access to intelligence; it's about access to these tools, which allow you to take control of your own destiny in your life and create meaning and purpose in the way that you might envision. And that's an incredibly creative, incredibly creative time. That's what success
looks like to me. And, well, in some ways, the downside... I think the failure is not achieving a world of radical abundance, in my opinion. And, more importantly, failure is a failure to contain, right?
What does that lead to?
I think it leads to a mass proliferation of power, and people who have really bad intentions will potentially use that power to cause harm to others. This is part of the challenge, right? In this networked, globalized world, a tiny group of people who wish to deliberately cause harm are going to have access to tools that can instantly, quickly have large-scale impact on many, many other people. And that's the challenge of proliferation: preventing those bad actors from getting access to the means to completely destabilize our world. That's what containment
is about.
We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're leaving the question for. The question left for you is: what is a space or place that you consider the most sacred?
Well, I think one of the most beautiful places I remember going to as a child was Windermere Lake in the Lake District. I was pretty young, on a dinghy with some family members, and I just remember it being incredibly serene and beautiful and calm. I actually haven't been back there since, but that was a pretty beautiful place.
Seems like the antithesis of the world we live in, right? Maybe I should go back there and chill out.
Maybe.
Thank you so much for writing such a great book. It's wonderful to read a book
on this subject matter that does present solutions because not many of them do.
And it presents them in a balanced way
that appreciates both sides of the argument,
isn't tempted to just play to either,
what do they call it, playing to like the crowd?
No, what do they call it, like playing to the orchestra?
I can't remember.
But it doesn't attempt to play to either side, or pander to either side, in order to score points.
It seems to be entirely nuanced, incredibly smart,
and incredibly necessary because of the stakes
that the book confronts that are at play
in the world at the moment.
And that's really important.
It's very, very, very important.
And it's important that I think everybody reads this book.
It's incredibly accessible as well.
And I said to Jack, who's the director of this podcast, before we started recording, that there are so many terms, like nanotechnology and all the stuff about biotechnologies and quantum computing, and reading through the book I suddenly understood what they meant; these had been kind of exclusive terms and technologies. And I also had never understood the relationship that all of these technologies now have with each other, and how, like, robotics merging with artificial intelligence is going to cause this whole new range of possibilities that, again, have a good side and a potential downside. It's a wonderful book, and it's perfectly timed. Wonderfully written, perfectly timed. I'm so thankful that I got to read it, and I highly recommend that anybody that's curious about this subject matter goes and gets the book. So thank you, Mustafa. Really, really appreciate your time. And hopefully it wasn't too uncomfortable for you.
Thank you. This was awesome. I loved it. It was really fun. And thanks for such an amazing, wide-ranging conversation.