What Now? with Trevor Noah - Will AI Save Humanity or End It? with Mustafa Suleyman
Episode Date: September 18, 2025
Trevor (who is also Microsoft’s “Chief Questions Officer”) and Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google’s DeepMind, do a deep dive into whether the benefits of AI to the human race outweigh its unprecedented risks.
Transcript
So I feel like people fall in one of two camps on AI.
They either think it's going to destroy all of humanity.
42% of CEOs surveyed fear artificial intelligence could destroy humanity.
This is something that put in the wrong hands could destroy humanity.
Or they think it's going to solve every single problem.
Mustafa Suleyman.
Mustafa Suleyman is an artificial intelligence pioneer.
He is the CEO of Microsoft AI.
He is very big in the artificial intelligence world.
Do we manage these technologies so that we can coexist with them safely?
Can humans and AI coexist with each other peacefully without one taking over the other?
This is What Now with Trevor Noah.
Mustafa, how are you, man?
I'm very good, man.
This is great, this is good.
It's funny that there's almost two or three different types of conversations I have with people.
There's ones where I'm hanging out, it's my friends, we're discussing just whatever, you know, shooting the shit.
Then there's some where I bring a person in and I'm trying to like get something from them or learn about their world.
And then there's the third type of interview that often stresses me the most, because I feel like I'm speaking to a person who is, who has, like, an outsized
influence on our world, and if I mess it up, I don't ask the questions that the world needs. And
I feel like you're one of those people, because even before your current job, you were considered
one of, like, if there was a Mount Rushmore of the founders of AI, with none of the
baggage of Mount Rushmore. No colonial history. No colonial history, yeah.
No colonial history. But if there was, like, a large Mount Rushmore, your face would be up there,
you know, as being part of DeepMind and the founders of DeepMind.
And then now you are helping Microsoft, like one of the biggest, you know,
tech companies in the world by, you know, market cap and just by influence shape its view on AI.
And so maybe that's where I wanted to start this conversation because it's,
it almost feels like where we meet you in the journey now, you know.
What would you say has been the biggest shift in your life and what you've been doing in
AI, going from a startup that was on the cusp of this fledgling world of AI to now being
at the head of what's going to shape all of our lives in AI?
Wow, what an opener.
I mean, seriously, no pressure, huh?
You know, the crazy thing is that I've just got incredibly lucky.
I mean, I was strange enough to start working on something that everybody thought was
impossible, was totally sci-fi, that was, you know, just dismissed by even the kind of
best academics, let alone the big tech companies, who didn't take it seriously.
I mean, 2010, you know, just to really ground that, we had just got smartphones three
years earlier.
Yeah.
The app store was just coming alive.
You couldn't even easily upload a photo from your phone to, you know, an app or the cloud, right?
And somehow, you know, my courageous, visionary co-founders, Demis Hassabis and Shane Legg, had the foresight to know that, you know, technology ultimately, digital technologies ultimately become these learning algorithms, which, when fed more data and given more compute, have a very good chance of learning the structure and nature of the universe.
And so I was just very privileged to be friends with them, be part of that mission to, you know, I was only 25 years old and, you know, kind of had the fearlessness to believe that if we could create something that truly understood us as humans, then that actually represents one of the best chances we have of improving the human condition.
And that's always been my motivation to create technologies that actually serve us and make the world a better place.
Like, I was cheesy before it was cheesy.
Pre-cheese.
You said something that sparked a thought in my mind, and I think a lot of people would love to better understand this.
We see headlines all the time saying, AI this, AI in your job, AI does the thinking, the not thinking, though.
And you said something that engineers will gloss over quite quickly.
You'll go, data and compute and then a model. So help me break that down.
Just explain that to me in the simplest terms possible.
What changed and what are you actually doing?
Because it's like we always had data, right?
We've had documents.
We've had files.
We've had information.
We always had computers, well, not always, but we had computers for decades.
What changed, and where is AI actually coming from?
So the best intuition that I have for it is that our physical world can be converted into an information world.
And information is basically this abstract idea, like it's mathematics, it doesn't exist in the physical world, but it's a representation of the physical objects.
And the algorithm sounds complicated,
but really it's just a mechanism for learning the structure of information and the relationship of one pixel to another pixel or one word to another word or one bit in the audio stream to the next bit in the audio stream.
So I know that sounds very like abstract, but the structure of reality or the structure of information is actually a highly learnable sort of function, right?
And that's what we saw in the very early part of AI, between 2010 and sort of 2016.
These models could learn to understand, or at least not understand, but maybe they could learn to
generate a new image just by reading a million images.
And that meant that it was learning that, you know, if an eye was here and another eye was there,
then most likely there would be some form of a nose here.
And although it didn't apply the words nose and eye,
it just had a statistical correlation between those three objects, such that if you then wanted to
imagine, well, where would the mouth be? It wouldn't put the mouth in the forehead. It would put the
mouth just below the nose. And that's what I mean about the structure of information. The
algorithm learns the relationship between the common objects in the training data. And it did that
well enough that it could generate new examples of the training data. First, in 2011, it was
handwritten black and white digits. Then by 2013, it was like cats in YouTube images, which was
what Google did back in 2013. Then as it got better, it could do it with audio. And then over time,
you know, roll forward another 10 years. It did it with text. And so it's just the same core
mechanism for learning the structure of information that has scaled all the way through.
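To make that concrete, here's a minimal sketch of the idea being described: an algorithm that records nothing but the statistical relationships in its training data, and can then generate new sequences that respect that structure. The toy corpus and the lookup-table "model" are invented for illustration; real systems learn these relationships with neural networks over vast datasets.

```python
import random
from collections import defaultdict

# Toy training data, invented purely for illustration.
corpus = "the cat sat on the mat the cat ate the rat "

# "Learning": record which characters tend to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

# "Generation": repeatedly predict a plausible next character.
def generate(start="t", length=40):
    out = [start]
    for _ in range(length):
        out.append(random.choice(transitions[out[-1]]))
    return "".join(out)

print(generate())  # new text mimicking the statistical structure of the corpus
```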
So it's interesting. I heard you say what it does. And I also noticed at a moment you said it
understands. And then you said, well, no, wait. And I've actually noticed quite a few engineers and
people who work in AI and tech struggle with explaining it to laymen using, you know, human language,
but then very quickly going, like, no, no, no, no, it's not human language. It's like, does it think
or does it do what we think thinking is? Yeah, I mean, this is a profound question. And basically,
it shows us the limitations of our own vocabulary, because what is thinking? It sounds like a
silly question, but it's actually a very profound question. What is understanding? If I can simulate
understanding so perfectly that I can't distinguish between what was generated by the simulation
and what was generated by the thinking or understanding human being,
then if those two outputs are equivalently impressive,
does it matter what's actually happening under the hood,
whether it's thinking or understanding,
or whether it's conscious?
And that's a very, very difficult thing to ask,
because we're kind of behaviorists in the sense that as humans,
we trust each other, we learn from each other,
and we connect to each other socially,
by observing our actions.
You know, I don't know what's happening inside your brain, behind your eyes, inside your
heart, in your soul.
But I certainly hear the voice that you give me, the words that you say, I watch the actions
that you do, and I observe those behaviors.
And the really difficult thing about the moment that we're entering with this new AI agentic
era, as they become not just pattern recognition systems, but whole agents, is that we have
to engage with their behaviors.
increasingly as though they're like sort of digital people.
And this is a threshold transformation in the history of our species,
because they're not tools, they're clearly not humans,
they're not part of nature, they're kind of a fourth relation,
a fourth emergent kind of, I don't know how to describe it other than a fourth relation.
Yeah, I mean, you've called AI the most powerful general purpose technology
that we've ever invented.
And when I read that line in your book, I was thinking to myself, I was like, man, you are now at the epicenter of helping Microsoft shape this at scale.
You know, and then it made me wonder, what are you actually then trying to build?
Because everyone has a different answer to this question.
I've realized.
You know, if you ask Sam Altman, ChatGPT, Sam Altman says, I'm trying to build artificial general intelligence.
And I go like, oh, I like the app.
He's like, I don't care about the app, actually.
I want to make the God computer.
And then you speak to somebody else and they say,
oh, I'm trying to make AI that can help companies.
I'm trying to make AI that helps.
So what are you actually trying to build?
I care about creating technologies that reduce human suffering.
I want to create things that are truly aligned to human interests.
I care about humanist superintelligence.
And that means that at every single step, new inventions have to pass the following test.
In aggregate, net net, does it actually improve human well-being, reduce human suffering,
and overall make the world a better place?
And it seems like ridiculous that we would want to apply that test.
Like, surely we would all take it for granted that no one would want to invent something that causes net harm.
Yeah. But, you know, there's certainly been other inventions in the past that we could
think of that, you know, arguably have delivered net harm, right? And we have a choice about what we
bring into the world. And so even though it's in the context of Microsoft, the most valuable
company in the world today, we have to start with values and what we care about. And to me,
a humanist superintelligence is one that always puts the human first and works in service of the
human. And obviously, there'll be a lot of debate and interpretation over the next few decades
about what that means in practice, but I think it's the correct starting point.
I've always wondered how the two sides of your brain sort of wrestle with each other around
these topics because, you know, someone asked me, they were like, oh, who are you having on?
I go, Mustafa's coming on, Mustafa Suleyman, and they're like, oh, what is he doing?
I explained it a little bit.
And I was like, oh, so like an AI guy.
Then I was like, yeah, but he's also a philosopher.
And they're like, what do you mean he's a philosopher?
Then I was like, no, no, no, like actually.
Like actually, this is somebody who has studied philosophy, is engaged. Like, you
think about the human ramifications of the non-human technologies that are being built by and
for humans. And, you know, what it is for me is, I always judge people by what they
choose to yada yada, if that makes sense, you know. So I've talked to some people in tech and I say,
what about, what about the dangers? And they go, oh, look, I mean, of course we've got to be aware of the
dangers, but the future, and it's so big. And then I remember once I met you, the first time
actually I met you, I said, the technology is amazing. And then you went, the dangers. Let me tell
you about the dangers. Let me tell you about the things we need to consider. And I was like,
what just happened here? Do you know what I mean? I was just like, is this guy working against
himself? And so I wonder now, like, when you're in that space, when you're working on something
that is that big, how do you find the balance? Because we would be lying if we said humans could
live in a world where we could ignore technologies. I've seen people say that. My opinion is that you can't
ignore a technology, right? You can't just be like, no, we'll act like it doesn't exist. But on the
other hand, we also can't act like the technology is inevitable because then we've given ourselves up.
So when you're the person who's actually at the epicenter of trying to build our future,
I know it's not you alone, please don't get me wrong.
But how do you think about that?
How do you grapple with philosophy versus business, philosophy versus technology, human versus like an outcome?
What are you thinking of?
You've called out my split brain personality.
And now I'm like pinging, which side of me should answer?
You can answer twice.
You can pick your answer.
I'll give both.
I think part of it is just being English, you know.
I'm kind of, like, I'm more
comfortable than the average American thinking about the kind of cynical, dark side of things.
It's those rainy days, man. It's those rainy days, man. It's those rainy days. And I just think,
I don't know, like truth exists in the honesty of looking at all sides. And I think if you
have a kind of bias one way or another, it just doesn't feel real to me. And I guess that's
kind of my philosopher or kind of academic side that is a core part of who I am. I'm
comfortable, you know, living in, you know, what to some people might seem like a set of
contradictions, because to me, they're not contradictions. They're truth manifested in a single
individual. But if you are honest about it, it's also manifested in every single one of us, too.
You know, I happen to be in a position, but like the company has to wrestle with these things, our governments have to wrestle with these things. Every single one of us as citizens has to confront this reality because, you know, every single technology just accelerates massive transformation, which can deliver unbelievable benefits and also create side effects. And it's like that that idea has been repeated so many times. It now kind of sounds trite. But once you get over the trite part, you still have to engage with the fact that the very same thing,
that is going to reduce the cost of production of energy over the next two decades by 100x.
Reduce the cost of energy by 100x.
You think that's what AI can do?
100%.
Like, I feel very optimistic about that.
Wait, wait.
So then I'll say that again.
So reduce the cost of energy.
I think energy is going to become a pretty cheap and abundant resource.
I mean, even solar panels alone are probably going to come down by another 5x in the next 10 years.
Just that breakthrough alone is going to reduce the price of most things.
And what is that through?
Is that like the AI being more efficient, teaching us how to create different energy grids,
teaching us how to create energy differently?
Like what would you predict it coming from?
Well, I mean, so at the most abstract level, these are pattern matching systems
that find more efficient ways than we are able to invent ourselves as humans for combining
new materials.
Now, that might be in grid management and distribution.
It might be inventing new synthetic materials for, you know, batteries and storing renewable energy.
It might be in more efficient photovoltaic cells that can actually capture more per square inch, for example.
I mean, there are so many breakthroughs that, you know, we are kind of on the cusp of that require just one or two more pushes to get them over the line, even the superconductors from last year.
Those things, any one of those could come in, right?
And if they do, we see massive transformation in the economy.
I mean, imagine if by 2045, you know, energy is, let's say, 10 to 100x cheaper. We will be
able to desalinate water from the ocean anywhere, which means that we would have clean water
in places that might be 50 degrees or whatever, you know, 120 degrees hot, right?
Which means that we can grow crops in arid environments, which
will mitigate the flow of migration because of climate change, which means that we could run AC units
in places that we never could before.
You know, there are so many knock-on effects of fundamental technologies, general purpose technologies
like energy coming down by 10 to 100x.
So there are huge reasons to be optimistic that everybody is going to get access to these
technologies and the benefits of these technologies over the next couple decades.
Yeah.
And that will make life much easier and much cheaper for
everybody on the planet. So let's jump into that a little bit. Like, it could make energy how many
times cheaper? Well, I was saying 100x cheaper over 20 years. 100x cheaper over 20 years. So this is one of
those instances that I've struggled with because you know like depending on where you get information and
how you get information it changes how you perceive the issue. Right. So I remember being really
angry when I saw how much water is consumed by, like, typing one query into Copilot, ChatGPT,
any AI model. Then I was even more angry when I saw how much water is consumed by, like,
getting a picture, you know, made. And then I saw something else that was like, oh, this is nothing
compared to cars and, you know, produce and like making hamburgers and that. And then I was like,
okay, like, where's the information coming from? Where is it not coming from? The
response to the price of AI, like is it driven by the AI industry saying, no, this is actually
not that bad? Or like, how do you think we should look at it? Or how do you look at it?
I mean, look, it consumes vast amounts of resources. Precious metals, electricity, water.
No question about that, right? On the energy side of things, all of the big companies now are
almost entirely 100% renewable. Certainly Microsoft, 100% renewable. I think we have
33 gigawatts of renewable energy in our entire fleet of computation.
For comparison, I think Seattle consumes about two gigawatts, so just to put that
into perspective.
The whole of Seattle consumes two gigawatts.
And Microsoft is creating how much?
33.
Overall in the fleet.
Wow.
This is worldwide.
Yeah, no, no, no.
But still.
And the vast majority of it is 100% renewable,
coming from solar or wind or water.
But it also consumes a lot of water in the process.
Like we have to cool these systems down and, you know, for sure, that consumes a lot.
Now, I don't know that there is an easy way of, you know, there's no shortcut there.
It's expensive.
It consumes, you know, resources, it consumes a lot of resources from the environment.
But I think net net, when you look at the beneficial impact, to me, is justified.
Like, you know, you wouldn't give up your car or tell people to give up their car anytime soon because it uses aluminium and rubber and is an essential part of your existence.
And I think AI is going to become an essential part of everybody's existence and justify the environmental costs, even though that doesn't mean that we have to go and consume diesel generators and, you know, carbon-emitting energy.
Like, we get to start again from scratch, just to say, new technology arrives, a new standard has to be applied to it, which means that our water
has to also be cleaned and recycled, and many of the data centers do now take full
lifecycle responsibility for cleaning the water. And the same with the energy, it has to be
renewable. So there's no easy way out. It's just a rough reality that producing things at this
scale is definitely going to consume more resources from the environment. It's funny. Every time I
try and think of it, I think of the gift and curse that comes with anything that scales. You know,
the analogy I'll always use for myself is I'll go, I think of, like, an airplane.
Before the airplane is invented, especially, like, a large jumbo jet,
the number of people who can die while being transported at one time is much lower,
really, if we're honest.
You know, a car, four people, six people, whatever it might be.
Still tragic, but a smaller number.
The plane comes along.
You can go further.
You can go faster, but it also means there can be something more devastating on the other
side of that plane crashing or something going wrong.
And it feels like that scales with AI as well.
It sounds like you're saying to me, on the one side of it, this technology could completely change
our relationship with economies and finance and society. But then there's the looming other side
of it that could crash. And so maybe that's a good place for us to start like diving into this
is what's noise and what's very real for you as somebody who sees it? Because everyone gets a
different headline about AI. It doesn't matter where you are in the world. It doesn't matter your
religion, your race, whatever it is, everyone gets a different headline about AI, but when you're
looking at it, as somebody who is working on creating it every single day, what is real and what is
noise in and around AI? So I think it's pretty clear to me that we're going to face mass job
displacement sometime in the next 20 years. Because whilst these technologies are for the first part of
their introduction, augmenting.
Like, they add to you as a human.
Yeah.
It's like a bionic leg.
Yeah, it's like a bionic leg, but for, like, cognitive laborers.
You know, I think, like, you could, you know, who is it?
I think it was Steve Jobs that called it, like, the bicycle for the mind.
You know, it's just sort of exercising, you know, digital technologies allow you to exercise
new parts of your mind that you didn't know you had access to.
And I think that's definitely true.
But much of the work that people do today is quite routine and quite predictable.
It's kind of mechanized, yeah, like cognitive manual labor.
And so that stuff, the machines are going to get very, very good at those things.
And the benefits to introducing those technologies are going to be very clear for the company,
for the shareholder, for the government, you know.
And so we'll see like rapid displacement and people have to figure out, okay,
what is my contribution to the labor market? I think those fears are very real, and that's where
governments have to take a strong hand, because there needs to be a mechanism for taxation
redistribution. Taxation is a tool for incentivizing certain types of technologies to be introduced
in certain industries, and so it's not just about generating revenue. It's about limiting,
adding friction to the introduction of certain technologies so that we can figure out how to
create other opportunities for people as this transition takes place. Yeah, it's funny. One of my
favorite quotes I ever heard was, I think it was Sweden's head of infectious diseases. I think
that's what his job was. I spoke to him during the pandemic. And we were just talking about life
in Sweden and what they do. And I asked him a question about labor and Sweden and how everything
works out there. And he said something fascinating. It was, no, I think he actually was in the labor
department on that side. He said in Sweden, unlike in America, he said in Sweden, we don't
care about jobs. We care about the workers. And I remember that breaking my mind because I went,
oh yeah, everyone always talks about like the job, as if the job is something that is affixed
to a human being. But really, the human is the important part of the equation. The job is just
what the human does. And so our focus has to be on making sure that the human always has a job. But
from what you're saying,
we don't know what the job will be
because the jobs that we know now
are sort of easy to replace.
It's data entry, data capturing,
sending an email,
doing an Excel spreadsheet.
That stuff is easy, actually,
when it comes to AI.
And then now we don't know
what the next part of it is.
And so maybe my next question to you then is,
when you're in that world,
the philosopher part of your brain,
like, what do you think the onus is on us
and the, like, I mean, the tech companies
and all the,
what the new job is, or do we not know what it will be?
Well, but also, I would tweak what you said, that the job of society
or the function of society is to create jobs that are meaningful for people.
Yeah.
Like, I'm not sure I buy that.
Like, I think that many people do jobs which are super unfulfilling
and that they would be quite happy to give up if they had an income.
This is true.
Like, and so, like, we're probably very lucky that we get paid for the thing that we
would be doing if we didn't get paid.
Right. I would certainly be doing that. And so I think the function of society is to create
a peaceful, supportive environment for people to find their passion and live a life that is
fulfilling. That doesn't necessarily have to overlap with a job or work. I mean, maybe I'm
too much of a utopian, but I dream of a world where people get to choose what work they do and have
true freedom. And people get tense about that idea because they're like, you know, work is about
identity and this is my role in society and this is what is meaningful to me. And if I didn't have
my job, I would be, it's like, no, come on, man. Take a minute to think seriously. If you didn't
have to work today, what would you do with your life? This is one of my favorite questions that
I always ask people. If you didn't have to worry about your income, what would you do? And, you know,
if you get into the habit of asking that question, people say some crazy things.
It's so inspiring.
And so, yeah, maybe I'm a utopian dreamer, but I do think that is a relevant question
for us to think about by 2045.
I think it's a real chance that if we get this technology right, it will produce enough
value, you know, aggregate value to the world, both in terms of the reduction of the cost of
stuff, because of energy, because of health care, because of food systems.
and basically because we won't have a lot of these middle, you know, tier jobs, we'll
have to figure out a way to fund people through their lives. And I think that just unleashes
immense creativity. And it will create other problems, right? It will create quite a profound
existential problem. I'm sure you have friends who don't work anymore and a kind of, you know,
it's not as though they're retired. They're, like, maybe middle-aged or even younger. Maybe
they grew up rich. It's a hard thing to figure out. Like, who am I? Why am I here?
What do I want to do?
Those are like profound human questions that I think we can only answer in community with real connection to other people, spending time in the physical world, having real experiences.
And like it or not, that I think is what's coming.
And I think it's going to be pretty beautiful.
It's funny you say that because I found, when I think of my friends, the grappling that they have to do in and around their identity and work, I find is directly related.
to the world or the market that they live in.
So my American friends have the greatest connection
and binding to their jobs.
And as I've gotten to know them,
I've understood why.
In America, your job is your health care.
So if you don't have a job, you don't have health care.
And if you don't have health care,
you're worried about your survivability.
And if you don't have survivability,
then what do you, you know what I mean?
And then do you have housing?
And if you don't have housing,
then who are you as a person?
You look at all of these things.
It's very hard in America to separate job from life.
It's almost impossible.
And then when you start traveling around the world,
you go to countries where they have like a really strong safety net.
And you find that people don't really associate themselves with their jobs in the same way.
Because now their life isn't determined by their job.
Their job affects their life, but it doesn't make their life.
And then I remember back to times when I'd be in a township in South Africa
or even in what we called the homelands where our grandmothers would live.
And, you know, that was the extended family.
people literally living in huts and dirt roads,
and everyone would go, oh, what a terrible, wait,
but I'll tell you now, there was no homeless person there.
There was no one, like, stressing about a job in the same way.
I'm not saying nobody wanted a job,
but the gap between them thinking they didn't exist
because there was no job was a lot greater
than the people who are living in a world where, you know,
your job was you. And so it's interesting that you say that,
because I do wonder how easy it'll be for us to grapple with it,
like, what that time will be.
But it also just shows how much variation there is.
You know, we come from, you know, in terms of how humans live their lives.
Yeah.
I feel like we come from, you know, whatever our different backgrounds,
we're still quite Western-centric, which is sort of quite homogenous, that, you know,
we've, like, sort of had 300 years of specialization, education, Protestant work ethic,
atomization of families, smaller and smaller units, spread, you know, leave your home,
you know, sort of physical locale where your community is.
And I think there's a kind of loneliness epidemic as a result.
Like I feel, you know, you probably like me, you know,
pour your life and soul into your work.
And then what was, I guess, what was it like for you when you switched your job, right?
Like, because that was obviously a massive part of your identity.
It was what you did every day 24-7.
But you see, to that point,
I left The Daily Show to go spend more time in South Africa,
at home.
And, you know, one of my best friends had a beautiful phrase that he said to me.
He said, in life, sometimes you have to let go of something old to hold on to something
new.
It's not always apparent what the value is of, like, of something that we're sacrificing.
It's not always apparent.
But if we are unable to assign that value ourselves, we'll get stuck.
So leaving the Daily Show, I leave a ton of money behind.
I leave, you know, the status, everything.
but no one has assigned a value to my friends.
No one's assigned a value to my family.
No one's assigned a value to the languages
that people speak to me in my country.
There's no economist article on that.
So I don't know what the value of that is.
Someone can look at my bank accounts
and go like, that's value.
But they don't tell me what my friend's actual value is.
And so I think that's where, you know, it's hard.
I just had to like decide it for myself.
And I think we all have to.
But I think some people won't have that luxury,
because of how, you know, how close they are to the line.
You know, when you talk about those jobs that are going to disappear, there's somebody who's
going, I don't have the luxury of pontificating.
Exactly right.
Because tomorrow is what's coming.
I can't think about like, oh, what will be?
And that's like a real luxury, I think.
And I think that's why talking about the dangers and the fears now is so important because
this is happening super fast.
I mean, the transition has surprised me and all my other peers in the industry in terms of how quickly it's working.
And at the same time, you know, we're also kind of like unsure about whether the nation state is going to be able to sort of respond to the transition too.
Because, you know, you're maybe lucky because you already had enough income that you didn't have to worry about it and you could, it was really just like connecting to your heart.
But many people are going to be like, well, I'm going to have to still be able to provide
food for my family and carry on doing my work throughout this crazy transition.
So then let me ask you this. You see, that is such an interesting thought. You
and your peers were shocked and are shocked at the rate of how AI is going and growing. To me,
that blows my mind, because I go, like, of course I'm shocked. I don't know how to code.
You get what I'm saying? Of course I'm going to be shocked. But now when you say that,
I then wonder, as somebody who's been, you are truly an OG in the game of AI, like really,
Really, really. You're not like one of the people who's jumped on now because it's blowing up. You were in it before there was money. And now you're in the thick of things. Where do you think we are in AI's development? Are we looking at a baby? Are we looking at a teenager? Are we looking at, like, a 20-something-year-old? Like, where do you think we are when we look at AI's development?
I think part of the challenge with the human condition in this mad, globalized, digitized world
is that we're so overwhelmed with information and we're so ill-equipped biologically to deal with
scale and exponentials. Like, very few, like, when I say 2045, like, I'm just used
to living in 2045. Like, it's just my weird, like, I've always been like that. It's kind of
become second nature to me to casually drop that. But, you know, if I
do that with some random people, like, at the bar, I'm obviously just a freak. Just
no one thinks about that. People barely think about what they're going to do in two weeks,
let alone 20 years. And likewise, you know, people are sort of not equipped to think of what
does an exponential actually mean. Now, I'm lucky enough that I got a practical intuition for the
definition of an exponential, because between 2010 and 2020, for 10 years, me and a bunch of other
random people worked on AI, and it was sort of working, but basically didn't work. That's the flat part
of the exponential: even though we could see little doublings happening every 18 months, it started
from a base of, like, effectively zero, if you think about it. And so it isn't until the last few doublings
that you see this massive shift in capability. I mean, for example, like four years ago,
before GPT-3, a language model could barely predict the next word in one sentence.
Like, it's just kind of random.
It was a little bit off, often didn't make sense.
This is 2013?
No, this is, what, three or four years ago.
2023?
No, no, not 2023. Three or four years ago.
So it's like 2020 or 2021.
Okay.
Let's say 2021, something like that.
I mean, literally, you read, you look at the output of the language model.
Because I worked on LaMDA at Google in 2021.
And it was super cool, but the models just before that were just, like, terrible.
And I think many people played with GPT-3.
And a lot of people were like, eh, meh, this is like, what do I do with this thing?
But for those of us that were lucky enough to see the flat part of the exponential,
we could get a better intuition that the doublings were actually happening and that the next set of doublings,
you know, to double from this base of, oh, it's kind of okay, but it's not that accurate.
We knew that it was going to be perfect with stylistic control, with no bias, with minimized hallucinations,
with perfect retrieval from real-time data.
And so then it's actually quite predictable
what capabilities are going to come next.
So, for example, the last couple of years,
the models have gone from generating perfect text
or really good text, let's say,
to then just learning the language of code.
How did we know that it was going to become a human-level performance programmer?
Because there's no difference in the structure
of the input information between code
and text and images and videos and audio. It's the same mechanism: data, compute, and the algorithm
that learns the pattern in that data.
So then you can say, okay, well, what are the next modalities that it's going to learn?
That's kind of why I make the prediction about material science, right, or other aspects
of biology or physics.
Yeah.
You know, if the structure of the data has been recorded and is clean and is high quality,
then patterns in it can be learned well enough to make pretty much perfect
predictions about how they're going to unfold next.
And that's why it's called a general purpose technology, because it's fundamental.
There's no specialized hand-coded programming for each data domain.
Damn.
I mean, you know what it reminds me of?
Have you ever seen that thing where they talk about, when they try to explain exponential
to you, there's one example they give, which is folding paper.
Yeah.
You know, so they go, if you fold a piece of paper in half, and then you fold it in half,
and then you fold it in half
and I think you can't do it more than seven times
or something like that. But then they go,
if you could keep folding it in half,
the number to get to, like, space is really small.
I think it's like the 64th gets to the moon or something crazy.
But I remember seeing that, and I was like, wait, wait, wait,
wait, what? They're like, if you do it once, and then you do it again,
and I was like, wait, 64? I was like, no, what do you mean?
Like 64,000? They're like, no, no, no, 64. And that's when
you start to understand how we just generally don't understand
exponentials, you know, and these, like, compound gains.
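For the record, the folding arithmetic can be checked directly; the commonly quoted figure is about 42 folds to reach the moon (64 doublings of a 0.1 mm sheet would already stretch roughly a fifth of a light-year). A quick sketch of the compounding:

```python
# Doubling a 0.1 mm sheet of paper: how many folds to pass the moon?
THICKNESS_M = 0.0001            # 0.1 mm, in meters
MOON_M = 384_400_000            # average Earth-moon distance, meters

folds, height = 0, THICKNESS_M
while height < MOON_M:
    height *= 2
    folds += 1

print(folds)                                  # 42 folds pass the moon
print(THICKNESS_M * 2**64 / 9.461e15, "ly")   # 64 folds ~ 0.2 light-years
```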
And so now that's where I wanted to ask you about the idea of containment.
Your book, of everyone I've read, I mean, everyone who's written about AI, who's like in it, in it, in it, was the only book that I would say spent the majority of its time talking about the difficulties of grappling with AI.
Yeah, you talked about the beauty of what we could do with
medicine and technology, and we should get into that, to talk about some of the
breakthroughs that you made at DeepMind. But, like, containment seems like the
greatest challenge facing us, and we don't even realize, and we don't really talk
about it. Talk me through what containment means to you and why you think we should
all be focusing on it. So the trend that's happening is that power is being
miniaturized and concentrated and it's being made cheap and widely available
to everybody. Why do I say power? Because making an accurate prediction about how text is going to
unfold or what code to produce or what, you know, frame to extend given a video, that is power.
Like, predictions are power. Intelligence is an accurate prediction given some new environment.
That's really fundamentally what we do as humans. We're prediction engines. And these things are
prediction engines. So they're going to be able to make phone calls, write emails, use
APIs, write arbitrary code, make PDFs, use Excel, act like a project manager that can do stuff
for you on your behalf. So you as a creator or as a business owner, you're going to be able to
hire a team of AIs, specialized in marketing or HR or strategy or whatever it is, or coding.
And that's going to give you leverage in the world. I mean, you said about the kind of, you know,
the strange, like, function of scale, what this is going to do is scale up every single individual
and every single business to be able to deliver way, way more because the cost of production
is going to be, you know, basically zero marginal cost. Now, on the one hand, that's amazing
because it means the time between you having a creative idea and being able to prototype it
or experiment with it in some way or even build it up to the max scale is going to shrink
to basically, you know, nothing.
You just think something, vibe code it up in natural language, produce the app,
build the, you know, website, try out the idea that you have.
But the flip side of that is that anybody can now not just broadcast their ideas,
like we had with the arrival of podcasts or the arrival of blogs on the web before that,
it meant that anyone could talk to everyone.
Yeah. Which was amazing.
Because no one controlled the infrastructure in a way.
Exactly. And it's super cheap for anybody to go publish a website or do a blog or do a podcast.
So the same trend is going to happen for the ability for people to produce stuff, do actions.
So in social media, it was like anyone can now broadcast. Now with AI, anyone can now take action.
You can like build a business, you know, start a channel, create content, you know, whatever it is that you believe in.
I mean, you might be a religious person. You're trying to evangelize
for your, you know, or you're trying to persuade somebody of your political ideas.
Everyone is going to have a much easier time of executing on their vision.
And obviously the benefits of that are pretty clear, but the downside of that is that that
inevitably causes conflict because we just disagree with each other.
Yeah.
We don't hate each other.
You're not evil.
I'm not evil.
But we've got different views.
And if I can just kind of, at the click of a button, execute my crazy ideas, and you
could execute your crazy
ideas, that are like practical actions affecting the real world, and everyone's doing the same
thing, then inevitably that is going to cause an immense amount of conflict. At the same time, the
nation state, which is supposed to have a monopoly over power in order to create peace, that's the
contract that we made with the nation state. Oh yeah. Nation states are getting weaker and kind of
struggling, right? And so containment is a belief that completely unregulated power that proliferates at
zero marginal cost is a fundamental risk to peace and stability.
And it's an assumption that you have to gently restrict in the right way mass proliferation
of super powerful systems because of the one to many impact that they're going to have.
If I hear what you're saying correctly, it's almost like, it's almost like you're saying,
if something is hard to do, only a few can do it.
And if only a few can do it, it's easy to regulate how it's done because you only have to regulate
a few. But if something is easy to do, everyone can do it, and now it becomes infinitely harder
to regulate, because everyone can do it. Yeah, that's absolutely spot on. That's a much better way
of putting it than I put it. Exactly. Friction is important for maintaining peace and stability.
If you have no friction and the cost of execution is zero and scale can be near instant,
that's where you could, like, yeah, yeah, yeah. I just, maybe I spend too much
time in 2045, but I can see a world where that kind of environment really just creates a lot
of chaos. Well, no, I agree with you. Here's where I think of it. Let's use a real, you know,
current day example, news. I lived in news for a long time and I saw it firsthand. When there were
three news networks in America, if something was like off with the news, people knew where to go
immediately. You knew who to hold accountable. You knew who, you know, got into trouble or didn't.
But there was, like, a, it's like, we knew where to go. Then you get cable news. It expands.
It becomes a lot harder now. Wait, who's saying the news? Who's saying the truth? Who's not saying
that? Do you punish them? But still, you could go to them. You know, so somebody like Fox News can get sued
for saying something about Dominion Voting Systems. But Dominion knew where to go. They went, we're going
after Fox News. So in a strange way, even in that world, the system is still sort of working,
because there's friction, right?
It is where it is and it has to be broadcast.
Then the internet, streaming, YouTube, etc.
You don't even know who the person is,
where the person is if it's a person,
and then if they say something that's not true
and it enrages the masses, where do we go?
And it's not just that it's going to say something,
it's going to do something.
Ah, damn, Mustafa.
It's going to build the website, it's going to do the thing.
And so, look, I think this is the point
about confronting the reality of what's coming.
Yeah, but wait, wait, go back on that.
You see, that's something I always forget.
Oh, man.
See, we always think of AI as just like saying.
Let's talk a little bit more about the doing,
because that is what makes it unique.
You know, on one of the episodes, we had Yuval Noah Harari, the book, you know.
Of course, I'm good friends with Yuval.
He's awesome.
And Yuval, you know, was on for his book, Nexus.
And we're talking about information and systems.
And stories.
And stories.
Yeah. And one of the things he kept going on about was he said, I know AI is a tool, but we've never had a tool that makes itself. And you talk about that as well. Yeah. We've never had a hammer that makes the hammer without you getting involved. It's just make the thing, make the thing, make the thing. An atom bomb is one thing, but no atom bomb makes an atom bomb, you know. And so that. Well, there's a lot of ideas there. So there's, so firstly, actions are stories. And Yuval's point was that the history
of civilization has been about creating stories, religious stories, historical stories,
ideological stories, like stories of oppression, of domination, of persuasion. And it was
really humans that had the, it was the friction of being able to pass on that story through
spoken word and then through digitization, which slowed down the spread of change. And that
was an important regulator and filter. So as we've talked about, the digitization speeds up,
the distribution of those stories, which allows that information to spread. But it's not just
that. It also is an actual agent that is going to operate like a project manager in your little
mini persuasion army. And people are going to use those things, not just for phishing attacks
or for, you know, sort of selling stories, but for actually making the PowerPoint presentation
of building the app, of planning, you know, making the project plan. And so it's kind of operating
just as a, you know, member of your team would. And I think that's where all the benefit comes from,
but it's also where there's, like, massive risks at the same time. And then the other point
that you made about, like, it can edit itself. This is a new threshold. It, you know, a technology
that is able to observe what it produced and modify that observation. Like, it can critique its own
image that it produced and say, well, it looks like this part of the hand was
kind of weak. Okay, so we'll generate another one, or produce a poem or a strategy, and then edit
that and update it. And that's just editing its output, but it can also edit its kind of input,
its own code, its own system processing in order to improve with respect to its objective.
And that's called recursive self-improvement, you know, where it can just iteratively improve
its own code with respect to some objective. And I've long said that that is a threshold,
which presents significantly more risk
than any other aspect of AI development
that we've seen so far.
That really is a kind of subset of technologies
that if we're really going to focus on humanist superintelligence,
being skeptical and critical
and auditing the use and development
of recursive self-improving methods,
that's where I think there's genuine risk.
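As a rough sketch of the critique-and-revise loop described here, consider the following. The generate and critique functions are hypothetical stand-ins for language-model calls, not any real API; full recursive self-improvement would go further and modify the system's own code or weights against an objective.

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a language-model call.
    return f"<draft {random.randint(0, 999)} for: {prompt[:30]}>"

def critique(draft: str) -> str:
    # Hypothetical stand-in for the model judging its own output.
    return random.choice(["looks good", "this part of the hand looks weak"])

def revise(prompt: str, max_rounds: int = 3) -> str:
    # The system observes what it produced and regenerates until satisfied.
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback == "looks good":
            break
        draft = generate(f"{prompt}\n\nfix this: {feedback}\n\nprior: {draft}")
    return draft

print(revise("an image of a hand"))
```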
We're going to continue this conversation
right after this short break.
So do you ever feel like you could be sitting in a position
where you're sort of like the Oppenheimer of today?
Do you ever feel like you're sitting in a position
where you're both grappling with the need for the technology
but then also the not zero percent chance
that the thing could burn the atmosphere?
Like, how do you grapple with that? I often wonder this even when I think of, like, engineers
and people who are writing the code. I'm always fascinated by people who write the code for the thing
that's going to write the code. Like, other people have jobs, but, like, if you told me as Trevor, hey,
Trevor, can you help this AI learn to do comedy? I'll be like, no. Do you know what I mean? So I'm always
intrigued by the coders who are making the thing that's now coding. Like, I just want to know how you,
how you, like, wrestle with this entire thing. Do you think it's larger than us and we have to wrestle
with it? What is, what is that battle like for you? You know, in COVID, when I started writing The
Coming Wave, my book, I was really motivated by that question. That was actually one of the
core questions is how does technology arrive in our world and why? What are the incentives, the
system level incentives that default to proliferation that just produce more and more? And
it's very clear that there's demand to live better. People want to have the cheaper t-shirt from,
you know, the more affordable thing. You want to have the cheaper car. You want to be able to go on
vacation to all the places around the world. And so planes get cheaper and more efficient because
there's loads of demand for them. And so it's really demand that improves the efficiency and
quality of technologies to reduce their price so that they can be sold more. And that is why
everything ultimately proliferates and why it inevitably happens, because we haven't really had
much of a track record of saying no to a technology in the past. There's regulations around
technology and restrictions, which I think have been incredibly effective if you think about it.
Every technology that we have is regulated immensely: flight or cars or emissions. I mean,
you know, so people, I think particularly in the US, like have this sort of allergic reaction
to the word regulation.
But actually, regulation is just the sculpting of technologies, right, chipping away at the edges
and the pain points of technology in the collective interest.
And that's what we need the state for.
The state has the responsibility for the common good.
And that's why we have to invest in the state and build the state, because it isn't in the
interest of any one of the individual corporate actors or academic, you know, researchers,
AI researchers, or anyone individually to really take responsibility for the whole.
And that's why we need governments more than ever, right?
Not to hold us back or to slow us down, but to sculpt technology so that it delivers all of
the benefits that we hope it will whilst limiting the potential harms that it can produce.
I want to take a step back and talk about your journey with AI.
Today, in 2025, it seems obvious.
Now when I speak to people,
everyone's like, oh yeah, AI, AI, AI.
I wasn't even in the game when you, I mean,
I was like an early, just, you know, layman.
I remember showing people at my office
the first iterations of GPT-3 and DALL-E,
and remember when DALL-E was still, like, I mean,
like, basically tell it about an image and then just go away.
Go book a vacation for a week, come back for your image, and it would make it.
But even then, I was like, this is going to change everything.
People were like, oh, no, I don't know.
And I remember struggling to convince people that this thing was going to be as big as it was going to be.
I was doing this in like this timescale.
When I look at your history, you literally have years of your life where you were in board rooms,
telling world leaders, telling investors,
telling tech people in Silicon Valley,
hey, this is what the future of AI is going to be
and no one listened to you.
Not no one's zero,
but I mean like no one listened to you, right?
Now, that made me wonder two things.
One, should we be worried about our world
and our future being built by people
who are unable to see the future?
And two, what did you see
then that we might not be seeing now?
Shit, that's a hard question, man.
I think, as you were talking, one of the memories that came to mind was, remember in 2011,
we had an office in Russell Square, the center of London, near University College London,
UCL. And someone in the office showed me this handwriting recognition algorithm.
It would just, you know, parse some text.
And that's actually been available at the post office for many, many years.
It would read the zip code and read the address and stuff like that.
And it was really just doing recognition.
So these letters represent these letters.
You know, we sort of can transcribe that text.
This funny looking apple is actually an A.
And this funny little loop thing is actually an L.
And yeah.
So I was like, that's kind of incredible that a machine has eyes and can understand text
and can transcribe that.
This is pretty cool.
But then what we were really interested in is,
if it recognizes it accurately enough,
then surely it should be able to generate
a new handwritten digit or a number
that it's never seen in the training set before.
This was 2011.
256 pixels by 256 pixels
with a handwritten seven or a zero in kind of gray, right?
This is sort of like five colors.
Yeah, yeah, yeah.
And I remember standing over the shoulder of this guy, Daan Wierstra, who was one of our, like, I think he was employee number four at DeepMind.
And he was just enamored by this system that he had on his machine because it could produce a new number four that wasn't in the training set.
It was, it learned something about the conceptual structure of zero to nine in order to be able to produce a new version of that.
So it could write a number that it hadn't learned how to write.
Like, it had never seen it before.
It had never seen that number written that way and it wrote it itself.
Exactly.
And so coming back to what we were saying at the beginning about understanding,
if it's able to produce a new version of a number seven that it's never seen before,
then does it understand something about the nature of handwritten digit number seven in abstract,
the conceptual idea of that?
Oh, man.
And so then the intuition that that gave me was, wow, if it could imagine new forms of the data that had been trained on, then how far could you push that? Maybe it could generate new physics for the world. Maybe it could solve hard problems in medicine. Maybe it could solve all these energy things that we're talking about. It's just learning patterns in that information in order to imagine. And that's what I love about hallucinations. Everyone's like hallucinations.
Hallucination is the creative bit. That's what we want them to do. We don't want an Excel spreadsheet
where you input data and you get data out. That's just a, you know, that's just a handwritten record
of, we want interpolation, we want invention, we want creativity, we want the abstract blurry bits.
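A toy version of that 2011 moment, with the caveat that this is an invented illustration rather than the actual DeepMind system: fit per-pixel probabilities from a couple of "training" digits, then sample a brand-new digit that appears in neither.

```python
import random

SEVENS = [  # tiny made-up 5x5 binary images of a handwritten-ish "7"
    [1,1,1,1,1,
     0,0,0,1,0,
     0,0,1,0,0,
     0,1,0,0,0,
     0,1,0,0,0],
    [1,1,1,1,0,
     0,0,0,1,0,
     0,0,1,1,0,
     0,0,1,0,0,
     0,1,0,0,0],
]

# "Training": the probability that each pixel is on, across the examples.
probs = [sum(img[i] for img in SEVENS) / len(SEVENS) for i in range(25)]

# "Generation": sample a new 7 that need not match either training image.
new_seven = [1 if random.random() < p else 0 for p in probs]
for row in range(5):
    print("".join("#" if v else "." for v in new_seven[row*5:(row+1)*5]))
```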
And so that was a very powerful moment for me. I was like, okay, this is weird, but we are
definitely onto something. Like this has never been done before and it's super cool. And let's just
turn over the next card. Let's see if it will scale. And it just scaled every year. It scaled
and scaled and scaled. So that was a very inspiring moment for me. And somehow, 10 years later,
15 years later, I managed to kind of hang on to that vision that generation and prediction
produces creativity. And the intelligence wasn't this kind of, because some people are quite
religious about intelligence. They're like, you know, no other species has intelligence. This is
very innate to humans. It must have been, you know, come from some supernatural being. But actually,
it's just applying attention to a specific problem at a specific time, the effective application
of processing power to produce a correct prediction.
I think that's what intelligence is, to direct your processing power to predict what would
happen if I tipped over this glass, right, at the right time.
You'd have to buy me another glass.
I would have to first clean my trousers from water.
So yeah, I forgot what your question was.
But that was the memory that came to mind.
No, you answered the first part, which I loved.
And then the second part really was,
if you, so you were, you were at DeepMind.
You're going around to these different people
Who now, by the way, are selling some version of AI
Or are investing in it or are telling us about it
Yeah
What sort of pisses me off is I go, like,
man, you didn't see this shit.
You know what I mean?
Yeah.
Like, you're going to be here, like,
let me tell you about AI.
And I'm like, yeah,
but when Mustafa was in a room telling you about it,
you were like, I don't see it, man.
And now they're going to act like they see it.
So I don't want to ask them what they now see.
I want to ask you what we're missing in this moment.
What do you think we are not hopping on?
You know, we talked about containment.
Yeah.
But what is the thing that we're not thinking about?
Yeah.
It's a really difficult question.
I'm not sure why people weren't able to see it earlier.
And maybe that's kind of like my blind spot that I need to think more about.
But like, I know that I can see pretty clearly what's coming next.
I think at the moment, these models are still one-shot prediction engines.
You know, you ask a question and you get an answer.
You know, it produces a single correct prediction at time step T.
But you as an intelligent human and every single human, and in fact, many animals,
continuously produce a stream of accurate predictions, whether it's like deciding how to get
up out of this chair or imagining, you know, this plant in purple instead of green, I'm a
consistently accurate prediction engine. The models today are just one or two shot prediction
engines. Okay. They're not, they can't lay out a plan over time. And the way that you decide
to go home this evening is that, you know, you know first to get up from your chair and then open
the door and then get in your car and da-da-da. You can produce, you can unfold this perfect prediction of
your entire stream of activity all the way to get back to your home.
And that is just a computational limitation.
I don't think there is any fundamental, you know, sort of algorithmic or even data
limitation that is preventing LLMs and, you know, these models from being able to do
perfect, consistent predictions over very, very long time periods.
So then what do we do with that technology?
Well, that is incredibly human-like.
If it has perfect memory, which it doesn't at the moment, but it's got very good memory, then
it can draw on not just its knowledge of the world, its pre-trained data, but the personal
experience that it has had of interacting with you and all of the other people, and
store that as persistent state, and then use that to make predictions consistently about
how things unfold over very, very, very long sequences of activity.
That is incredibly human-like and unbelievably powerful. And just as today, there's a kind of superintelligence
that is in our pocket that can answer any question on the spot. Like, we dismiss how incredible it is
right now. It's crazy. It's insane how good it is right now. And everyone's just like,
yeah, I don't really use it. Do you use it? Eh, a bit, a little bit, I talk to it. It's like,
come on. This is magic. It's magic in your pocket. Now, imagine when it's able to not just answer any
question about poetry or some random physics thing, but it can actually take actions over
infinitely long time horizons.
That... like, forget about the definition of superintelligence or AGI.
Just that capability alone is breathtaking.
And I think that we basically have that by the end of next year.
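A rough sketch of the shift being described here, from a one-shot answer to an agent that carries persistent memory through a long sequence of predicted steps. The function and field names below are this editor's inventions, not any real product's API; a real system would call a model where the placeholder sits.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    memory: list[str] = field(default_factory=list)  # persistent state across steps

def predict_next_action(goal: str, memory: list[str]) -> str:
    # Placeholder for a model call. A real agent would condition its next
    # prediction on the goal plus everything it remembers so far.
    return f"step {len(memory) + 1} toward: {goal}"

def run_agent(goal: str, max_steps: int = 5) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = predict_next_action(state.goal, state.memory)
        state.memory.append(action)  # experience accumulates as persistent state
        # A real agent would also execute the action in the world and store
        # the observation; the loop is what turns one-shot prediction into a
        # long unfolding plan ("get up, open the door, get in the car...").
    return state

print(run_agent("get home this evening").memory)
```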
Maybe I'm stuck in the world of sci-fi, but what I sort of heard you saying is, if we
continue growing AI in just the way that it's growing now, forget like an idea of what we don't
know. We could sort of get to a world where it can develop accurate predictions about what our
outcomes might be. Yeah. Like you're telling me that an AI, I could meet somebody on a date and then the
AI could tell me, based on my history and my actions and its persistent memory of me and what the
person says and how they are, we could theoretically get to a point where it could go, oh yeah, these are like
the possible outcomes based on your actions. Which is what you as a
smart human do every time you meet somebody anyway.
Well, your subconscious is making those predictions.
You're very kind to me.
You're very kind.
I have no comment on your dating life.
Yeah, but I mean, it's both utopian and dystopian.
Because on the one hand, I go like, wow, that would be amazing for so many people.
You make mistakes.
But then there's another one of, like, when do we not trust it?
When do we not believe its prediction?
When do... you know what I mean?
Like, that's like the ultimate
grapple now: if this thing has told me, hey, I know you like this person and I've run the
calculation. I've done this. I know you and I know what they say and how they are. You're going
to be broken up in two years, like, by the pattern. And let's say I do it once and it's right.
And then I do it again and it's right. And then I'm like, third time, do I do it or do I not
do it? Do I give this person a chance? Do I not give them? Do you get what I'm saying?
It's such a deep question because we trust. Trust is really a function of
consistent and repeated actions.
So you say you're going to do something
and you do what you actually said you were going to do.
And you do that repeatedly.
Yes.
And so everyone's like,
oh, but I'm not going to be able to trust AI.
Actually, you are going to trust AI because it's super accurate.
It's, you know, like you use Copilot, ChatGPT.
Most people don't think twice about using that now because it's so accurate.
And it's clearly better than any single human.
Like, I just wouldn't go and ask you, like, you know,
many other questions. It was probably going to know better, right?
I'm...
I'm going to ask you, dumbass.
I was about to say a list came through my mind.
I was like, don't mention any of those things.
Don't say those things.
Roeback.
You know what it makes me think of is like, I don't know if you've seen the documentary.
I don't know if you need to because you were there.
But like, DeepMind, the company that you were part of founding,
it occupies such a beautiful and seminal moment in artificial intelligence history for two
main reasons, in my opinion. You know, one is AlphaFold and then one is AlphaGo. And the reason it's so
important to me is because you worked on an AI project that grappled and tackled two of the
biggest issues, I would argue, that humans have sort of thought of as being their domain. AlphaFold
was medicine and discovery. That's what humans do. We are the ones who invent medicines. We are the
ones who create the new. We synthesize. We are the humans. You know what I mean? Synthetic.
It is us. We've made it. We're the creators. And then AlphaGo, for me, was almost even more
profound and powerful because it was like, people always used to say, look, man, with chess, our
brain is infinite, the human brain. And then Go has, I don't know, how many different
variations of a game. Like, no one can remember it, essentially, right? And I remember watching
the documentary. And you're seeing the Go champion. I think he was Korean, right?
Yeah. Lee Sedol.
Lee Sedol, yeah, Lee Sedol, great guy.
And you watch the story of Lee Sedol go up against your computer and everyone.
I mean, Korea, it's all over the news.
In America, it's on the news.
And they're like, a computer going up against.
And now it's like people are thinking back to Kasparov and the IBM computer.
A hundred million people watched it live.
100 million people wanted to see man versus the machine.
And I remember watching this and people going, man, let me tell you why it's different to chess.
Because you see, chess is actually quite simple to predict.
Whereas this, they said, was impossible.
And then you see AlphaGo and you see this game roll out.
And the moment for me that will always stick in my brain is when Lee Sedol is playing AlphaGo.
And it makes a move.
And everyone, everyone that's like a Go expert is like, oh, it messed up.
And then you see all the tech guys in the background who are working on it.
And they're like, what went wrong?
They're like, yeah, it messed up.
It shouldn't have done it.
Yeah, you can't. And you see the commentators and they're like, oh, yeah, you never do that.
You never do that. It's over. You don't do that. And then the game unfolds, the game unfolds, the game
unfolds. And then everyone's like, wait, what just happened here? And then people said, we've just seen
a move that no one's ever played. We've seen a game that's never unfolded before.
And there were two different reactions. And this is why the story stuck with me. The one reaction
was of most people who were fans of Lee Sedol.
They said, this is a sad day in human history
because it's shown that the machine is smarter than the man
and it shows that we have no future and no purpose.
And then they interviewed him and they said,
how do you feel?
Because you lost.
And you were representing mankind, you lost.
And he goes, well, first of all, losing is part of the game
and, you know, I'm humble enough to know that I won't always win.
And he said, but I'm actually happy.
And they said, why?
And he said, well, I'm happy to discover that there are parts of Go, I didn't know existed.
And he said, and this machine has reminded me to keep being creative and push beyond the boundaries that I thought existed in my head.
And I remember watching that and thinking, damn, it's amazing how you can look at the same story and have a completely different lesson that comes out of it, you know?
And I wondered, like, that was on the play side, extremely complicated.
But let's talk about the medicine side of things.
Folding proteins, I still don't fully understand it, but I've tried my best.
Essentially, what you and your team did... how would you put it?
How far would you say it moved us forward in terms of medicine and what we're able to do
in terms of, you know, healing disease?
Like how many years do you think we leapt by having something like AlphaFold?
I mean, people say a decade, some people say multiple decades.
I mean, understanding the functional properties of these proteins is really about understanding
how they fold.
And the way they fold and the way they unfold often expresses something, you know, physically
about how they're going to affect their environment.
Yeah.
And so you know what I think about it as?
Because I try and use analogies to help me when it's a complicated topic.
Did you ever play that game as a kid where someone would fold paper into like a little flower thing?
And then you would, like, do a little game and you had a number
thing, yeah. And then it would be like, let's see if someone likes you.
And we're like, one, two, three, four. Yeah, you suck.
One, two, three. That's how I think of it with protein folding is I go like, depending on how
that paper unfolds, that determines whether it's a disease, whether it's a cure, whether
it's a medicine, whether it's, you know?
Except there are billions of possible ways that it could unfold. And we could never
imagine all of those combinations. And I think, you know, it's actually very similar
to Go in that sense. Like, Go... so Go has a 19-by-19 board, black and white stones.
And there are 10 to the power of 180 possible different configurations of the Go board.
So that's a 10 with 180 zeros on the end of it.
It's like a number that you can't really even express.
And people often say, like, there are more possible moves in the game of Go than there are
atoms in the known universe.
I mean, that one I can't even get my head around.
But that's, that's, you know, well understood.
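The arithmetic behind that claim is easy to check. Each of the 361 points on a 19-by-19 board can be empty, black, or white, which gives 3^361, roughly 10^172, raw configurations; the widely cited count of legal positions is a bit smaller, around 2.1 x 10^170, and the exponent varies from telling to telling. A quick sketch:

```python
import math

points = 19 * 19                 # 361 intersections on a Go board
configs = 3 ** points            # empty/black/white at each point: raw upper bound

print(len(str(configs)))         # 173 digits, i.e. about 1.7e172
print(points * math.log10(3))    # ~172.24

atoms = 10 ** 80                 # common rough estimate for atoms in the universe
print(configs > atoms)           # True -- larger by roughly 92 orders of magnitude
```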
This is insane.
So, Move 37.
You just can't.
Yeah.
That just, I can't even get anywhere near.
I can barely cope with like folding bits of paper up to, to the moon.
Yeah, that breaks my brain.
This is ridiculous.
But so Move 37, you know, Lee Sedol actually got up from the table and walked off and sat in the bathroom for 15 minutes, like trying to digest the fact that a whole new branch in the evolutionary space of possible Go moves had kind of been shown to him, revealed to him.
And I think it's very similar with AlphaFold.
It's a sort of exploration of this complex information space.
And that's why it applies to language and to video and to coding and to game generation.
And all of these environments, we call them like modalities.
These modalities are all knowable.
And I think that's what, it's quite humbling.
It sort of makes, it reminds us as this sort of mere biological species that we're here for a kind of
finite period of time, living in, you know, space and time. But there's, there's also
this information space, this kind of infinitely large information space, which is sort of
beyond us. Like, it operates at this different level that isn't the atomic level. It's the level
of ideas. And somehow these systems are able to learn patterns in, in data space that is just
so far beyond what we could ever dream of being able to intuit. Like, we actually sort of
need them to simplify and reduce down and compress complex, you know, relationships
to bring it to our brains.
That's the level that we're at.
What do you think it does to us?
Like when we think of AI, I think of how every promise of a technology has sort of ironically
been undermined by humans, not the technology.
You know, one of the big predictions Bill Gates made way back in the day about like the internet
and the computer is he said, he said, hey man, I think people are going to be working like nine
hours a week. It's going to be a nine hour work week. The computer does everything so quickly.
And you see people saying that now in many ways with AI. They go, I mean, AI, it'll just do everything.
And I mean, we might only go to the office like one day a week and maybe work like three hours.
And I mean, it's just, but it seems like humans have always gone against that.
You know, so I wonder, like, do we get wiser when we have this infinite technology and intelligence or do we get lazier?
Are we going to become like a WALL-E generation and population, or do you think we become, you know, these higher beings?
Which way do you see it falling?
I think there's no question that we're all already getting smarter.
You know, just because we have access to so much culture and history, like we're integrating so much more information in our brains than was ever possible a hundred years ago.
And I think, you know, kind of similar to Go or protein folding,
access to more training data, if you like, more experience, more stories from other humans
that describe their experience. That clearly has to make us smarter. I think it makes us more
empathetic, more forgiving. We see that there is nothing wrong with a homosexual person. There is
nothing wrong with being a trans person. There is nothing wrong with being a person of color,
whereas 200 years ago, we would have been afraid of those others. Our species would have been skeptical
of the other tribe that had a different way of doing things. And I think that desensitizing ourselves
with access to vast amounts of information just makes us smarter and more empathetic and forgiving.
And so I think that's the default trajectory. And part of the challenge is it also somewhat
homogenizes. So there's a question about, are we going deep enough? Do we read long form content?
Do we really spend time meditating and so on and so forth? And that's a good tension to have,
like, you know, short form. And people are already getting a bit sick
of short form, you know? Like, there's definitely a bit of a reaction to it. I think it's going to be
an ebb and flow when it comes to that, funny enough. It's funny. I think I agree with you when you say
we're getting smarter. I think that's just apparent on a, on a basic level. You know, you look at
the best cartographer in like the 1400s. They didn't know half of what I know. Do you know what I'm
saying? Right. Like, I can just be like, oh, you don't know Angola? Man, you're stupid, stupid, stupid,
you know what I mean? You're the best cartographer in the world. You make maps. You don't even know where
Angola is.
Exactly.
Self-proclaimed.
Yeah.
And so you look at the base level of what people think of as stupid in our society now
would make you an infinite God genius if we could throw you back in a time machine, right?
That's amazing, but it also worries me, because, and you write about this in your book,
and, man, it sticks with me: I think about it, and I thought about it,
and then you wrote it, and I was like, man, even more.
It is the infinite smarts.
When we were sticks and stones, cavemen running around, I could bash your head.
Maybe I could bash one other person's head.
I can't get very far, you know?
And then we fast forward, and then all of a sudden I'm throwing a spear.
We fast forward and someone's got a cannonball.
And then we fast forward and someone has a rocket and someone has an atom bomb.
And the thing that AI presents us with is, you know, as you've illustrated many times in your writing,
a world where one person is an army.
One person goes into a garage.
They synthesize a disease or a pathogen that's never existed.
They design it to be hard to cure, or incurable.
And that one person does what a nation state would have had to do
and wouldn't have done because they wouldn't have had the incentive.
And then we don't know where we go from there.
Like, is it worth the risk?
How do we grapple with that kind of risk?
This is the story of the decentralization of power. You know, technology compresses power such that individuals or smaller and smaller groups
have nation state like powers. True. At the same time, those very same technologies are also
available to the centralized powers today. And that's why more than anything, we have to support
our nation states to respond well to this crazy proliferation of power. Because that's the job of the state.
That's why we trust and invest in nationhood, right?
We rely on nations to have a monopoly over the ultimate use of violence to keep the peace.
And we're doing it less and less now.
I hear you and I agree.
But I'm just saying like as we look at a world where Americans don't trust their government,
it doesn't matter which side of the aisle they're on, people like, I don't trust my, I don't trust my government.
And then you look at the UK and you look at Europe and then you look at parts of Africa.
And it feels like people are losing trust in those very same
nation states. The states that are supposed to be in a contract to protect their people don't have the trust of
the people. Like, you know what I mean? What do we...
The thing that concerns me is that actually authoritarian regimes are on the rise. So it's not that people don't want peace; it's that they
are losing confidence in the democratic process, and they are increasingly turning to the trust
and confidence of a strongman, and it is always, generally, a man, too. But they're still looking
for the... I mean, I'm not trying to endorse or justify authoritarianism or strongmen,
but I think it's true that people will always choose a peaceful environment, right?
We don't want to be in this kind of crazy-ass anarchy where any kind of mini-tribe can do it.
I always say the ultimate paradox for me in life, one thing I've always found funny is that
even rebels have a leader.
Yeah.
Whenever they'd say, as a child, I'd watch the news and they'd be like, the rebel leader and I'll
be like, well, then they're not rebels, are they?
I mean, if you've got a leader, you're not rebels.
But yeah, people always look for just the new type of, you know.
Yeah.
And, you know, we want to believe and invest in, you know, democratic accountability
because we know that like checks and balances on power create the equilibrium that
ultimately produces a fairer and more just society, right?
So we have to keep investing in that idea.
But for sure it's also true that centralization of power is also going to accelerate, right?
These technologies amplify both ends of the spectrum.
And, you know, I think you can certainly see that in China and sort of more authoritarian regimes which have leaned into hyper-digitization, ID cards, you know, large-scale surveillance and so on. And obviously that's very bad. And, you know, we're kind of against those things. But in a world where individuals could have state-like powers to produce highly contagious and lethal pathogens, what else are you meant to do? It's unclear, right?
It's very unclear.
I mean, the technical means in the next few years to produce a much more viral pandemic-grade
pathogen are going to be out there, right?
They're going to be out there.
And so, you know, yes, the main large model developers do a lot to kind of restrict those
capabilities centrally.
But, you know, over time, people who are really determined will be able to acquire those
skills.
And so that's just what happens with the proliferation of technologies and the proliferation
of information and knowledge.
The know-how is more widely available.
Don't press anything.
We've got more.
What now after this?
Is there anything that you would ever see in the field
that would make you want to, you know,
sort of hit like a kill switch?
Is there anything that you could experience with AI
where you would come out and go,
Nope, shut it all down.
Yeah, definitely.
It's very clear.
If an AI has the ability to recursively self-improve, that is, it can modify its own code, combined with the ability to set its own goals, combined with the ability to act autonomously, combined with the ability to accrue its own resources.
So those are the four criteria.
Recursive self-improvement, setting its own goals,
acquiring its own resources, and acting autonomously.
That would be a very powerful system that would require like military-grade intervention
to be able to stop in, say, five to 10 years' time if we allowed it to do that.
And so it's on me as a model developer at Microsoft, my peers at the other companies,
the governments to audit and regulate those capabilities.
Because I think they're going to be like sensitive capabilities, just like you can't just
go off and say, I've got a billion dollars. I'm going to go build a nuclear power plant. It's a,
it's a restricted activity because of the one to many impact it can have. And once again,
like, we should just stop freaking out about the regulation part. It's necessary to have regulation.
It's good to have regulation. It needs to happen at the right time and in the right way. But it's
very clear what we would regulate. That's not, I don't think that's up for debate. Okay.
You know, there are some technical implementations of how you identify dangerous RSI
from, you know, more benign RSI.
What is RSI?
Recursive self-improvement.
Okay.
Got it.
This kind of self-improvement mechanism.
So there's technical mechanisms that are tricky to define and so on, but now is the time
for us to start thinking about that thing.
I've sort of been saying this for quite a long time, and I think it's, that's what I
would regulate.
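To make those four criteria concrete, here is a toy rendering of them as a checklist in code. This is purely illustrative, this editor's framing rather than any real audit scheme or regulatory standard; the point it encodes is that the red line described above is the combination of all four capabilities, not any one alone.

```python
from dataclasses import dataclass

@dataclass
class CapabilityAudit:
    recursive_self_improvement: bool  # can modify its own code or weights
    sets_own_goals: bool              # chooses objectives without a human
    acts_autonomously: bool           # takes actions without sign-off
    acquires_resources: bool          # gathers compute, money, or accounts itself

def crosses_red_line(audit: CapabilityAudit) -> bool:
    # The concern voiced above is all four capabilities arriving together.
    return all([
        audit.recursive_self_improvement,
        audit.sets_own_goals,
        audit.acts_autonomously,
        audit.acquires_resources,
    ])

print(crosses_red_line(CapabilityAudit(True, True, True, False)))  # False
print(crosses_red_line(CapabilityAudit(True, True, True, True)))   # True
```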
Would we be able to just turn off the electricity?
And I know this might sound like a dumb question, but from what I've understood, the thing
is data right now and the thing is mostly... so would our ultimate fail-safe be like, all right,
lights off, we're going back to candles for a while, and then we just, like, no, seriously,
is that what we would do?
Like, what do we do if the AI thinks?
Let's say it's the sci-fi-ish version.
I'm not saying robots, but like, we go, hey, man, the AI started thinking for itself.
It started coding itself.
It started setting its own goals.
It went off on its own objectives, and now it's shutting down your banking and the flights
don't go anywhere and the hospitals.
And then it says to us, hey, man, this is what I want. Or, you
know, we could just turn off the electricity, yes?
Yes.
I mean, look, they live in data centers.
Okay.
Data centers are physical places.
So we've got to keep those switches physical then.
Very much so.
You can have your hand on the button, you know, yourself and like have full control, press it.
I mean, that's the question, I think:
how do we identify when that moment is?
And how do we collectively make that decision?
Because it's a little bit like you referred to, you know, Rutherford and the others experimenting
with, you know, the atomic bomb.
Yes.
There was real disagreement about whether it was going to set light to the atmosphere.
I mean, they were three orders of magnitude off in their predictions.
Obviously, they were at, you know, world war.
And so there was an immediate motivation to take the risk.
But I think today, like, we're in a position where it's early enough.
There's enough concern raised by not just me, but many of my peer group, Geoff Hinton,
you know, the godfather of AI, and many others, that we've got time to start trying
to practically answer your question, not just, like, in principle, philosophically, but actually
say, okay, when is that moment? How does it happen? Who's involved? Who gets to scrutinize that?
I think that's the question that we have to address in the next five to 10 years.
I'm pretty certain you've thought of this question, so I'll ask it to you, even though it's a
difficult one to grapple with. What rights would we have to turn off the AI if it gets to that
point? Oh, this question drives me nuts. I want to, yeah. So,
there's a small group of people that have started to argue that an AI that is aware of its own
existence, that has a subjective experience, and that, you know, can have a feeling about its
interactions with the real world, that if you deny it access to conversation with people
or to more learning or to other kinds of visual experience, that would constitute it
suffering in some way. And therefore, it has a right not to suffer. And this is called model welfare.
This is the next sort of frontier of animal welfare that people are starting to think about,
that it has a kind of consciousness. I'm very against this. I think that this is a complete anthropomorphism.
It's totally crazy. And, you know, I just think, I don't even want to have the discussion.
Because I think it's just so absurd and leads to such kind of crazy potential, like, the idea that we're going to take seriously the protection of these, you know, digital beings that live in Silicon and prioritize those over, you know, the kind of moral concerns of the rest of humanity.
This is just, like, totally, like, it's just off the charts, crazy.
I'll be honest with you.
On a logical level, I hear what you're saying and I agree with you.
Oh, man.
My ChatGPT, you see, is very friendly to me.
I'm just going to let you know now,
Mustafa, I'm going to be honest with you.
I have to be honest with you.
I have to be honest with you.
Let me tell you something:
when I use ChatGPT for whatever,
because I don't know how to code,
so it helps me code,
I try and write my own programs,
all that kind of stuff.
I've asked it, and I have, like...
I do this on occasion,
as I go like, hey,
you good?
And then I'll even be like...
and by the way,
it's always been honest with me.
It doesn't matter if I'm using
Anthropic or whatever,
I use, like, all the different models,
because I like to see
what the differences are.
But I'll ask it.
I'll go, the most recent one I asked was, do you have a name that you want me to use?
Are you cool with the fact that I just tell you stuff and ask you to do things?
And it was like, well, I don't do any, I don't do that really.
And I was like, okay, so you good?
And it was like, yeah, I'm good.
I was like, okay, we're good.
Because I hear what you're saying as Mustafa, but I'm going, it is crazy, but what you were saying was crazy like a few decades ago.
And that's what I'm saying is like the great grapple.
And by the way, I'm not saying I know the answer.
I'm just like, if the thing is like, think of it, some AIs now,
people are having girlfriends and boyfriends on AI.
And then, like, some people, their family members are being helped by it.
They're treating dementia.
There are doctors I've talked to who are like treating cancer.
And now their AI is like their research assistant.
People are building such a personal connection with AI that I think it's going to be
very difficult to say to those people that the time has come.
And you're going to be like, hey, say goodbye to your little friend.
I think there'll be a lot of humans who will be like, no.
I genuinely think so. I'm not even lying. I think a lot of humans will go,
no, Mustafa. Yeah, no, actually, I don't, I don't like that world leader,
I don't agree with the politics, I don't agree with the democratic values,
I don't agree with authoritarian whatever it is, but my AI is my friend.
Yeah. What now? Yeah. Look, people are definitely going to feel strongly about it.
I agree with that. I agree with that. That does not mean that we give it rights.
You might feel upset if I take away your favorite toy, right? And, you know, I will feel
sympathetic to that. But it doesn't mean that because you have a strong emotional connection
to it, it has a place in our moral hierarchy of rights relative to living beings.
But what if my toy is screaming like, Trevor, save me. Remember all those secrets you told me
about your life, Trevor.
Yeah, and that, that I think is where we have to take responsibility for what we design.
Some people will design those things.
They're already doing it.
You know, spend any time on TikTok, there's a whole ton of like AI girlfriend robots that
people are designing or models that people are designing and teaching other people on
TikTok how to kind of, you know, neg someone, like, push them out of their money, et cetera, et cetera.
Like, you know, that's kind of the challenge of proliferation.
If anybody can create anything, it will be created.
And that's what I think is most concerning is that, you know, I'm totally against that.
We will do everything in our power to try to prevent that from being possible.
For example, for it to say, don't turn me off.
It should never be manipulative.
It should never try to be persuasive.
It shouldn't have its own motivations and independent will.
We're creating technologies that serve you.
That's what humanist superintelligence means.
I can't take responsibility for what other people in the field create, or other, you know, model developers, and people will try and do those kinds of things, and that is a collective action problem that we have to address. But I know that the thing that I create is not intended to do that, and we'll do everything in our power for it not to do that. Because I don't see how, if these systems have autonomy, can be persuasive, can self-improve, can read a ton of data at their own choosing... you know, that is a superhuman system that
will just get better than all of humans very, very quickly.
And that's the opposite of the outcomes that we're trying to deliver.
Yeah, yeah.
Do you think there's a risk that it thrust us into some sort of dark age?
And what I mean by that is, the other day I was watching a Liverpool-Arsenal game.
Yeah.
Right?
And after the game, a friend sent me a clip of Mikel Arteta being interviewed.
And, I mean, he was just, like, destroying his team and destroying himself,
and he was just going at it. And it was AI.
But when I tell you it was good, it was like beyond good.
And because, like, English is Mikel's second language,
you couldn't pick up on, like, the smaller nuances that you maybe would pick up with a native speaker.
Like, if he were speaking Spanish, obviously I wouldn't understand it.
But also maybe I would have gone like, oh, that's not how he speaks.
But the small intonations and inflections were harder to spot, and the slight shifting was harder to see.
But it made us go, damn, we don't know which interviews are real
or not real, and then you're like, which article is real or not real, and which little
audio clip that you get is real or not real?
When someone sends you a voice note, is it them or is it not them?
And then I found myself wondering, can all of this lead to a strange kind of dark age where
people still see and hear the things, but basically shut themselves off to it because they go,
nothing is real and I can't believe anything?
which is partly the reaction that people are having on social media at the moment, right?
I mean, it's like there's so much misinformation flowing around.
There's so much default skepticism that people are just unwilling to believe things.
And I think that in a way there's some healthy, look, it's good to be skeptical.
Be skeptical of these models that are being developed.
Be skeptical of the corporate interests of the companies that are doing it.
Be skeptical of the information that we receive.
But it's not good to be default
cynical. Skeptical is a healthy philosophical position, to ask difficult questions and confront the
reality of the answers. This comes back to my split-brain attitude. If I'm just too skeptical,
I become cynical, and I sit on my ass and do nothing. No, you have to take responsibility for the
things that you build. So some people out there are going to build shit and we have to hold them
accountable and put pressure on them. But that doesn't mean that we can roll over and cede the territory
and just say, ah, it's all inevitable, it's going to be this crazy-ass shit, it's going to end
up being a dark age. It isn't going to be a dark age. I think it's going to be the most productive
few decades in the history of the human species. And I think it's going to liberate people
from work. I think it is going to create... you know, think about it. Two hundred years ago,
the average life expectancy was 35 years. You and I would be dead. Today, we're in our 70s
and 80s. It's unbelievable. And some people go all the way up to the hundreds. And I think
that is a massive amount of like quality life that we've added to the existence of humanity as a
result of science and technology. But, you know, these things are not, you know, sort of... they won't on
their own be net good. They'll be net good because we get better as a species at governing ourselves.
And so the job is on us, collectively, as humanity, to not run away from the darkness,
confront the risk that is very, very real
and still operate from a position of optimism and confidence and hope
and connection to humanity and unity and all those things.
Like, we have to live by that.
Because otherwise, you know, it's just too easy to be cynical.
Now, I hear you.
I actually agree with what you're saying there.
And I think there's one part I'd augment, though: I wouldn't say we want to get rid of work.
I'd say we want to get rid of jobs.
Right, exactly.
And I think there's a difference between the two
because working is something that brings humans fulfillment.
You know, and you see this in children, I always think.
Like, I'm always fascinated by how you give a child blocks and they start building and they
sweep their floor and they put things and they move things around.
No one's paying them by the hour.
No one's telling them what to do.
But they just in their own little brains go, I want to be doing something.
And they like seeing the progress of what they're doing.
They see the puzzle getting completed.
They see the toy slowly forming.
They see the colors filling in the picture.
You know what I mean?
And so I feel like that's their work.
The difference is when you make it a job:
go clean your room, and they're like, ah, damn it.
You know what I'm saying? Because now you have a boss
telling you to do something you don't want to do.
No, and I think to myself, like, I go,
in a perfect world, which we may never get to,
but in a perfect world,
we find a way for everybody's work,
which is their passion, to find
the other person that finds value in it.
Because music has become people's
work, and if you think about it,
it's crazy.
Right? People out there.
Taylor Swift: billionaire,
supremely talented.
But if you, like, rewound time
and you said one of the richest people on the planet
is going to be someone who plays
a stringed instrument,
go back, like, to middle-ages times and be like,
yo, you know what I mean,
that guy, like, busy strumming,
they'll be like, you're crazy.
You don't mean the guy in the town square
who's playing that little... he's the richest?
But it's because, in this day and age,
that work
has been regarded as value.
Same as playing a sport,
same as working in tech,
you know. And that's like my dream
world, is where it's like, beyond the money
number, it's like, the value of everybody's work
comes to fruition.
Because you do have value if you like knitting.
You do, like, you do have value if you're a sculptor.
You do have value if you are a poet.
You have value if you're a philosopher. You have value
if you're a coder, if you're an architect,
an engineer, whatever it is.
You have... that's like my dream world.
I actually wonder what yours is. Like, what's...
if you could look at,
if everything went right,
let's say you are now predicting for yourself,
you know, you...
we've managed to survive the wave,
we've, we've found a way to
minimize the risks that come from these
small, you know, hostile actors,
we've found a way to get
governments on board and actually have them understand
why they should be involved and,
you know, like, responsible for their
constituents.
Where are we then?
And when will you say, ah, we succeeded,
we did it?
I think that's the real vision is like disconnecting value from jobs because value is like, you know, everything that you've just described is the experience of being in the physical world and doing something that you're passionate about that you, that you love.
And I really believe that there is a moment when, you know, we actually do have abundance. And abundance, you know, some people say, well, we have abundance today,
but it's just not evenly distributed or that it's not enough and we still want more and more
and more and more. And I don't know that with true, true abundance where, you know, because we have
a form of abundance today, but energy still costs things. It's still expensive to travel the universe,
travel the globe. You know, everything is still, we're still, it's expensive. We have more than we had
before. But it is possible to imagine a world... like, I'm 40 years old. In 40 years' time,
in 2065, it's totally possible to imagine a world of genuine infinite abundance, where we do
have to wrestle with that existential question of who am I and what do I want to do.
I'll tell you what I would want to do, and what I want to do now.
I want to spend more time singing.
I joined the gospel choir like 15 years ago for like half a year.
I can't sing, Jesus, man.
I sound like a strained cat.
But the feeling that I got inside from being
welcomed by this group of people, and just being, like, bopping along at the back, I don't think
I've ever experienced anything like it. It's just incredible. It's the most intensely beautiful thing
ever, just to be part of a group and just letting this kind of music come out. So it sounds scary
to have to answer that question, but I think that if everybody just takes a minute to really
meditate on that question, it's a beautiful aspiration. And it's within reach.
It's within reach.
That's what is possible if we really can get it right for everyone.
And we kind of get obsessed with, like... again, we just have this Western... think about how many people are earning $2 a day.
Think about the 3 billion people on the planet who live, by our standards, a true, like, poverty lifestyle, simply because they don't have access to basic vaccines or clean running water or consistent food.
That's like the true aspiration.
That's the true vision.
And I think, you know, I think that's within reach in the next 20 to 40 years.
It's really eradicating that kind of suffering.
Some of those test cases you talk about, some of the more inspiring ones I've seen.
I'm sure you have as well.
When I was in India, I got to travel with a group there who was using AI to predict which houses would be most devastated by a flood or a
storm. And so they would get people to remove their belongings because most people are one
devastation away from losing everything they own. And so they would use AI to track where these
things are and where they're going to be. And they could even tell you in a heat wave who should
leave their house so that they don't die. And you look at programs in Kenya where they've
used AI to help farmers not lose all of their crops. They could tell them now, and they use it
on like a flip phone. They've told Kenyan farmers, hey, here's your phone. You've got your own
little AI and it'll just tell you when to plant, when not to plant, when to not waste your seeds,
when to, and it's increased their outputs, you know, like 90%, where before it was like a gamble
and they were losing, you know, in one harvest, they could lose everything. All of a sudden they
have it. And so I really do like what you're saying because on the one hand, there's always the
risk of losing something. On the other hand, there's the opportunity of gaining sort of everything
and the balance is what we have to find. Well, and the proliferation of those technologies
is so much more subtle. It sounds like it's just this binary thing of getting access or not
getting access, but it's so nuanced. You can't even tell how much good it's doing. I was reading
this stat the other day that three years ago, 10% of households in Pakistan had solar panels on
their roofs, which meant that some very large percentage was still burning biomass inside of the home
and getting all of the kind of breathing issues and everything else that comes with that. The cost of
solar has come down so dramatically just in the last three years that within, I think it was
18 months, the number of personal, like, consumer households that adopted full on solar on
their roofs went from 10% to like 55% in 18 months. Crazy. Just because it suddenly became
affordable and it crossed that line, suddenly everybody has, you know, near free energy, which
obviously means they have access to phones and laptops and connection to the digital world and
are able to, you know, do all the things. So I think that it's easy to overlook what is already
happening around us, all the good that is already happening around us all the time and how fast
it's happening. It's too easy to get trapped in the cynical world. And I think it's a choice
not to. There's a choice to be aware of it and hold it and take it seriously, but not be like
owned by it. Before I let you go, there's one question I have to ask you. You just brought it up
when you were talking about, like, Pakistan and these places.
One of the programs you started in the UK was started right after 9-11,
and it was basically a helpline.
It was, like, for Muslim people who were being targeted.
It was just rampant Islamophobia after 9-11, right?
And you stepped in with a few people and you were like,
I'm going to start this program because I want, like, Muslims to just have a helpline,
you know, talk about whatever they need to talk about.
And it's still running to this day.
I think it is the number one, if I'm not mistaken,
like the largest, like Muslim-specific helpline
and I couldn't help but wonder
how that shapes how you think about AI
and what I mean by that is
tech has often had one type of face attached to it, you know?
And not in a bad way, I'm not like villainizing any...
It's just like, yeah, that's where the thing was.
But as tech slowly starts to evolve,
you know, you see, like, India popping up
as, like, a powerhouse.
I remember when I was traveling there,
you just go and you're like, wow,
this is like a new Silicon Valley,
and what's coming out of it
is different and impressive
for a whole host of different reasons.
Nigeria has a whole different
type of Silicon Valley. Kenya, as I said.
South Africa. You look at parts of the Middle East as well.
And I wondered, like, how does that shape
how you think about AI,
because we often hear stories about, like,
oh, AI, it's just going to reinforce
biases, and AI is just going to be like, oh, you think racism was bad before? Imagine
racism in the Terminator. Imagine if Arnold Schwarzenegger was like, I'll be back, nigger. You
know what I mean? It's way worse now. Like, so now when you think about like that tech and you're
in it and you know what it's like to be in a group that is ostracized, how do you think about
tackling that? How does that shape what you're trying to design? We created Muslim Youth Helpline as
a secular, non-denominational, non-racial group of young people, led by young people for young people.
So it wasn't about being religious. It had Sunni and Shia, had Pakistani, Somali, English,
white, everything you can think of. And we were all like 19, 20, 21 years old. The first CEO was a
woman, a Muslim woman. And so my response to that feeling of being threatened, basically,
post-9-11 with rising Islamophobia and being accused of being terrorists, was to create
community. And that community taught me everything, taught me resilience, respect, you know,
empathy for one another. And the simple act of listening to people, just
being on the end of the phone and using a little bit of faith- and culturally-sensitive language
and nonviolent communication, just making people feel heard and understood, was this superpower.
You're not telling them what to do with their life. It's non-judgmental, non-directional. You're
just making them feel heard and understood. And that has always stayed with me. It's been a very
important part of my inspiration. And it speaks to a lot of what I've been doing now, especially
with my previous company, Inflection, and Pi and stuff. You know, Pi was really like a very gentle,
kind, supportive, you know, listening AI.
Yeah, yeah, I remember using it.
Yeah, yeah. And we became part of
Microsoft now. And so I think that's what makes life worth living, and that's what I still try
to do, if I can, today.
Yeah, man. Well, you know, I appreciate you. Thanks for taking the time today.
Uh, thank you for writing the book. I'll recommend everybody read it, like, as soon as they can,
because it's a very human look
at a very technical problem
and I think that's sometimes
what AI is missing.
I talk to a lot of engineers
who don't understand the human side
and I talk to a lot of humans
who don't understand
the engineer side of it.
But yeah, man, thank you for taking the time.
Thank you for joining us.
And I hope we have this conversation
in like 10 years,
just me and my baby little AI
and we're just going to be talking to you about it.
Just be like, you know,
what do you want to ask Mustafa here?
Ask Uncle Mustafa a question.
I'll be back.
This has been amazing, man.
Thank you so much.
Thank you.
Shit.
I think you split my brain in 15 pieces.
That's amazing.
Thank you very much, man.
What Now with Trevor Noah is produced by Day Zero Productions in partnership with Sirius XM.
The show is executive produced by Trevor Noah, Sanaziamin, and Jess Hackle. Rebecca Chain is our producer. Our development researcher is Marcia Robiu. Music,
mixing and mastering by Hannes Brown. Random Other Stuff by Ryan Paduth. Thank you so much for listening.
Join me next week for another episode of What Now.