Hard Fork - Google DeepMind C.E.O. Demis Hassabis on Living in an A.I. Future
Episode Date: May 23, 2025

This week, we take a field trip to Google and report back about everything the company announced at its biggest show of the year, Google I/O. Then, we sit down with Google DeepMind's chief executive and co-founder, Demis Hassabis, to discuss what his A.I. lab is building, the future of education, and what life could look like in 2030.

Guest: Demis Hassabis, co-founder and chief executive of Google DeepMind

Additional Reading:
At Google I/O, everything is changing and normal and scary and chill
Google Unveils A.I. Chatbot, Signaling a New Era for Search
Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I.

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Now, there's a very large, what looks like a circus tent over there.
What do you think's going on in there?
That is Shoreline Amphitheater.
Oh, that's the amphitheater?
Yes, you were under that tent yesterday.
I thought that was just some carnival that they were setting up for employees.
Okay, my mistake.
I thought Ringling Brothers had entered into a partnership with the Google Corporation.
No, it's a revival tent.
They're bringing Christianity back.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, our field trip to Google
will tell you all about everything
the company announced at its biggest show of the year.
Then Google DeepMind CEO Demis Hassabis
returns to the show to discuss the road
to AGI, the future of education, and what
life could look like in 2030.
Kevin being very old for starters.
He won't be that old.
Well, Casey, our decor is a little different this week.
I'd say it looks better.
Yes, we are not in our normal studio in San Francisco.
We are down in Mountain View, California,
where we are inside Google's headquarters.
I'm just thrilled to be sitting here surrounded
by so much training data.
That's what they call books here at Google.
So we are here because this week is Google's annual Developer
Conference, Google I.O. There were many, many announcements
from a parade of Google executives
about all the AI stuff that they have coming.
And we are going to talk in a little bit with Demis Hassabis,
who is the CEO of Google DeepMind, essentially
their AI division, who's been driving a lot of these AI
projects forward.
But first, let's just sort of set the scene for people,
because I don't think we have ever been together at an I.O.
before.
So what is it like?
So Google I.O. has a bit of a festival atmosphere. It takes place at the Shoreline Amphitheater,
which is a concert venue. But once a year, it gets transformed into a sort of nerd concert,
where instead of seeing musicians perform, you see Google employees vibe coding on stage.
Yes, there's a vibe coding demo. There were many other
things I did actually see. As I was leaving, the Google a cappella
group, Googapella, was like sort of doing their warmups in
anticipation of doing some concert. So you've got some
like old school Google vibes here, but also a lot of
excitement around all the AI stuff.
So I didn't see Googapella perform. Where was this
performance?
I didn't see them perform either.
I just saw them warming up.
They were sort of doing their scales.
They sounded great.
You know what?
I bet it was a classic acapella situation
where they warmed up and someone came up to them
and they said, please don't perform.
All right, Kevin.
Well, before we get into it, shall we say our disclosures?
Yes, I work for the New York Times,
which is suing OpenAI and Microsoft
over copyright violations related to training of AI systems.
And my boyfriend works at Anthropic, a Google investment.
Oh, that's right.
Yeah.
So let's talk about some of what was announced this week.
There was so, so much.
We can't get to all of it.
But what were the highlights from your perspective?
Well, so look, I wrote a column about this, Kevin.
I felt a little bit like I was in a fever dream
at this conference.
I think often it is the case at a developer conference where they'll sort of
try to break it out into one, two, three big bullet points.
This one felt a little bit like a fire hose of stuff.
And so by the end, I'm looking at my notes saying,
okay, so email's gonna start writing in my voice
and I can turn my PDFs into video TED Talks?
Sure, why not?
So I had a little bit of fever of dream mentality.
What was your feeling?
Yeah, I told someone yesterday that I
thought the name of the event should
have been Everything Everywhere All At Once.
That did actually feel like what they were saying,
which is that every Google product that you use
is going to have more AI.
That AI is going to be better.
And it is all going to make your life better in various ways.
But it was a lot to keep track of.
Yeah, I mean, look, if we were going
to try to pull out one very obvious theme from everything
that we saw, it was AI is coming to all of the things.
And it's probably worth drilling down a little bit
into what some of those things are.
Yeah, so the thing that got my attention,
and then I was sitting right next to you,
the one time when I really noticed you perking up
was when they started talking about this new AI mode
in Google Search, their core search product.
So talk about AI mode and what they announced yesterday.
So, Kevin, this gets a little confusing
because there are now three different kinds
of major Google searches, I would say.
There is the normal Google search,
which is now augmented in many cases by what they call AI overviews, which is
sort of AI answer at the top.
Yeah, that's the little thing that will tell you, like, what the
meaning of phrases like "you can't lick a badger twice" is,
right? That's right. And if you don't know the meaning of that,
Google it. So that's sort of thing one. Thing two is the
Gemini app, which is kind of like a one-for-one ChatGPT
competitor, that's in its own, you know, standalone app, standalone website.
And then the big thing that they announced this week
was AI mode, which has been in testing for a little while.
And I think this sort of lands in between
the first two things, right?
It is a tab now within Search, and this is rolling out
to everybody in the United States and a few other countries.
And you sort of tap over there, and now you can have the sort of longer, you know, multi-step
questions that you might have with a Gemini or a ChatGPT, but you can do it right from
the Google search interface.
Yeah.
And I've been playing with this feature for a few weeks now.
It was in their labs section, so you could try it out if you were enrolled in that.
And it's really nice.
Like, it's a very clean thing.
There's no ads yet.
They will probably appear soon.
It does this thing called the fan out,
which is very funny to me.
You ask it a question, and it dispatches
a bunch of different Google searches
to crawl a bunch of different web pages
and bring you back the answer.
And it actually tells you how many searches it is doing
and how many different websites it's looking at.
So I asked it, for example, how much does a Costco membership
cost?
It searched 72 websites for the answer to that question.
So AI mode is very, very eager to answer your question,
even if it does verge on overkill sometimes.
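To make the "fan out" pattern Kevin describes a bit more concrete, here is a minimal, hedged sketch: the system breaks one question into several sub-queries, runs them in parallel, and synthesizes a single answer from the results. The functions search_web and summarize are hypothetical stand-ins for illustration only, not Google's actual AI mode internals.

```python
# Hedged sketch of a query "fan out": break a question into sub-queries,
# run them in parallel, then synthesize one answer from the results.
# search_web() and summarize() are hypothetical stand-ins, not Google APIs.
import asyncio

async def search_web(query: str) -> str:
    await asyncio.sleep(0.1)          # pretend network latency
    return f"results for: {query}"    # placeholder page snippets

async def summarize(question: str, snippets: list[str]) -> str:
    # A real system would call an LLM here to write a grounded answer.
    return f"{question} -> synthesized from {len(snippets)} sources"

async def answer(question: str) -> str:
    sub_queries = [                   # a real system would derive these with an LLM
        f"{question} official site",
        f"{question} price 2025",
        f"{question} reviews",
    ]
    snippets = await asyncio.gather(*(search_web(q) for q in sub_queries))
    return await summarize(question, list(snippets))

if __name__ == "__main__":
    print(asyncio.run(answer("How much does a Costco membership cost?")))
```

The key design idea is that the sub-queries are independent, so they can be dispatched concurrently and only the final synthesis step needs to see all of the retrieved material.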
Yeah, well, so you and I had a chance
to meet with Robby Stein, who is one of the people who
is leading AI mode.
And I was surprised by how enthusiastic about it
you were.
Like, you said that you've really actually
found this quite useful in a way that I think I have not so far.
So what are you noticing about this?
I mean, the main thing is it's just such a clean experience.
Like, on a regular Google search results page,
you and I have talked about this.
Like, it has just gotten very cluttered.
There's a lot of stuff there.
There's ads.
There's carousels of images. There's sometimes a shopping module, there's sometimes a
maps module. Like it's just, it's hard to actually like find the blue links sometimes. And I imagine
that AI mode will become more cluttered as they try to make more money off of it. But right now,
if you go to it, it's like a much simpler experience. It's much easier to find what you're
looking for. Yeah. And at the same time, they're also trying to do some really interestingly complex stuff.
One of the things that they showed off during the keynote was somebody asked a question
about baseball statistics that required finding three or four different kind of tricky to
locate stats and then combining them all together in an interactive chart. That was just a demo.
We don't have access to that yet, but that is one of those things where it's like, well,
if that works, that could be a meaningful improvement
to Search.
Yeah, it could be a meaningful improvement to Search.
And we should also say, it's a big unknown
how all of this will affect the main Google Search product.
For now, it's a tab.
They have not sort of merged it into the main core Google
Search, in part because it's not monetized yet.
And it costs a lot more to serve those results
than a traditional Google search.
But I imagine over time, these things
will kind of merge, which will have lots of implications
for publishers, people who make things on the internet,
the whole sort of economic model of the internet.
But before we get dragged down that rabbit hole,
let's just talk about a few other things
that they said on stage at Google I.O.
So I was really struck by the usage numbers
that they trotted out for their products.
Gemini, according to them, the app now
has 400 million monthly users.
That is a lot.
That is not quite as many as ChatGPT,
but it is a lot more than products like Cloud
and other AI chatbots.
They said that the number of tokens being output by Gemini
has increased 50 times since last year
and is just like way up.
So people are using this stuff, in other words.
This is not just like some feature
that Google is shoving into these products
that people are trying to sort of navigate around.
Like people are really using Gemini.
I think that that's right.
And I think it's the Gemini number in particular
is the one that struck me like 400 million
is a lot of people.
And I don't see that many obvious ways
that Google could be like faking that stat.
You know, in contrast to, for example,
they said one and a half billion people
see AI overviews every month.
It's like, well, yeah, you just put them
in Google search results.
Like that's an entirely passive phenomenon.
But like Gemini, you got to go to the website,
you got to download the app.
So that tells me that people actually
are finding real utility there.
So that's Gemini, but they also released
a bunch of other stuff like new image and video models.
Do you want to talk about those?
Yeah, so like the other companies,
they're working on text to image, text to video.
And while OpenAI's models have gotten
most of the attention in this regard, Google's really are quite good.
I think the marquee feature for this year's I.O. is that the
video-generating model Veo 3 can also generate sound. So it
showed us a demo, for example, of an owl flapping its wings,
you hear the wings flap, it comes down to the ground,
there's this sort of nervous badger character.
And they exchanged some dialogue,
which was basically incomprehensible, just pure
slop.
But they were able to generate that from scratch.
And I guess that's something.
Yeah.
They also announced a new Ultra subscription
to Google's AI products.
Now, if you want to be on the bleeding edge of Google's AI
offerings, you can pay $250 a month for Gemini Ultra.
And Casey, I thought to myself, no one is going to do this.
Who is going to pay $250 a month?
That's a fortune for access to Google's leading AI products.
And then I look over to my right,
and there's Casey Newton in the middle of the keynote,
pulling out his credit card from his wallet
and entering it into buy a subscription
to this extremely expensive AI product.
So you might've been the first customer of this product, why?
Well, and I hope that they don't forget that
when it comes time to feed me into the large language model.
Look, I want to be able to have the latest models.
And one, I think clever thing that these AI companies are
doing is they're saying, we will give you the latest
and greatest before everyone else,
but you have to pay us a ridiculous amount of money.
And if you're a reporter and you're reporting
about this stuff every day, I do think
you sort of want to be in that camp.
Now, is it true that I now spend more
on monthly AI subscriptions than I
paid for my apartment in Phoenix in the year 2010?
Yes, and I don't feel great about it,
but I'm trying to be a good journalist, Kevin.
Please, your family is dying.
Another thing that made me perk up
was they talked a lot about personalization, right?
This is something we've been talking about for years.
Basically, Google has billions of people's email,
their search histories, their calendars,
all their personal information,
and we've been
sort of waiting for them to start weaving that stuff in so that you can use Gemini to do things
in those products. That has been slow, but they are sort of taking baby steps. And they did show
off a few things, including this new personalized smart replies feature that is going to be
available for subscribers later this year in Gmail so that instead of just getting
the kind of formulaic suggested replies
at the bottom of an email, it'll actually
kind of learn from how you write.
And maybe it can access some things in your calendar
or your documents and really suggest a better reply.
You'll still have to hit Send, but it'll
pre-populate a message for you.
Yeah, I have to say, I'm somewhat bearish on this one, Kevin,
only because I think that if this were easy,
like it would just sort of be here already, right?
Like when you think about how formulaic so much email is,
it doesn't seem to me like it should be that hard
to figure out like what kind of email are you?
Like I'm basically a two sentence emailer, you know,
that doesn't seem like that, that's hard to mimic.
So that's just kind of an area where I've been a little bit
surprised and disappointed.
We also know large language models
do not have large memories.
So one thing that I would love for Gmail to do, but it cannot,
is just sort of understand all of my email
and use that to inform the tone of my voice.
But it can't do that.
It can only take a much more limited subset.
Is that going to make it sort of difficult to accurately mimic
my tone?
I don't know.
So what I'm trying to say here is,
I think there's a lot of problems here,
and my expectations are pretty low on this one.
Yeah, that was the part where I was like,
I will believe that this exists and is good when I can use this.
But as with other companies like Apple,
which demoed a bunch of AI features at its developer
conference last year and then never launched half of them,
I have become a little bit skeptical until I can actually
use the thing myself.
Yeah, it really is amazing how looking back last year's
WWDC was just like a movie about what a competent AI company
might have done in an alternate future.
It had very little bearing on our reality,
but it was admittedly an interesting set of proposals.
So that is the software AI portion of I.O.
There was also a demo of a new hardware product that Google is
working on, which are these Android XR glasses, basically their version of what
Meta has been showing off with its Orion glasses, where you have a pair of glasses.
They have like sort of chunky black frames, they've got like sort of a
hologram lens in them, and you can actually like see a little thing
overlaid on your vision telling you what the weather is or what time it is
or that you have a new message.
Or they have this integration with Google Maps
that they showed off where you can like,
it'll like show you the little miniature Google map
right there inside your glasses
and it will sort of turn as you turn
and tell you where to go.
They did say this is a prototype,
but what did you make of this?
Well, I think a lot of it looked really cool.
Like probably my favorite part of the demo was
when the person who was demonstrating
looked down at her feet
cause she was getting ready to walk to a coffee shop
and the Google map was actually projected at her feet.
And so she'd know, okay, go to the left, go to the right.
If you've ever been walking around a sort of foreign city
and desperately wanted this feature,
I think you would see that and be pretty excited.
What did you think?
Yeah, I thought to myself, Google Glass is back.
It was away for so long in the wilderness
and now it's back and it might actually work this time.
Absolutely.
I did get to try the glasses.
There was a very long line for the demo, but I-
And let me guess, you said,
I'm Kevin Roose, let me to the front of the line.
No, they made me wait for two hours.
I mean, I didn't literally wait for two hours.
I went and did some stuff and then came back.
But I got my demo, it was like five minutes long
and it was pretty basic, but it is cool.
Like you can now look around and you can say,
hey, what's this plant?
And it'll sort of, Gemini will kind of like
look at what you're seeing and tell you what the plant is.
Totally, I did a demo a few months back
and also like really enjoyed it.
So I think there's something here.
And I think more importantly, Kevin, consumers now,
when they look at Google and Meta,
they finally have a choice.
Whose advertising monopoly do I want
to feed with my personal data?
And you have consumer choice now.
And I think that's beautiful.
And that's what capitalism is all about.
Exactly.
So OK, those are some of the announcements.
But what did you make of the sort of overall tenor
of the event?
What stuck out to you as far as the vibe?
So the thing that stuck out to me the most
was just contrasting it with last year's event
because last year they had this phrase
that they kept repeating,
let Google do the Googling for you,
which to me put me in the mind of somebody sort of
leaning back into your like floating chair
from the WALL-E movie and just sort of letting the AI like run roughshod over your life.
This year, Google talked about AI very differently.
This time, they want you to sit up, they want you to lean in,
they want you to pay them $250, and they want you to get to work, you know?
AI is your superpower, it's your bionic arm,
and you're going to use it to get sort of further and farther than ever before.
But even while presenting that vision, Kevin,
they were also very much like, but it's gonna be normal.
It's gonna be chill.
It's gonna be kind of like your life is now.
You're still gonna be in the backyard
with your kids doing science experiments.
You're still gonna be planning a girls weekend
in Nashville, right?
There was not really a lot of science fiction here.
There was just a little bit of like,
oh, we put a little bit of AI in this.
So that was interesting to me.
Yeah, so I had a slightly different take,
which is that I think Google is being AGI-pilled.
For years now, Google has sort of distanced itself
from the conversation about AGI.
It had DeepMind, which was sort of its AGI division,
but they were over in London
and they were sort of a separate thing.
And people at Google would sort of not laugh exactly,
but kind of chuckle when you asked them about AGI.
It just didn't seem real to them,
or it was so remote that it wasn't worth considering.
They would say, what does this have
to do with search advertising?
Exactly.
So now, it's still the case that this
is a company that wants you to think about it as a product
company, a search company.
They're not like going all in on AGI.
But once you start looking for it,
you do see that the sort of culture of AI
and how people at Google talk about AI
has really been shifting.
It is starting to seep into conversation here in a way
that I think is unusual and maybe indicative
that the technology is just getting better faster
than even a lot of people at Google were thinking it would.
So I don't totally agree with you, Kevin,
because while I'm sure that they're having more conversations about
AGI here than they were a year ago,
when you look at what they're building,
it doesn't seem like there's been a lot of rip it up and start again.
It seems a lot like, how do we plug AI systems into Google-shaped holes? And maybe that will eventually ladder up to something like AGI,
but I don't think we've seen it quite yet.
The other observation I would make is that I think the Google of 2025 has a lot more
swagger and confidence when it comes to AI than the Google of 2024 or 2023.
I mean, two years ago, Google was still trying to make Bard a thing.
And I think they were feeling very insecure
that OpenAI had beaten them to a consumer chat bot that
had found some mass adoption.
And so they were just playing catch up.
And I don't think anyone would have said that Google was
in the lead when it came to generative AI just a few years
ago.
But now, they feel like there is a race
and that they are in a good position to win it.
They were talking about how Gemini stacks up well
against all these other models.
It's at the top of this leaderboard, LM Arena,
for all these different categories.
I don't love the way that AI is sometimes covered
as if it were like sports.
Who's up, who's down, who's winning, who's losing.
But I do feel like Google has the confidence now
when it comes to AI of a team that like knows
it's gonna be in the playoffs at least.
And that was evident.
Oh yeah, I mean, well, when you look at the competition,
just what's happened over the past year,
you have Apple doing a bunch of essentially fictional demos
at WWDC and you have Meta cheating to win at LM Arena, making 27 different versions of a model just to come
up with one that would be good at one thing, right. So I think
if you're Google, you're looking at that and you're thinking I
could be those guys, right.
So that is what it felt like inside Google I.O. What was the
reaction from outside? I noticed that, for example, the company's
stock actually fell. Not by a lot, but like, you
know, to a degree that suggested that Wall Street was kind of meh on a lot of what was announced.
But what was the reaction like outside of Google?
I think the external reaction that I saw was just struggling a little bit to connect the
dots, right?
Like that is the issue with announcing so many things during a two hour period is sometimes
people don't have that one thing that they're taking away saying, I can't wait to try that.
And when you're just looking at a bunch of Google products
that you're already using, I think if you're an investor,
it's probably hard to understand, well,
I don't understand why this is unlocking
so much more value at Google.
Now, maybe millions of people are gonna spend $250 a month
on Gemini Ultra, but unless that happens,
I can understand why some people feel like, hmm,
this feels a little like the status quo.
Yeah, I see that.
I also think there are many unanswered questions
about how all of this will be monetized.
And Google has built one of the most profitable products
in the history of capitalism in the Google search engine
and the advertising business that supports it.
It is not clear to me that whatever AI mode becomes or whatever AI features it can jam into
search, if search as a category is just declining across the
board, if people are not going to Google.com to look things up
in the way they were a few years ago, I think it's an open
question, like what the next thing is and whether Google can
seize on it as effectively as they did with search.
Well, I think that they gave us one vision of what
that might be, and that is shopping.
A significant portion of the keynote
was devoted to one executive talking about a new shopping
experience inside of Google where
you can take a picture of yourself, upload it,
and then sort of virtually try things on.
And it will sort of use AI to understand your proportions and, you know, accurately map a garment onto you. And
there was a lot of stuff in there that would just sort of let Google take a cut,
right? Obviously you can advertise the individual thing to buy, maybe you're
taking some sort of like cut of the payment, there's an affiliate
fee that is in there somewhere. So one of the things I'm trying to do as I cover
Google going forward
is understand that yes, search is the core,
but Gemini could be a springboard
to build a lot of other really valuable businesses.
Yeah, an important question I know
that I always ask you when I go to these things,
how was the food?
Let's see, I think the food was really nice.
So here's the thing,
last year it was a purely savory experience at breakfast
and I am shamefully an American
who likes a little sweet treat when I woke up.
This year, they had both bagels
and an apple cinnamon coffee cake.
And so when I was heading into that keynote,
I was in a pretty good mood.
I had some of that,
they have like little bottles of cold brew
and I'm like a huge caffeine addict.
So I took two of them.
And boy, I was on rocket fuel all day.
I was just hopping around.
I was bouncing off the walls.
I was doing parkour.
I was feeling great.
I thought I saw you warming up with the Acapella team.
Now it all makes sense.
Yeah.
When we come back, we'll talk with Demis Hassabis,
CEO of Google DeepMind,
about his vision of the AI future. Well Casey, I guess we behaved ourselves at I.O. because Google has made Demis Hassabis,
the CEO of Google DeepMind, available for us to interview today.
We talked to him last February, but of course a lot has happened since then, starting with
his Nobel Prize, but continuing on through a slew of announcements that he just made
on stage.
What kind of Nobel Prize would you want to win?
Probably just for being handsome.
Yeah, the peace prize for me.
Yeah.
Let's bring him in.
Demis Hassabis, welcome back to Hard Fork.
Thanks for having me again.
A lot has happened since the last time you were on the show.
Most notably, you won a Nobel Prize.
Congrats on that.
Thank you.
Ours must be still in the mail.
Can you put in a good word for next year with the committee?
I will do.
I will do.
I imagine it's very exciting to win a Nobel Prize.
I know that had been a goal for a long time of yours.
I imagine it also leads to a lot of people giving you crap like during everyday activities
Like if you're, you know, struggling to work the printer and people are just like, oh, Mr. Nobel
Laureate, like, does that happen?
A little bit. I mean, look, I tried to say, look, I can't, you know, maybe it's a good excuse to like not have to fix
those kinds of things, right? It's more of a shield.
So you just had Google I.O. and it was really the Gemini show. I mean, I think
Gemini's name was mentioned something like 95 times in the keynote. Of all the
stuff that was announced, what do you think will be the biggest deal for the
average user?
Wow, well, I mean, we did announce a lot of things. I think for the average user, I think it's the new powerful models and I hope
this Astra-type technology coming into Gemini Live.
I think it's really magical actually when people use it for
the first time and they realize that actually AI is
capable already today of doing much more than what they thought.
Then I guess Veo 3 was probably the biggest announcement of the show, and it seems to be going viral now.
And that's pretty exciting as well, I think.
Yeah.
One thing that struck me about I.O. this year compared to previous years is that it seems
like Google is sort of getting AGI-pilled, as they say.
I remember interviewing people, researchers at Google, even a couple years ago.
And there was a little taboo about talking about AGI.
They would sort of be like, oh, that's
like Demis and his DeepMind people in London.
That's sort of like their crazy thing
that they're excited about.
But here, we're doing like real research.
But now you've got senior Google executives
talking openly about it.
What explains that shift?
I think the sort of AI part of the equation becoming more and more central,
like I sometimes describe Google DeepMind now as the engine room of Google.
And I think you saw that probably in the keynote yesterday,
really, if you take a step back.
And then it's very clear,
I think you could sort of say AGI-pilled is maybe the right word,
that we're quite close to this human level general intelligence,
maybe closer than people thought even a couple of years ago, and it's going to have broad cross-cutting impact. And I think there's another thing that you saw at the keynote,
it's sort of literally popping up everywhere because it's this horizontal layer that's going
to underpin everything. And I think everyone is starting to understand that and maybe a bit of
the deep mind ethos is bleeding into the general Google, which is great.
You mentioned that Project Astra is powering some things that maybe people don't even realize
that AI can yet do. I think this speaks to a real challenge in the AI business right now, which is
that the models have these pretty amazing capabilities, but either the products aren't
selling them or the users just sort of haven't figured them out yet. So how are you thinking about that challenge
and how much do you bring yourself to the product question
as opposed to the research question?
Yeah, it's a great question.
I mean, I think one of the challenges
I think of this space is obviously the underlying tech
is moving unbelievably fast.
And I think that's quite different
even from the other big revolutionary techs, internet
and mobile.
At some point, you get some sort of stabilization of the tech stack so that then the focus can
be on product or exploiting that tech stack.
And what we've got here, which I think is very unusual but also quite exciting from
a researcher perspective, is that the tech stack itself is evolving incredibly fast,
as you guys know.
So I think that makes it uniquely challenging, actually,
on the product side, not just for us at Google and DeepMind,
but for startups, for anyone, really,
any company, small and large, is what do you bet on right now
when that could be 100% better in a year, as we've seen.
And so you've got this interesting thing
where you need kind of fairly deeply technical
sort of product people, product designers and managers,
I think, in order to sort of intercept
where the technology may be in a year.
So there's things it can't do today,
and you wanna design a product
that's gonna come out in a year.
So you've got a pretty deep understanding of the tech
and where it might go to sort of work out
what features you can rely on.
And so it's an interesting one.
I think that's what you're seeing,
so many different things being tried out.
And then if something works,
we've got to really double down quickly on that.
Yeah, during your keynote,
you talked about Gemini as powering both
sort of productivity assistant style stuff
and also fundamental science and research challenges.
And I wonder in your mind, is that the same problem
that sort of like one great model can solve
or are those sort of very different problems
that just require different approaches?
I think, when you look at it,
it looks like an incredible breadth of things,
which is true, and how are these things related other than the fact that I'm interested in all of them?
But that was always the idea with building general intelligence truly generally and in
this way that we're doing, it should be applicable to almost anything.
That being productivity, which is very exciting, helping billions of people in their everyday
lives, to cracking
some of the biggest problems in science.
90%, I would say of it, is the underlying core general models, in our case, Gemini,
especially 2.5.
In most of these areas, you still need additional applied research or a little bit of special
casing from the domain, maybe it's special data or whatever, to tackle that problem.
And maybe we work with domain experts in the scientific areas.
But underlying it, when you crack one of those areas,
you can also put those learnings back into the general model.
And then the general model gets better and better.
So it's a kind of very interesting flywheel.
And it's great fun for someone like me,
who's very interested in many things.
You get to use this technology and sort of go
into almost any field that you find interesting.
I think a question that a lot of AI companies
are wrestling with right now is how many resources
to devote to sort of the core AI push on the foundation models,
making the models better at the basic level,
versus how much time and energy and money
do you
spend trying to spin out parts of that and commercialize it and turn it into products.
And I imagine this is both like a resources challenge, but also like a personnel challenge
because say you join DeepMind as an engineer and you want to like build AGI and then someone
from Google comes to you and says like, we actually want your help like building the
shopping thing that's going to let people try on clothes.
Is that a challenging conversation
to have with people who joined for one reason
and maybe asked to work on something else?
Yeah, well, it's sort of self-selecting internally.
That's one advantage of ours being quite large.
There are enough engineers on the product teams
and the product areas that can deal with the product
development, prod-eng, and the researchers, if they want to stay in core research, that's fine and we need that.
But actually, you'll find a lot of researchers are quite motivated by real-world impact,
be that in medicine, obviously, and things like isomorphic, but also to have billions
of people use their research.
It's actually really motivating.
There's plenty of people that like
to do both. So yeah, there's no need for us to sort of have to pivot people to certain
things.
You did a panel yesterday with Sergey Brin, Google's co-founder, who has been working
on this stuff back in the office. And interestingly, he has shorter AGI timelines than you.
He thought AGI would arrive before 2030
and you said just after.
He actually accused you of sandbagging,
basically like artificially pushing out your estimates
so that you could like under promise and over deliver.
But I'm curious about that because you will often hear people
at different AI companies arguing about when the timelines are,
but presumably you and Sergey have access to all the same information and the same roadmaps,
and you understand what's possible and what's not.
So what is he seeing that you're not or vice versa that leads you to different conclusions about when AGI is going to arrive?
Well, first of all, there wasn't that much difference in our timelines if he's just before 2030 and I'm just after.
Also, my timeline's been pretty consistent
since the start of DeepMind in 2010.
So we thought it was roughly a 20-year mission,
and amazingly, we're on track.
So it's somewhere around then, I would think.
And I feel like,
I actually have obviously a probability distribution,
and most of the mass of that is
between five and 10 years from now.
And I think partly it's to do with predicting anything precisely five to 10 years out is
very difficult.
So there's uncertainty bars around that.
And then also there's uncertainty about how many more breakthroughs are required, right?
And also about the definition of AGI.
I have quite a high bar, which I've always had, which is it should be able to do all
of the things that the human brain can do, right, even theoretically.
And so that's a higher bar than say what the typical individual human could do, which is
obviously very economically important.
That would be a big milestone, but not in my view enough to call it AGI.
And we talked on stage a little bit about what is missing from today's systems, sort
of true out of the box invention and thinking, sort of inventing a conjecture rather than
just solving a maths conjecture. Solving one is pretty good, but actually inventing the
Riemann hypothesis or something as significant as that that mathematicians agree is really
important is much harder. And also consistency. So the consistency is a requirement of generality really and
it should be very very difficult for even top experts to find flaws, especially
trivial flaws in the systems which we can easily find today and you know the
average person can do that. So there's a sort of capabilities gap and there's a
consistency gap before we get to what I would consider AGI. And when you think about closing that gap,
do you think it arrives via incremental 2, 5% improvements
in each successive model just kind of stacked up over a long period of time?
Or do you think it's more likely that we'll hit some sort of technological breakthrough
and then all of a sudden there's liftoff and we hit some sort of intelligence explosion.
I think it could be both and I think for sure both is going to be useful, which is why we
push unbelievably hard on the scaling and the, you know, what you would call incremental,
although actually there's a lot of innovation even in that to keep moving that forward in
pre-training, post-training, inference time compute, all of that stack.
So there's actually lots of exciting research
and we showed some of that,
that diffusion model, the Deep Think model.
So we're innovating at all parts of that,
the traditional stack, should we call it.
And then on top of that, we're doing more greenfield things,
more blue sky things like Alpha Evolve,
maybe you could include in that, which-
Is there a difference between a green field thing and a blue sky thing?
I'm not sure.
Maybe they're pretty similar.
So some new area let's call it.
And then that could come back into the main branch.
And we've, I mean as you both know, I've been a fundamental believer in sort of foundational
research.
We've always had the broadest, deepest research bench,
I think, of any lab out there. And that's what allowed us to do past big breakthroughs,
obviously transformers, but AlphaGo, AlphaZero, all of these things, distillation. And if,
to the extent any of those things are needed again, another big breakthrough of that level,
I would back us to do that. And we're pursuing lots of very exciting avenues that
could bring that step change as well as the incremental.
Then they of course also interact because the better you have
your base models, the more things you can try on top of it.
Again, like Alpha Evolve,
adding in evolutionary programming in that case on top of the LLMs.
We recently talked to Karen Hao, who's a journalist who
just wrote a book about AI.
And she was making an argument essentially against scale,
that you don't need these big general models that
are incredibly energy intensive and compute intensive
and require billions of dollars and new data centers
and all kinds of resources to make happen, that instead of doing
that kind of thing, you could build smaller models.
You could build narrower models.
You could have a model like AlphaFold
that is just designed to predict the 3D structures of proteins.
You don't need a huge behemoth of a model to accomplish that.
What's your response to that?
Well, I think you need those big models.
We love big and small models. So you need the big models often to train the smaller
models.
We're very proud of our kind of Flash models, which we call our workhorse
models, really efficient, some of the most popular models.
We use a ton of those types of size models internally, but you can't build those kinds
of models without distilling from the larger teacher models.
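For readers unfamiliar with the distillation Hassabis mentions, here is a minimal, hedged sketch of the generic technique: a small student model is trained to match a large teacher model's softened output distribution in addition to the usual hard-label loss. This is a standard knowledge-distillation loss written in PyTorch, not Google's actual training setup, and the hyperparameters are illustrative assumptions.

```python
# Generic knowledge-distillation sketch (not Google's actual pipeline):
# a small "student" model learns to match a large "teacher" model's
# softened output distribution, plus the usual cross-entropy loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Soft targets: the teacher's distribution at a higher temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between student and teacher distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Usage sketch: inside a training loop you would compute
#   loss = distillation_loss(student(x), teacher(x).detach(), y)
# and backpropagate through the student only.
```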
And even things like AlphaFold, obviously I'm a huge advocate of more of those types of
models that can tackle things right now. We don't have to wait for AGI. We can tackle now really important
problems in science and medicine today. And that will require taking the general techniques,
but then potentially specializing it,
in that case around protein structure prediction.
And I think there's huge potential
for doing more of those things.
And we are largely in our science work,
AI for science work.
And I think we're producing something pretty cool on that
pretty much every month these days.
And I think there should be a lot more exploration on that.
Probably a lot of startups could be built combining some kind of general model that exists today
with some domain specificity. But if you're interested in AGI, you've got to
push the, again, both sides of that. It's not an either-or in my mind. I'm an
and, right? Like, let's scale. Let's look at specialized techniques combining
that in hybrid systems, as they're sometimes called,
and let's look at new blue sky research
that could deliver the next transformers.
We're betting on all of those things.
You mentioned Alpha Evolve, something that Kevin and I
were both really fascinated by.
Tell us what Alpha Evolve is.
Well, at a high level, it's basically taking
our latest Gemini models, actually two different ones to generate sort of ideas, hypotheses about programs and other mathematical functions.
And then they go into sort of evolutionary programming process to decide which ones of
those are most promising.
And then that gets sort of ported into the next step.
And tell us a little bit about what evolutionary programming is.
It sounds very exciting. Yeah.
So it's basically a way for systems to kind of explore new space, right?
So like, you know, what things should we, you know, in genetics like mutate to give
you a kind of new organism.
So you can think about the same way in programming or mathematics, you know, you change the program
in some way, and then you compare
it to some answer you're trying to get, and then the ones that fit best according to a
sort of evaluation function, you put back into the next set of generating new ideas.
And we have our most efficient model, sort of flash model generating possibilities, and
then we have the pro model critiquing that, right, and deciding which one of those is
most promising
to be selected for the next round of evolution.
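To make the loop Hassabis is describing a bit more concrete, here is a minimal sketch of an evolutionary search driven by two models, one proposing candidate programs and one (together with an evaluation function) selecting the most promising ones for the next generation. All of the function names (generate_candidates, critique, evaluate) are hypothetical placeholders, not the actual Alpha Evolve or Gemini API.

```python
# Minimal sketch of an Alpha Evolve style generate-critique-select loop.
# All names below are hypothetical placeholders, not real Google APIs.
import random

def evaluate(program: str) -> float:
    """Stand-in fitness function: score a candidate on a provably
    checkable task (e.g. correctness plus speed)."""
    return random.random()  # placeholder score

def generate_candidates(parents: list[str], n: int) -> list[str]:
    """Stand-in for a fast 'flash'-style model proposing mutated
    variants of the best programs seen so far."""
    return [p + f"  # variant {i}" for i, p in enumerate(parents * n)][:n]

def critique(candidates: list[str], k: int) -> list[str]:
    """Stand-in for a stronger 'pro'-style model plus the evaluator,
    keeping the k most promising candidates for the next round."""
    return sorted(candidates, key=evaluate, reverse=True)[:k]

def evolve(seed_program: str, generations: int = 5,
           pop: int = 8, keep: int = 2) -> str:
    population = [seed_program]
    for _ in range(generations):
        candidates = generate_candidates(population, pop)  # propose new ideas
        population = critique(candidates, keep)            # keep the fittest
    return population[0]

if __name__ == "__main__":
    print(evolve("def matmul(a, b): ..."))
```

The point of the sketch is the division of labor: cheap generation of many variants, a more expensive critic and evaluation step, and only the survivors seeding the next round, which matches the generate, evaluate, and re-seed cycle described above.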
So it's sort of like an autonomous AI research
organization almost, where you have some AIs coming up
with hypotheses, other AIs testing them
and supervising them.
And the goal, as I understand it,
is to have an AI that can kind of improve itself over time
or suggest improvements to existing problems?
Yes.
So it's the beginning of, and that's
why people are so excited about it, and we are excited about it.
It's the beginning of a kind of automated process.
It's still not fully automated.
And also, it's still relatively narrow.
We've applied it to many things like chip design,
scheduling, AI tasks on our data centers more efficiently,
even improving matrix multiplication,
one of the most fundamental units
of training algorithms.
So it's actually amazingly useful already,
but it's still constrained to domains that are kind of
provably correct, right?
Which obviously, maths and coding are.
But we need to sort of fully generalize that.
But it's interesting because I think for a lot of people,
the knock they have on LLMs in general is,
well, all you can really give me is the
statistical median of your training data.
But what you're saying is, we now have a way of going beyond that to potentially generate
novel ideas that are actually useful in advancing the state of the art.
That's right.
But we already had these types.
This is another approach, Alpha Evolve, using evolutionary methods.
But we already had evidence of that even way back in AlphaGo days.
So, AlphaGo came up with new Go strategies, most famously Move 37 in game two of our big
Lee Sedol world championship match. And okay, it was limited to a game, but it was a genuinely
new strategy that had never been seen before, even though we've played Go for hundreds of years.
So that's when I kicked off our sort of AlphaFold projects and science projects because
I was waiting to see evidence of that kind of spark of creativity, you could call it,
or originality, at least within the domain of what we know.
But there's still a lot further that has to...
We know that these kinds of models paired with things like Monte Carlo tree search or
reinforcement learning planning techniques
can get you to new regions of space to explore.
And evolutionary methods is another way
of going beyond what the current model knows
to explore, to force it into a new regime
where it's not seen it before.
I've been looking for a good Monte Carlo tree
for so long now.
So if you could help me find one,
it would honestly be a huge help.
One of these things could probably help.
Yeah, okay, great.
So I read the Alpha Evolve paper,
or to be more precise, I fed it into Notebook LM
and had it make a podcast that I could then listen to
that would explain it to me
at a slightly more elementary level.
And one fascinating thing that stuck out to me
is a detail about how you were able
to make Alpha Evolve more creative.
And one of the ways that you did it
was by essentially forcing the model to hallucinate.
I mean, so many people right now are obsessed
with eliminating hallucinations,
but it seemed to me like one way to read that paper
is that there is actually a scenario
in which you want models to hallucinate
or be creative, whatever you want to call it.
Yes, well, I think that's right.
I think, you know, hallucination when you want factual things, obviously,
you don't want.
But in creative situations where you can think of it
as a little bit like lateral thinking in an MBA course
or something, right, is just create some crazy ideas.
Most of them don't make sense.
But the odd one or two may get you
to a region of the search space that is actually quite valuable,
it turns out, once you evaluate it afterwards. And so you can substitute the word hallucination
maybe for imagination at that point, right? They're obviously two sides of the same coin.
Yeah. I did talk to one AI safety person who was a little bit worried about Alpha Evolve,
not because of the actual technology
and the experiments, which this person said,
they're fascinating, but because of the way it was rolled out.
So Google DeepMind created Alpha Evolve
and then used it to optimize some systems inside Google
and kept it hidden for a number of months
and only then released it to the public.
And this person was saying, well, if we really are getting to the point where
these AI systems are starting to become
recursively self-improving and they can build a better AI,
doesn't that imply that if Google DeepMind does build
AGI or even super intelligence,
that it's going to keep it to itself for a while,
rather than doing the responsible thing and informing the public.
Well, I think it's a bit of both, actually.
You need to, first of all, Alpha Evolve is a very nascent self-improvement thing, right?
And it's still got human in the loop and it's only shaving off, albeit important percentage
points off of already existing tasks, that's valuable, but it's not creating any kind of
step changes.
There's a trade-off between carefully evaluating things internally before you release it to
the public out into the world, and then also getting the extra critique back, which is
also very useful from the academic community and so on.
Also, we have a lot of trusted-tester type of programs that we talk about where people
get early access to these things and then give us feedback and
stress test them, including sometimes the safety institutes as well.
But my understanding was you weren't just like red teaming this internally within Google. You
were actually like using it to make the data centers more efficient, using it to make the
kernels that train the AI models more efficient. So I guess what this person is saying is like,
it's just we want to start getting good habits
around these things now before they become something
like AGI and they were just a little worried
that maybe this is gonna be something that stays hidden
for longer than it needs to.
So I don't like, I would love to hear your response to that.
Yeah, well, look, I mean, I think that that system
is not anything really that I would say, you know,
has any risk on the AGI type of front.
I think as we get, and I think today's systems still are not, although very impressive, are
not that powerful from any kind of AGI risk standpoint that maybe this person was talking
about.
And I think you need to have both.
You need to have incredibly rigorous internal tests of these things.
And then we need to also get collaborative inputs from external.
So I think it's a bit of both.
I actually don't know the details of the Alpha Evolve process
for the last few, you know, the first few months.
It was just FunSearch before, and then it became more general.
So it's sort of evolved, it's evolved itself over the last year
in terms of becoming this general purpose tool.
And it still has a lot of way to go
before we can actually use it in our main branch,
which is at that point, I think then becomes more serious,
like with Gemini, it's sort of separate from that currently.
Let's talk about AI safety a little bit more broadly.
It's been my observation that it seemed like
if the further back in time you go
and the less powerful AI systems you have,
the more everyone seemed to talk about the safety risk.
And it seems like now as the models improve,
we hear about it less and less,
including at the keynote yesterday.
So I'm curious what you make of this moment in AI safety,
if you feel like you're paying enough attention
to the risk that could be created
by the systems that you have.
And if you are as committed to it,
as you were say three or four years ago,
when a lot of these outcomes seem less likely.
Yeah, we're just as committed as we've ever been.
I mean, we've from the beginning of DeepMind,
we plan for success.
So success meant something looking like this
is what we kind of imagined.
I mean, it's sort of unbelievable still
that it's actually happened,
but it is sort of in the Overton window
of what we thought was going to happen
if these technologies
really did develop the way we thought they were going to.
And the risk and attending to mitigating those risks was part of that.
And so we do a huge amount of work on our systems.
I think we have very robust red teaming processes, both pre and post launches.
And we've learned a lot.
And I think that's what's the difference now between having
these systems, albeit early systems, contact with the real world. I think that's actually
been, I'm sort of persuaded now that that has been a useful thing overall. I wasn't
sure. I think five years ago, 10 years ago, I may have thought maybe it's better staying
in a research lab and collaborating with academia and that. But actually, there's a lot of things
you don't get to see or understand unless millions of people try it. So it's this weird
trade off again between you can only do it when there's millions of smart people try
your technology and then you find all these edge cases. So however big your testing team
is, it's only going to be 100 people or 1,000 people or something.
So it's not comparable to tens of millions of people using your systems.
But on the other hand, you want to know as much as possible ahead of time so you can
mitigate the risks before they happen.
So this is interesting and it's good learning.
I think what's happened in the industry in the last two, three years has been great because
we've been learning when the systems are not that powerful or risky, as you were saying earlier.
I think things are going to get very serious in two, three years' time when these agent
systems start becoming really capable.
We're only seeing the beginnings of the agent era, let's call it, but you can imagine, and
I think hopefully you understood from the keynote, what the ingredients are, what it's
going to come together with.
And then I think we really need a step change in research on analysis and understanding, controllability. But the other key
thing is it's got to be international. That's pretty difficult. And I've been very consistent
on that because it's a technology that's going to affect everyone in the world. It's been built
by different countries and different companies in different countries. So you've got to get some
international kind of norm, I think, around what we want to use
these systems for and what are the kinds of benchmarks that we want to test safety and
reliability on.
But there's plenty of work to get on with now.
We don't have those benchmarks.
We and the industry and academia should be agreeing to consensus on what those are.
What role do you want to see export controls play in doing what you just said?
Well, export controls is a very complicated issue. And obviously geopolitics today is extremely
complicated. And there, you know, I can see both sides of the arguments on that, you know,
there's proliferation, uncontrolled proliferation of these technologies. Do you want different
places to have frontier model training capability?
I'm not sure that's a good idea, but on the other hand, you want Western technology to
be the thing that's adopted around the world.
So it's a complicated trade-off.
Like if there was an easy answer, I think we'd all, you know, I would be shouting from
the rooftops, but I think there's, it's nuanced like most real world problems are.
Do you think we're heading into a bipolar conflict with China over AI if we aren't in one
already? Just recently we saw the Trump administration making a big push to make
the Middle Eastern countries in the Gulf, like Saudi Arabia and the UAE, into AI
powerhouses, have them, you know, use American chips to train models that will
not be sort of accessible to China and its AI powers.
Do you see that becoming sort of the foundations
of a new global conflict?
Well, I hope not.
But I think short term, I feel like AI is getting caught up
in the bigger geopolitical shifts that are going on.
So I think it's just part of that.
And it happens to be one of the most topical new things that's appearing.
But on the other hand, what I'm hoping is as people, as these technologies get more
and more powerful, the world will realize we're all in this together.
Because we are.
And so, you know, and the last few steps towards AGI, hopefully we're on the longer timelines,
actually, right?
More like the timelines I'm thinking about,
then we get time to sort of get the collaboration we need,
at least on a scientific level, before then.
Would be good.
Do you feel like you're in sort of the final home stretch
to AGI?
I mean, Sergey Brin, Google's co-founder,
had a memo that was reported on by my colleague at the New York
Times earlier this year that went out to Google employees
and said, you know, we're in the sort of the home stretch
and everyone needs to get back to the office
and be working all the time
because this is when it really matters.
Do you have that sense of finality
or sort of entering a new phase or an end game?
I think we are past the middle game, that's for sure.
But I've been working every hour there is
for the last 20 years because I felt how important and momentous this technology would be, and we thought it
was possible for 20 years. And I think it's coming into view now, I agree with that. And
whether it's five years or 10 years or two years, they're all actually quite short timelines
when you're discussing what the enormity of the transformation of this technology
is going to bring.
But none of those timelines are very long. We're going to switch to some more general questions about the AI future.
A lot of people now are starting to, at least in conversations that I'm involved in, think
about what the world might look like after AGI.
The context in which I actually hear the most about this is from parents who want to know
what their kids should be doing, studying.
Will they go to college?
You have kids that are older than my kid.
How are you thinking about that?
So when it comes to kids, and I get asked this quite a lot, also by university students: I think, first of all, I wouldn't dramatically change some of the basic advice on STEM. Getting good at things like coding I would still recommend,
because I think whatever happens with these AI tools, you'll be better off understanding
how they work and how they function and what you can do with them. I would also say immerse
yourself now. That's what I would be doing as a teenager today in trying to become a sort of ninja
using the latest tools.
I think you can almost be sort of superhuman in some ways
if you get really good at using
all the latest coolest AI tools.
But don't neglect the basics too,
because you need the fundamentals.
And then I think, teach sort of meta-skills, really, like learning to learn.
And the only thing we know for sure is there's going to be a lot of change over the next 10 years.
So how does one get ready for that? What kind of skills are useful for that?
Creativity skills, adaptability, resilience.
I think all of these sorts of meta-skills are what will be important for the next generation.
And I think it'd be very interesting to see what they do
because they're gonna grow up AI native,
just like the last generation grew up mobile and iPad
and sort of that kind of tablet native
and then previously internet and computers,
which was my era.
And I think the kids of each era always seem to adapt to make use of the latest, coolest tools.
And I think there's more we can do on the AI side to make the tools better. If people are going to use them for school and education, let's make them really good for that, and sort of provably good.
And I'm very excited about bringing it to education in a big way and also to, you know,
if you had an AI tutor, to bring it to poor parts of the world
that don't have good educational systems.
So I think there's a lot of upside there too.
Another thing that kids are doing with AI
is chatting a lot with digital companions.
Google DeepMind doesn't make any of these companions yet.
Some of what I've seen so far seems pretty worrying.
It seems pretty easy to create a chat bot
that just does nothing but tell you how wonderful you are. And that can sort of like lead into some dark and weird places.
So I'm curious what observations you've had as you like look at this market for AI companions
and whether you think I might want to build this someday or I'm going to leave that to
other people.
Yeah, I think we've got to be very careful as we start entering that domain. And that's why we haven't yet, and we're being very thoughtful about that.
My view on this is more through the lens
of the Universal Assistant that we talked about yesterday,
which is something that's incredibly useful
for your everyday productivity.
Gets rid of the boring, mundane tasks
that we all hate doing to give you more time to do the things
that you love doing.
I also really
hope that they're going to enrich your lives by giving you incredible recommendations, for example, on all sorts of amazing things that you didn't realize you would enjoy, you know, sort of delight
you with surprising things. So I think these are the ways I'm hoping that these systems will go.
And actually, on the positive side, I feel like if this assistant becomes really useful
and knows you well, you could sort of program it yourself, obviously, with natural language, to protect your attention.
So you could almost think of it as a system that works for you, as an individual, it's
yours, and it protects your attention from being assaulted by other algorithms that want your attention,
which is actually nothing to do with AI.
Most social media sites,
that's what they're doing effectively.
Their algorithms are trying to gain your attention.
And I think that's actually the worst thing.
And it'd be great to protect that
so we can be more in creative flow
or whatever it is that you want to do.
That's how I would want these systems to be useful to people.
If you could build a system like that,
I think people would be so incredibly happy.
I think right now people feel assailed by the algorithms in their life,
and they don't know what to do about it.
Well, the reason is because you've got one brain, and with, let's say, whatever it is, a social media stream, you have to dip into that torrent to then get the piece of information you want. But you're doing it with the same brain. So you've already affected your mind and your mood and other things by dipping into that torrent to find the valuable piece of information that you wanted. But if a digital assistant did that for you, you would only get the useful nugget, and you wouldn't need to break your mood, or whatever it is you're doing that day, or your concentration with your family, whatever it is. I think that would be wonderful.
Yeah, Casey loves that idea. You love that idea.
I love this idea of an AI agent that protects your attention from all the forces trying to assault it. I'm not sure how the ads team at Google is going to feel about this. But we can ask them when the time comes.
Some people are starting to look at the job market,
especially for recent college graduates,
and worry that we're already starting
to see signs of AI-powered job loss.
Anecdotally, I talked to young people
who a couple years ago might have been interested in going
into fields like tech or consulting or finance or law,
who are just saying, I don't know
that these jobs are going to be around much longer.
A recent article in The Atlantic wondered
if we're starting to see AI competing with college graduates
for these entry-level positions.
Do you have a view on that?
I haven't looked at that. I don't know. I haven't seen the studies on that, but maybe it's starting to appear now.
I don't think there's any hard numbers on that yet. At least I haven't seen it. I
think for now, I mostly see these as tools that are augmenting what you can do and what
you can achieve. I think the next era, maybe after AGI, things will be different again.
But over the next five to 10 years, I think we're going to find what normally happens
with big new technology shifts, which
is that some jobs get disrupted, but then new, more valuable,
usually more interesting jobs get created.
So I do think that's what's going to happen in the nearer
term.
So today's graduates and the next five years,
let's say, I think it's very difficult to predict after that.
That's part of this sort of more societal change
that we need to get ready for.
I mean, I think the tension there is that you're right, these tools do give people so much more leverage, but they also reduce the need for big teams of people doing certain things. I was talking to someone recently who said, you know, they had been at a data science company in their previous job that had 75 people working on some kind of data science tasks, and now they're at a startup that has one person doing the work that used to require 75 people. And so I guess the question I'd be curious to get your view on is, what are the other 74 people supposed to do?
Well, look,
I think these tools are going to unlock the ability to create things much more quickly. So, you know, I think there'll be more people that will do startup things. I mean, there's a lot more surface area one could attack and try with these tools than was possible before. So let's take programming, for example.
You know, so obviously these systems are getting better at coding,
but the best coders, I think, are getting differential value out of it
because they still understand how to pose the question and architect the whole code base and check
what the coding does.
But simultaneously, at the hobbyist end, it's allowing designers and maybe non-technical
people to vibe code some things, whether that's prototyping games or websites or movie ideas.
So in theory, it should be that those other 70 people or whatever could be creating new startup ideas. Maybe it's going to be fewer of these bigger teams, and more smaller teams that are very empowered by AI tools.
But that goes back to the education thing then,
which skills are now important.
It might be different skills: creativity, sort of vision, and design sensibility could become increasingly important.
Do you think you'll hire as many engineers next year as you hire this year?
I think so.
Yeah, I mean, there's no plan to hire fewer, but, you know, again, we have to see how fast the coding agents improve.
Today they can't, you know, do things on their own. They're just helpful for the best human coders.
Last time we talked to you, we asked you about some of the more pessimistic views about AI
in the public. And one of the things you said to us was that the field needed to demonstrate
concrete use cases that were just clearly beneficial to people to kind of shift things.
My observation is that I think there are even more people now
who are like actively antagonistic toward AI.
And I think maybe one reason is they hear folks
at the big labs saying pretty loudly,
eventually this is going to replace your job.
And most people just think, well, I don't want that.
So I'm curious, like looking on from that past conversation,
if you feel like we have seen some use cases, enough use
cases, to start to shift public opinion?
Or if not, what some of those things
might be that actually change views here?
Well, I think we're working on those things.
They take time to develop.
I think a kind of universal assistant
would be one of those things if it was really yours and working for you effectively, so technology that works for you.
I think this is what economists and other experts should be working on: does everyone manage a suite or fleet of agents that are doing things for them, including potentially earning them money or building them things? Does that become part of the normal job process? I could imagine that in the next four or five years. I also
think that as we get closer to AGI and we make breakthroughs, and we probably talked
about last time, material sciences, energy, fusion, these sorts of things helped by AI,
we should start getting to a position in society where we're getting towards what I would call
radical abundance, where there's a lot of resources to go around. And then again, it's more of a political question of
how would you distribute that in a fair way, right? So I've heard this term, universal high income, and something like that, I think, is probably going to be good and necessary. But obviously there's a lot of complications that need to be thought through. And then in between, between now and whenever we have that sort of situation, what do we do about the change in the interim? And it depends on how long that is, too.
What part of the economy do you think
AGI will transform last?
Well, I mean, I think the parts of the economy that, you know, involve human-to-human interaction and emotion, those things, I think, will probably be the hardest for AI to do.
So, you know, there's a lot of...
Are people already doing AI therapy and talking with chat bots for things that they might
have paid someone, you know, $100 an hour for a couple years ago?
Well, therapy is a very narrow domain, and I'm not sure exactly. There's a lot of hype about
those things. I'm not actually sure how many of those things are really going on in terms of
actually affecting the real economy rather than just more toy things. And I don't think the AI
systems are capable of doing that properly yet. But just the emotional connection that we get from
talking to each other and
doing things in nature in the real world, I don't think that AI can really replicate
all of those things.
So if you lead hikes, it would be a good job.
Yeah, yeah.
Or climb Everest.
My intuition on this is that it's going to be some heavily regulated industry where there will just be a massive pushback on the use of AI to displace labor or take people's jobs, like healthcare or education or something like that. But you think it's going to be an easier lift in those heavily regulated industries?
Well, I don't know. I mean, it might be, but then we have to weigh that up as a society, whether we want all the positives of that, for example, curing all diseases or finding new energy sources. So I think these things would be clearly very beneficial for society, and I think we need them for our other big challenges. It's not like there's no challenges in society other than AI, but I think AI can
be a solution to a lot of those other challenges, be that energy resource constraints, aging,
disease, you name it, and water access, et
cetera.
There's a ton of problems facing us today.
Climate, I think AI can potentially help with all of those.
I agree with you.
Society will need to decide what it wants to use these technologies for.
But then, what's also changing, as we discussed earlier with products, is that the technology is going to continue advancing, and that will open up new possibilities like a kind of radical abundance, space travel, these things, which are a little bit out of scope today unless you read a lot of sci-fi, but I think rapidly becoming real.
During the Industrial Revolution, there were lots of people who embraced new technologies, moved from farms to cities to work in the new factories, and were early adopters on that curve.
But that was also when the transcendentalists
started retreating into nature and rejecting technology.
That's when Thoreau went to Walden Pond
and there was a big movement of Americans
who just saw the new technology and said,
I don't think so, not for me.
Do you think there'll be a similar movement
around rejection of AI?
And if so, how big do you think it'll be?
I don't know how big it'll be. I mean, there could be a get-back-to-nature movement. And I think a lot of people will want to do that. And I think this potentially will give them the room and space to do it, right? If you're in a world of radical abundance, I fully expect that's what a lot of us will want to do. You know, I think, again, I'm thinking about it as sort of spacefaring and, you know, kind of maximum human flourishing. But I think that will be exactly one of the things that a lot of us will choose to do, and we'll have the time and the space and the resources to do it.
Are there parts of your life where you say I'm not going to use AI for that, even though
it might be pretty good at it, for some sort of reason, wanting to protect your creativity or your thought process or something else?
I don't think AI is good enough yet to impinge on any of those sorts of areas. You know, mostly I'm using it for things like you did with NotebookLM, which I feel is great for breaking the ice on a new topic, a scientific topic, and then deciding if I want to go deeper into it.
That's one of my main use cases, summarization, those things.
I think those are all just helpful.
But we'll see.
I haven't got any examples of what you suggested yet,
but maybe as AI gets more powerful, there will be.
Yeah.
When we talked to Dario Amodei of Anthropic recently,
he talked about this feeling of excitement mixed with a kind of melancholy about the progress
that AI was making in domains
where he had spent a lot of time trying to be very good,
like coding.
Where it was like, you see a new coding system
that comes out, it's better than you.
You think that's amazing.
And then your second thought is like,
ooh, that stings a little bit.
Have you had any experiences like that?
Of course.
So maybe one reason it doesn't sting me so much
is I've had that experience when I was
very young with chess.
So chess was going to be my first career and I was playing pretty professionally when I
was a kid for the England Junior teams and then Deep Blue came along.
And clearly the computers were going to be much more powerful than the world champion
forever after that.
But yeah, I still enjoy playing chess. People still do. It's different, you know, but it's a bit like Usain Bolt: we celebrate him for running the hundred meters incredibly fast, and we've got cars, but we don't care about that, right? We're interested in other humans doing it. And I think that'll be the same with robotic football and all of these other things. And that maybe goes back to what we discussed earlier, about how, I think, in the end, we're interested in other human beings.
That's why, even with a novel, maybe AI could one day write a novel that's sort of technically good, but I don't think it would have the same soul or connection to the reader if you knew it was written by an AI, at least as far as I can see for now.
You mentioned robotic football.
Is that a real thing? We're not sports fans, so I just wanna make sure I haven't missed something.
I was meaning soccer.
Yeah, no, no, no, I don't know.
I think there are RoboCup sort of soccer type little robots
trying to kick balls and things.
I'm not sure how serious it is,
but there is a field of robotic football.
You mentioned that, you know, sometimes a novel written by a robot might not feel like it has a soul. I have to say, for as incredible as the technology is in Veo or Imagen, I sort of feel that way with it.
Where it's like, it's beautiful to look at,
but I don't know what to do with it.
You know what I mean?
Exactly.
And that's, you know, that's why we work with great artists like Darren Aronofsky and Shankar on the music.
I totally agree.
I think these are tools and they can come up with technically good things.
I mean, Veo 3 is unbelievable.
I don't know if you've seen some of the things that are going viral being posted at the moment
with the voices.
Actually, I didn't realize how big a difference audio is going to make to the video.
I think it just really brings it to life.
But it's still not there. As Darren was saying yesterday when we were discussing it in an interview, he brings the storytelling. It hasn't got the deep storytelling that a master filmmaker or a master novelist at the top of their game would bring. And it might never do, right? It's just always going to feel like something's missing: the sort of soul, for want of a better word, of the piece, the real humanity, the magic, if you like, of the great pieces of art. When I see a Van Gogh or a Rothko, you know, why does that touch you? The hairs go up on the back of my spine because I know about what they went through and the struggle to produce that, right? Every one of Van Gogh's brushstrokes carries his sort of torture. And I'm not sure what that would mean, even if the AI mimicked that and you were told that. It would be like, so what? Right? And so I think that is the piece that, at least as far as I can see, out to five, ten years, the top human creators will always be bringing. And that's why we've done all of our tools, Veo, Lyria, in collaboration with top creative artists.
The new pope, Pope Leo,
is reportedly interested in AGI. I don't know if he's AGI-pilled or not, but that's something that he's spoken about before.
Do you think we will have a religious revival
or a renaissance of interest in faith and spirituality
in a world where AGI is forcing us to think about
what gives our lives meaning?
I think that potentially could be the case.
And I actually did speak to the last pope about that.
And the Vatican's been interested, even prior to this pope (I haven't spoken to him yet), in these matters: how do AI, and technology in general, and religion interact?
And what's interesting about the Catholic Church, and I'm a member of the Pontifical Academy of Sciences, is they've always had, which is strange for a religious body, a scientific arm, you know, which they always like to say Galileo was the founder of, and all those interesting...
It didn't go great for them.
But then it's actually really separate. And I always thought that was quite interesting, and people like Stephen Hawking and avowed atheists were part of the academy. And that's partly why I agreed to join it, because it's a fully scientific body. And it's very interesting. And I was fascinated. They've been interested in this for 10-plus years. So they were on this early, in terms of, from a philosophical point of view, how important this technology will be. And I actually think we need more of that type of thinking, and work from philosophers and theologians would actually be really, really good. So I hope the new pope is genuinely interested.
We'll close on a question that I recently heard Tyler Cowan ask Jack Clark from Anthropic
that I thought was so good and decided to just steal it whole cloth. In the ongoing
AI revolution, what is the worst age to be?
Oh, wow. Well, I don't, I mean, you know, gosh, I haven't thought about that. But I mean, I think any age where you can
live to see it is a good age, because I think we are going to
make some great strides with things like, you know,
medicine.
And so I think it's going to be an incredible journey.
None of us know exactly how it's going to transpire.
It's very difficult to say, but it's going to be very interesting to find out.
Try to be young if you can.
Yes, if you can.
Young is always better.
Yeah.
I mean, in general, young is always better.
All right.
Demis Hassabis, thanks so much for coming.
Thank you, Demis.
Thank you very much.
We're edited this week by Matt Collette. We're fact-checked by Ena Alvarado. Today's show was engineered by Alyssa Moxley. Original music by Pat McCusker, Rowan Niemisto, and Dan Powell. Our executive producer is Jen Poyant. Video production by Chris Schott, Sawyer Roque, Pat Gunther, and Anne Roth. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.
What would you tell the Pope about AI?