Big Technology Podcast - Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet
Episode Date: January 21, 2026
Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward.
Transcript
Google DeepMind CEO, Demis Hassabis, joins us to talk about the path from here to AGI.
When Google's AI glasses are coming, and whether the pace of AI progress can keep up at this rate. That's coming up right after this.
Welcome to a special edition of Big Technology Podcast from Davos. I'm Alex Kantrowitz, and I'm joined today by a special guest: Demis Hassabis, the CEO of Google DeepMind. Demis, welcome back to the show.
Great to be here.
A year ago, there were real questions about whether AI progress was tailing off.
It was in fashion to ask whether LLMs were going to hit a wall.
And those questions seem like they've been settled.
There's been a tremendous amount of progress over the past year.
Can you tell us what specifically has happened that's gotten the AI industry from that moment of question last year to the point that it is today?
Well, for us internally, we were never questioning that.
Just to be clear, I think we've always been seeing great improvements.
So we were a bit puzzled by why there was this question in the air.
I mean, some of it was to do with people worrying about data running out.
And there is some truth in that, in that all the data had been used.
Can we create synthetic data that's going to be useful to learn from?
But actually, it turns out you can wring more juice out of the existing architectures and data.
So there's plenty of room, I think, and we're still seeing that in the pre-training,
the post-training, and the thinking paradigms,
and also in the way that they all kind of fit together.
So I think there's still plenty of headroom there just with the techniques we already know about
and tweaking and kind of innovating on top of that.
All right.
Here's what a skeptic would say.
Yeah.
That there have been a lot of tricks that have been put on top of LLMs.
I hear often about scaffolding and orchestration and AI that can use a tool to search the web,
but it won't remember what it learns.
As soon as you close that session, it forgets.
Is that just a limitation of the large language model paradigm?
Well, look, I'm definitely a subscriber to the idea that maybe we need one or two more big breakthroughs before we'll get to AGI.
And I think they are along the lines of things like continual learning, better memory, and longer context windows, or perhaps more efficient context windows would be the right way to say it.
So don't store everything, just store the important things.
That would be a lot more efficient. That's what the brain does.
And better long-term reasoning and planning.
Now, it remains to be seen whether just sort of scaling up existing ideas and technologies will be enough to do that, or we need one or two more really big, insightful innovations.
I'm probably, if you were to push me, I would say I would be in the latter camp.
But I think no matter what camp you're in, we're going to need large foundation models as the key component of the final AGI systems.
Of that, I'm sure.
So I'm not a subscriber to the view of someone like Yann LeCun, who thinks, you know, that there's
sort of some kind of dead end. I think the only debate in my mind is: are they a key component
or the only component? So I think it's between those two options. And for me, this is one advantage
we have of having such a deep and rich research bench. We can go after both of those things
with maximum force, both, you know, scaling up the current paradigms and ideas. And
when I say scaling up, that also involves innovation, by the way; pre-training especially, I think,
we're very strong on. And then really new blue-sky ideas for new architectures and things,
you know, the kinds of things we've invented over the last 10 years as Google and DeepMind,
you know, of course, including transformers.
Can something with a lot of hard-coded stuff ever be considered AGI?
No, I think, well, it depends what you mean by a lot. I'm very interested in hybrid
systems, as I would call them, or neuro-symbolic, as some people call them.
You know, AlphaFold and AlphaGo are examples of that.
So some of our most important work combines neural networks and deep learning with things like
Monte Carlo tree search.
So I think that could be possible.
And there's some very interesting work we're doing using the LLMs with things like
evolutionary methods, AlphaEvolve, to actually go and discover new knowledge.
You may need something beyond what the existing methods do.
But I think learning is a critical part of AGI.
It's actually almost the defining feature.
When we say general, we mean general learning.
Can it learn new knowledge and can it learn across any domain?
That's the general part.
So for me, learning is synonymous with intelligence and always has been.
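To make the hybrid, neuro-symbolic idea Hassabis describes concrete, here is a minimal sketch of a Monte Carlo tree search guided by a learned policy and value function, AlphaGo-style. The toy game (a tiny Nim variant) and the policy_value stand-in are illustrative inventions, not anything from DeepMind's actual systems; a real system would replace policy_value with a trained neural network.

import math
import random

def legal_moves(pile):
    # In this toy game you may take 1 or 2 stones; taking the last stone wins.
    return [m for m in (1, 2) if m <= pile]

def policy_value(pile):
    # Stand-in for a learned network: returns (move priors, value estimate from
    # the perspective of the player to move). Here: uniform priors and a crude
    # random-rollout value. A real system would learn both.
    moves = legal_moves(pile)
    priors = {m: 1.0 / len(moves) for m in moves}
    p, plies = pile, 0
    while p > 0:
        p -= random.choice(legal_moves(p))
        plies += 1
    value = 1.0 if plies % 2 == 1 else -1.0  # odd plies: the mover took the last stone
    return priors, value

class Node:
    def __init__(self, pile, prior):
        self.pile, self.prior = pile, prior
        self.visits, self.value_sum = 0, 0.0
        self.children = {}  # move -> child Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def search(root_pile, simulations=200, c_puct=1.5):
    root = Node(root_pile, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        # SELECT: descend with the PUCT rule (value plus prior-weighted exploration).
        while node.children:
            total = sum(ch.visits for ch in node.children.values())
            def puct(item):
                _, ch = item
                u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
                return -ch.q() + u  # a child's value is from the opponent's view
            _, node = max(node.children.items(), key=puct)
            path.append(node)
        # EXPAND and EVALUATE: ask the "network" for priors and a value.
        if node.pile > 0:
            priors, value = policy_value(node.pile)
            for m, p in priors.items():
                node.children[m] = Node(node.pile - m, p)
        else:
            value = -1.0  # no stones left: the player to move here has lost
        # BACKUP: push the value up the path, flipping sign each ply.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value
    # Play the most-visited root move, as AlphaGo-style agents do.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    print("From a pile of 7 stones, the search takes:", search(7))

The point of the sketch is the division of labour: the neural component supplies priors and value estimates, and the symbolic search component (the tree) does the explicit look-ahead and planning on top of them.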
Okay.
So if learning is synonymous with intelligence,
and these models still don't have the ability to continually learn,
like I said earlier, it has a goldfish brain.
It can search the internet and it can be like, I figured this out.
But it doesn't change the model.
It just forgets it after the session.
Do you have a theory as to how the continual learning problem can be solved?
And do you want to share it with us all?
I can give you some clues.
We are working very hard on it.
We've done some work on this; I think the best work on this in the past was with things like AlphaZero, you know, that learned from scratch.
Versions of AlphaGo, AlphaGo Zero, also learned on top of
the knowledge they already had. So we've done it in much narrower domains. You know, games are
obviously a lot easier than the messy real world. So it remains to be seen if those kinds
of techniques will really scale and generalize to the real world and actual real-world
problems. But at least the methods we know can do some pretty impressive things. And so now
the question is, can we blend that, at least in my mind, with these big foundation models?
And so, of course, the foundation models are learning during training, but we
would love them to learn, you know, out in the world. And that includes things like personalization;
I think that's going to happen. And I feel like that's a critical part of building a great
assistant is that it understands you and it works for you as technology that works for you. And we've
released our first versions of that just last week. Personal Intelligence is the sort of first
baby steps towards that. But I think to have it, you want to do more than just having your
data in the context window. You want to have something a bit deeper than that, which, as you say,
actually changes the model over time.
That's what ideally you would have.
And that technique has not been cracked yet.
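One classic recipe for the continual-learning problem described here is to keep updating the model online while rehearsing a small buffer of past examples, so that new learning does not simply overwrite old knowledge (experience replay). The sketch below is a toy illustration of that idea with a tiny logistic-regression "model"; the data, buffer sizes, and function names are made up for illustration and are not how Gemini or any DeepMind system handles personalization.

import random
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w = np.zeros(dim)              # weights of a toy logistic-regression "model"
replay_buffer = []             # small memory of past (x, y) examples
BUFFER_SIZE, REPLAY_SAMPLES, LR = 200, 4, 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def sgd_step(x, y):
    # One gradient step of logistic regression on a single example.
    global w
    w = w - LR * (predict(x) - y) * x

def learn_from_interaction(x, y):
    # Continual update: learn the new example, then rehearse a few old ones
    # so earlier knowledge isn't simply overwritten.
    sgd_step(x, y)
    for old_x, old_y in random.sample(replay_buffer, min(REPLAY_SAMPLES, len(replay_buffer))):
        sgd_step(old_x, old_y)
    # Keep the memory bounded by evicting a random old example when full.
    if len(replay_buffer) >= BUFFER_SIZE:
        replay_buffer.pop(random.randrange(len(replay_buffer)))
    replay_buffer.append((x, y))

# Simulate a stream of interactions: "task A" examples first, then "task B",
# each labelling points by a different hidden linear rule.
w_a, w_b = rng.normal(size=dim), rng.normal(size=dim)
task_a = [(x, float(x @ w_a > 0)) for x in rng.normal(size=(200, dim))]
task_b = [(x, float(x @ w_b > 0)) for x in rng.normal(size=(200, dim))]
for x, y in task_a + task_b:
    learn_from_interaction(x, y)

# After the whole stream, check whether knowledge of the earlier task survived.
acc_a = np.mean([float(predict(x) > 0.5) == y for x, y in task_a])
print(f"accuracy on the earlier task after continual updates: {acc_a:.2f}")

Rehearsal is only one of several known approaches to catastrophic forgetting (regularization methods and modular architectures are others), and, as Hassabis says, none of them has cracked the problem at foundation-model scale.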
We've brought up AGI a couple times.
So let me put this to you because I was speaking with Sam Altman towards the end of the year.
And I asked him, I was like, you know, you seem to be saying two things.
We're not at AGI yet.
But every time he talks about what GPT models can do, it seems like it fits his definition.
And he said that AGI is underdefined, and what he wishes everybody
could agree to was that we've sort of whooshed by AGI and we move towards superintelligence.
Do you agree with that?
I'm sure he does wish that.
But it's, no, absolutely not.
I don't think AGI should be sort of turned into a marketing term for commercial gain.
I think there has always been a scientific definition of that.
My definition of that is a system that can exhibit all the cognitive capabilities humans can.
And I mean all.
So that means, you know, the kind of high
levels of human creativity that we always celebrate, the scientists and the artists that we admire.
So it means, you know, not just solving a math equation or a conjecture, but coming up with
a breakthrough conjecture.
That's much harder. You know, not solving something in physics or some bit of chemistry,
some problem, even like AlphaFold's, you know, protein folding, but actually coming up
with a new theory of physics, something like, you know, what Einstein did with general relativity,
right?
Can a system come up with that?
Because of course, we can do that.
The smartest humans, with their brain architecture,
our human brain architectures, have been able to do that in history.
And the same on the art side, you know,
not just create a pastiche of what's known,
but actually be Picasso or Mozart and create a completely new genre of art
that we'd never seen before, right?
And today's systems, in my opinion, are nowhere near that.
Doesn't matter how many, you know, Erdős problems you solve,
which, for some reason... I mean, you know,
it's good that we're doing those things.
But I think it's far, far from what, you know, a true invention or someone like a Ramanujan
would have been able to do.
And you need to have a system that can potentially do that across all these domains.
And then on top of that, I'd add in physical intelligence, because of course, you know, we can play
sports and control our bodies to amazing levels, the elite sportspeople that are walking
around, you know, here today in Davos.
And we're still way off of that on robotics as another example.
So I think an AGI system would have to be able to do all of those
things to really fulfill the original sort of goal of the AI field. And I think, you know, we're
five to ten years away from that. I think the argument would be that if something can do all
those things, it would be considered superintelligence, but you think AGI is a good term for that.
No, of course not, because those individual humans could, we can come up with new theories.
Einstein did, Feynman did, all the greats, all my scientific heroes, they were
able to do that. It's rare, but it's possible with the human brain architecture. So superintelligence
is another concept that's worth talking about,
but that would be things that can really go beyond
what human intelligence can do.
We can't think in 14 dimensions
or plug weather satellites into our brains.
Not yet, anyway.
And so those are truly beyond human or superhuman.
And that's a whole other debate to have,
but once we get to AGI.
I was listening to you recently
and something you said really surprised me.
You were asked on the Google DeepMind podcast,
which is a great listen, which system you have today that is closest to AGI. I thought it might be Gemini 3.
You named Nano Banana. Yes. The image generator. Yes. What?
Well, you know, sometimes you have to have these fun names and have fun with those and, you know.
But how is an image generator close to AGI? Oh, well, of course. Look, let's take image generators,
but also let's talk about our video generator, Veo, which is the state of the art in video generation.
I think that's even more interesting from an AGI perspective.
You know, you can think of a video model that can generate you 10 seconds, 20 seconds of a realistic scene.
It's sort of a model of the physical world.
Intuitive physics, we'd sometimes call it in physics land.
And it has sort of intuitively understood how liquids and objects behave in the world.
And obviously one way to exhibit understanding is to be able to generate it,
at least accurately enough to be satisfying to the human eye.
Obviously, it's not completely accurate from a physics point of view, and we're going to
improve that.
But it steps towards having this idea of a world model, a system that can understand the
world and the mechanics and the causality of the world.
And then, of course, that would be, I think, essential for AGI, because that would allow
these systems to plan, long-term plan, in the real world over perhaps very long time horizons,
which of course we as humans can do.
You know, I'll spend four years getting a degree
so that I have more qualifications
so that in 10 years I'll have a better job.
You know, these are very long-term plans
that we all do quite effortlessly.
And at the moment, with these systems,
we still don't know how to do that.
We can do short-term plans over one time scale.
But I think you need these kind of world models.
And if you imagine robotics,
that's exactly what you want for robotics:
robots planning in the real world,
being able to imagine many trajectories
from the current situation
they're in, in order to complete some task. That's exactly what you'd want. And then finally,
from our point of view, and this is why we built Gemini as multimodal from the
beginning, able to deal with, you know, video and image, and eventually converge that all into one
model. That's our plan, and it'll be very useful for a universal assistant as well.
So let's talk product a little bit. I watched the documentary, The Thinking Game, along with 300
million other people. There was something kind of interesting that happened there. Throughout the documentary,
you and some colleagues kept pointing your phone at things and asking an assistant, Astra,
what was going on. And I was yelling at the computer, as I usually do, and said, this guy needs
glasses. He needs smart glasses to be able to do it. The phone is the wrong form factor. What is your vision
for AI glasses and when is the rollout happening? I think you're exactly right.
Right, and that was our conclusion.
It's very obvious when you sort of dogfood these things
internally that, as you saw from the film,
we were holding up, you know, you're holding up your phone
to get it to tell you about the real world.
And it's amazing, it works, but
it's clearly not the right form factor
for a lot of things you want to do.
You know, cooking or you're roaming around a city
and asking for directions or recommendations
or even helping the, you know, partially sighted.
There's a huge, I think, use case there to help with those types of situations.
And for that, I think you need something that's hands-free.
And the obvious thing is, for those of us anyway, that wear glasses like me, is to put it on glasses.
But there may well be other devices, too.
I'm not sure that glasses are the final form factor.
But it's obviously a clear next form factor.
And of course, at Google and Alphabet, we have a long history of glasses.
And maybe we were a bit too early in the past.
But I think my analysis of it and talking to the people working on that project was a couple of things.
The form factor was a bit too chunky and clunky, and the battery life and these kinds of things, which are now more or less solved.
But I think the thing it was missing was a killer app.
And I think the killer app is a universal digital assistant that's with you, helping you in your everyday life.
And it's available to you on any surface: on your computer, on your browser, on your phone, but also on, you know, devices like glasses
when you're walking around the city.
And I think it needs to be kind of seamless
and kind of know each of those contexts
and understand each of those contexts around you.
And I think we're close now, especially with Gemini 3,
I feel we've finally got AI that is maybe powerful enough
to make that a reality.
And it's one of the most exciting projects we're working on,
I would say, and it's one of the things I'm personally working on
is making smart glasses really work.
And we've done some great partnerships with Warby Parker and Gentle Monster and
Samsung to build these next-generation glasses.
And you should start seeing that, you know, maybe by the summer.
Yeah, Warby Parker did have a filing that said that these glasses are coming out pretty soon this year.
Yeah, and the prototype design.
It depends how, you know, we're in the prototype phase.
It depends how quickly that advances.
But I think it's going to happen very soon.
And I think it will be, you know, a category,
a new category-defining technology.
Given your personal involvement, is it safe to say that this is a pretty important initiative
for people?
Yeah, well, yes, but it's, I mean, you know, it's not just that it's important.
Obviously, I like spending my own time on important things, but I like to push
the most cutting-edge thing,
and that's often the hardest thing, and picking interim goals and giving confidence to the team,
and also just sort of understanding if the timing's right.
And over the years I've been doing this, the many, you know, the decades now,
I've got quite good at doing that.
So I try to be at the most cutting-edge parts, where
I feel I can make the most difference.
So things like glasses, robotics I'm spending time on,
and world models.
Right, okay, so timing's right for glasses.
Let's talk about ads.
Sure.
Is the timing right for ads?
Let me say it that way.
Yes.
Okay.
There's been some news that Gemini might include ads.
There's been some news that some of your competitors might include ads.
The funniest thing I saw about that on social media was someone who said,
these people are nowhere close to AGI.
It's not going to be this world-disrupting technology if the business model is advertising.
Yeah.
Well, it's interesting.
I think those are tells. You know, I think actions speak louder than words, going back to the original conversation we were having about, you know,
Sam and others claiming AGI's around the corner.
Why would you bother with ads then?
So that is, I think, a reasonable question to ask.
But I think, look, from our point of view, we have no plans at the moment to do ads.
If you're talking about the Gemini app, right, specifically.
I think, obviously, we're going to watch very carefully, you know,
the outcome of what ChatGPT is saying they're going to do.
I think it has to be handled very carefully because the dichotomy I see is that if you want an assistant that works for you,
what is the most important thing? Trust.
So trust and security and privacy, because you want to share potentially your life with that assistant,
then you want to be confident that it's working on your behalf and with your best interests.
And so you've got to be careful.
I think there are ways one could do it, but you'd have to be careful that the advertising model doesn't bleed into that
and confuse the user as to what the assistant is recommending to you.
I think, you know, that's going to be an interesting challenge in that space.
And that's what not to do.
And Sundar in a recent earnings call said there are some ideas within Google of the right
way to approach this.
Sure.
How do you approach advertising?
Well, you know, we're still brainstorming that.
But I think there are also, you know, very interesting ways; when
you think about glasses, devices, there are other revenue models out there.
Okay.
So, you know, it's going to be interesting to see.
I don't think we've made any strong conclusions on that.
but it's an area that needs very careful thought.
Just to get a definitive answer from you, I think you've given it, but I'm just going to do it one more time.
I read before we met, Google has told advertisers in recent days, this was from last year, that it plans to bring ads to its AI chatbot Gemini in 2026.
Nope.
We have no current plans.
That's all right.
That's pretty definitive.
All right.
Let's just keep going through some of your competitors, Anthropic.
Claude.
Claude.
Yeah. Claude Code and Claude Cowork have caused a tremendous amount of buzz.
Yeah. It is amazing to see what some people have done.
I saw a post from an ex-Amazon executive who said that he built a custom CRM in a weekend,
or actually a day and a half. That's called a weekend.
What do you think about it? Yeah. And do you plan to have an answer to it?
It's very exciting. And I think, you know, kudos to Anthropic.
I think they built a very good model there with Claude Code. We're very happy with the
current coding capabilities of Gemini 3; it's very good at certain things like front-end work.
I've been using it over Christmas to prototype games. So it's amazing. It's getting me
back into programming. I love the whole vibe coding wave that's happening. I think it will open up
the whole productivity space to designers, creatives, artists that maybe would have had to work
with, or have access to, teams of programmers. Now they can probably do a lot more just on their
own. I think that's going to be amazing once that's sort of out in the world in a more general way
to create lots of new creative opportunities. We're very happy with our
work on code. We've got, you know, we've got more to do there. We've just released
Antigravity, our own IDE, which is very, very popular. We can't actually serve all the demand that
we're seeing there. And we're pushing very hard on coding and tool use performance of Gemini.
But it's one thing that I think Anthropic have fully focused on.
They don't make image models, multimodal models, world models.
They just do coding and language models.
And they're very, very good at that.
And we're pleased to be partnering on that on the one hand.
And also it gives us something to push for to improve with our own models.
Let's just talk broadly about the AI industry business.
I have a theory for how this could all fall apart.
And I want to run it by you.
So it's a three-step process.
The first is that large language model training runs produce limited returns.
The second is that there are Flash models like Gemini Flash that run AI computing as cheap as search.
And then step three is that the massive infrastructure commitments that have been made become somewhat useless, given those two factors.
And there is a cascading collapse that happens.
Is that a legitimate worry?
I think it's a plausible possible scenario. I don't think it's the likely one, in my opinion. I mean, in my mind, there's no doubt AI has already proved out enough, I would say, and our work, I think, in things like science and AlphaFold and drug discovery shows that it's here to stay. It's not like tomorrow, oh, we found out AI doesn't work. We've blasted way past that. So I think it's clearly going to be the most transformative technology in human history. There's maybe a question mark about timelines. Is it two years or five years? I mean, either way,
it's very soon for something this transformative.
And I think we're still in the nascent era of actually figuring out how to make use of it and deploy it.
Because the technology is improving so fast, I think there's a huge capability overhang, actually,
of what even today's models can do that maybe even us building those things don't fully know.
So I think there's just a vast amount of product opportunities that we see.
And I think we, as Google, are only just starting to scratch the surface now
of actually natively sort of plugging these things into our amazing existing products,
let alone building the new ones.
You know, an AI inbox.
We've just started trialing that.
I mean, who wants to do email admin?
I mean, wouldn't we all love that to just go away?
That's my number one pain point for my working day.
And there's so many examples like that.
Just waiting to be addressed, I think, you know, agents in browsers, helping out with
YouTube.
Obviously, we're now powering search with it.
So I think there's enormous opportunities.
And if you're talking about the AI bubble, if that's the question.
I was trying to not ask the AI bubble question.
Well, I think it's fine.
I mean, it seems like that's the question.
I'm very happy to answer it.
Because I think, look, my view is it's not binary.
Are we in a bubble, not in a bubble?
I think parts of the AI industry probably are.
And other parts, I think it remains to be seen.
So I think some of the things are, you know,
when you see seed rounds of tens of billions of dollars
for companies that basically have no product or research,
just some people coming together, that seems a bit unsustainable to me in a normal market,
a bit frothy. On the other hand, you know, for businesses like us, we have massive underlying
businesses and products where it's very obvious how AI would increase the efficiency or the
productivity of using those products. And then it remains to be seen how popular the monetization
of these new AI-native products, like chatbots, glasses, all of these things, will be.
I think there will be enormous markets, but they're yet to be proven out. But from my perspective,
you know, running Google DeepMind, my job is to make sure that whatever happens with an AI bubble,
if it bursts or if there isn't one and it continues, we win either way. And I think we're
incredibly well positioned as Alphabet in either case, you know, doubling down on existing
businesses in the one case or being at the forefront and the frontier in the bull case.
Going back to The Thinking Game, speaking of the way that this will impact the economy, I started to feel bad for the opponents of your technology.
Lee Sedol, demoralized.
This guy, MaNa, who played StarCraft, beat your bot, but realized that it's basically over for humans versus machines.
Now, we're all up against this in some way as this stuff makes its way into knowledge work.
I thought you were meaning our AI competitors.
Them I'm okay with.
I don't feel sad about that.
So a relentless progress of AI.
You mean the gamers?
The gamers, yeah.
You made me feel bad for gamers.
But I want to ask about this.
We're going to have the same situation with knowledge work.
That these models that performed admirably against the world's best StarCraft and Go players
are now starting to do our work.
And are we going to end up in the same position?
Well, look, let me give you this.
Since you brought up games as an example, let's look at what's happened in games.
So chess: we've had chess computers, since I was a teenager, that are better than Garry Kasparov
in the 90s, right?
They weren't general AI systems, but they were, you know, Deep Blue.
Chess is more popular than ever.
No one's interested in seeing computers playing computers.
We're interested in Magnus Carlsen playing, you know, the top, the other top chess players
in the world.
Interestingly, in Go, the best Go player in the world is South Korean, and he was about
15, I think, when the AlphaGo match happened. He's in his mid-20s now, and he's by far the strongest
player there's ever been by the Elo ratings, because he learned natively young enough.
He was, you know, he's the first generation, you could say, that's learned with AlphaGo
knowledge in the knowledge pool. And, you know, he may actually be stronger than AlphaGo was back
then. So I think, and we all still enjoy StarCraft and all the other computer games.
We enjoy human endeavor. I think it's a bit similar to, like, we still love the
hundred meters Olympic race, even though we have vehicles that can go way faster than
Usain Bolt, but, you know, that's a different thing, right? And so I think
we have infinite capacity to adapt and sort of evolve with our
technologies. And why is that? Because we are general intelligences. That's the thing
about it: we are AGI systems. Obviously, we're not artificial. We're general
systems and we're capable of inventing science, and we're tool-making animals. That's what
separates us humans from the other animals: we're able to make tools. All of modern civilization,
including computers and of course AI, being the ultimate expression of computers, all
has come from our human minds, which were evolved for, you know, a hunter-gatherer lifestyle.
So it's kind of amazing we were able, and it shows how general we are that we're able to get
to the modern civilization we see around us today. And we're talking about things like AI and,
you know, science and physics and all these things. And I think we'll adapt again. But there is an
important question, actually, beyond the economics one about jobs and those things is purpose and
meaning, because we all get a lot of our purpose and meaning from the jobs we do. I certainly do from the
science I do. So what happens when a lot of that is automated? I think, you know, that's why
I've been calling for, you know, I think we need new great philosophers, actually.
And it will be a change to the human condition.
But I don't think it necessarily has to be worse.
I think it's like the Industrial Revolution, maybe 10x of that, but we'll have to adapt again.
And I think we'll find new meaning and things.
And we do a lot of things already today that are not just for economic gain.
You know, art, extreme sports, polar exploration, many of these things.
And maybe we'll have much more sophisticated, esoteric versions of those things in the future.
Okay, two minutes left. I have two questions. I don't know if we're going to get to both of them.
Let me ask the one that I want to know the answer most about. In a recent interview, you said that you have a theory that information is the most fundamental unit of the universe, not energy, not matter, information.
Yeah.
How?
Well, look, I think if you look at energy, I mean, I don't know if we can cover this in two minutes,
but with energy and matter, you can definitely, I think a lot of people sort of think of them
as isomorphic with information.
But I think information is really the right way to understand the universe.
So we think of biology and living systems.
We're information systems that are resisting entropy, right?
We're trying to retain our structure, retain our information in the face of, you know,
a randomness that's happening around us.
And I think you can look at that, you know, in a larger physics scale.
So almost not just biology, but things like, you know, mountains and planets and asteroids,
they've all been subject to some kind of selection pressure, not Darwinian evolution,
but some kind of external pressure.
And the fact that they've been stable over a long amount of time means that that information
is kind of stable and meaningful.
So I think one could view the
world in terms of its complexity, information complexity. And I think a lot of what we're doing,
the reason I'm thinking about all of that, is because of things like AlphaGo and
AlphaFold, especially AlphaFold, where, you know, we solved all the protein structures that are
kind of known to science. And how have we done that? Well, because only a certain number of those,
in the kind of almost infinite possibilities of protein structures, are stable. And those are the
ones you've got to find. So you've got to understand that topology, that information topology,
and follow it. And then suddenly these problems that seem to be intractable, because, you know,
how can you find the needle in the haystack actually become very tractable if you understand
the energy landscape or the information landscape around that. And that's how I think eventually
we'll solve most diseases, come up with new drugs, new materials, new superconductors, with the help
of AI helping us navigate that information landscape. Demis, before we go, I just want to wrap with
this. Maybe quickly this first one and then a big question at the end. First, in The
Thinking Game, speaking of health and AI, there's this moment where there's a discussion in the
lab about whether to release the results of AlphaFold. And you kind of sit there adamantly
and you're like, why are we going through a process? Release it. Release it now. Talk a little bit
about the lesson from there. Yeah, well, look, we started AlphaFold to crack an unbelievably tough
scientific challenge, a 50-year grand challenge of protein folding and protein structure prediction.
And the reason we worked on that and the reason we've put so much effort into it is we sort of thought it was a root node problem.
If we could solve it and put that out in the world, it could do an amazing untold impact on things like human health and understanding of biology.
But we as a team, no matter how talented or hard-working we are, we would only be able to scratch the surface, a small, tiny amount of that potential, on our own.
It's clear.
So in that case, and in this case, it was obviously the right thing to do, to maximize the benefit to the
world here, to put it out there to the massive scientific community to build on top
of and use AlphaFold.
And it's been incredibly gratifying to see, you know, 3 million researchers around the world
use it in their important research.
I think in future, almost every single drug that's discovered from now on will probably
have used AlphaFold at some point in that process, which is, you know, amazing for us and, you
know, really, that's what we do all the work for.
I also read that moment, you tell me if I'm wrong, as something of a metaphor: a small,
passionate AI division kind of yelling in a big company, get this out, cut the red tape. Yeah,
potentially. But look, I mean, we've had amazing support from the beginning from Google, and
the reason that we, you know, we joined forces with Google back in 2014 is Google itself is a
scientific-research, engineering, technically led company, always has been, and has that at its
core. And that's why, you know, I think that we have the scientific method and the
scientific approach, that thoughtful approach, that rigorous approach, in
everything we do. So of course, they're going to love something like AlphaFold.
Okay, here's the big question at the end. You built AlphaGo, trained the computer to play
Go on human knowledge. And then once it mastered human-level play, you kind of, like, let it
loose with a program called AlphaZero. Yeah. And it started doing things that you could never even
imagine and making new circuits in ways that surprised you. Eventually, maybe there will come a time where
LLMs, or some version of them, reach a mastery of human knowledge in the same way.
What is going to happen when you then let that loose and it does the same,
potentially does the same thing as alpha zero?
I think it would be really exciting.
I mean, that's what, to me, would be the AGI moment: you know,
then it will discover a new superconductor, a room-temperature superconductor that's possible within the laws of physics,
but we just haven't found that needle in the haystack, or a new source
of energy, a new way to build optimal batteries.
I think all of those things will become possible.
And indeed, not just possible, I think they will happen once we get to a system that's,
first of all, got to, you know, human level knowledge.
And then there'll be some techniques.
Maybe it will have to help invent some of those techniques, but kind of like AlphaZero,
that will allow it to go beyond into new uncharted territory.
That idea of, like, plugging a weather system into its brain, like it's going to be on its way
to that.
Yeah, exactly.
All right.
Exciting times.
Demis, thanks for coming on the show.
Thank you.
Thanks, everybody.
