Big Technology Podcast - How Google DeepMind Operates & Experiments — With Lila Ibrahim and James Manyika
Episode Date: February 18, 2026. Lila Ibrahim is the COO of Google DeepMind. James Manyika is the Senior Vice President for Research, Technology, and Society at Google. The two join Big Technology Podcast to discuss how Google's AI effort operates and runs experiments. In this conversation, we discuss the fundamental operating structure of DeepMind, how Google proper has become more experimental with the revival of Labs and other programs, and how the company is thinking about AI and education. We also cover weather and flood prediction at global scale, and training AI in space. Hit play for a deep inside look at the mechanics behind Google's AI research machine and the big ideas it's betting on next.
Transcript
How does Google DeepMind operate and make bets?
And what's making Google more experimental?
Let's talk about it with two Google leaders right after this.
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond.
We have a great show for you today because we're going to go deep inside the way Google's AI and technology research operations work.
We have two great guests with us today. Lila Ibrahim is here.
She is the chief operating officer of Google DeepMind. Lila, welcome.
And we're also joined by James Manyika.
James is the SVP of Research, Labs, Technology, and Society at Google. James, welcome.
Well, thanks for having me.
And of course, this is our concluding conversation here in our series at Davos.
And we do have a live audience.
Live audience, make some noise.
Let them know you're here.
All right.
So much to get to not a lot of time.
Let's just start with the way that Google DeepMind operates.
Demis Hassabis, the CEO of Google DeepMind, who was recently on the show, has described DeepMind as sort of a modern-day Bell Labs.
But what does that mean, Lila?
Can you tell us a little bit about how the research works? Is it a lab? An operating company?
How does it operate?
Maybe I should start with our mission because I think everything is kind of based off of that,
which is to build AI responsibly to benefit humanity.
And so the first thing we do is take really ambitious research agendas.
We structure it in a way where we're looking at what are the big problems,
but not telling people how to do it.
And when you think about how we first approached that,
it's really about taking inspiration from the golden
era of Bell Labs, but also government programs like the Apollo program, and even more recently,
Pixar. So it's all focused around bringing in really great talent and creating an environment
for them to succeed and to explore. So the first thing is that big research agenda: telling people
the area to focus on, but not how to do their job. The second thing is really
because it's such a broad agenda, we want to build interdisciplinary teams. How do you create a culture
where you can have a bioethicist next to a computer scientist and a neuroscientist
because we think that's really where the magic happens and unlocks the work.
And, you know, this type of approach has resulted in such extraordinary efforts.
And we're also not afraid to explore and then say, when is it time?
I think Demis has a remarkable way of measuring time.
Like, time to explore.
Are we setting the really ambitious goals?
How are we making progress toward them?
And also not being shy to say, okay, now is the time to take a step back and pause it, or double down.
A great example of that is over the past few years, we've been doing a lot of work around one science
area, learning science. How do people learn, and can we improve it? Right. And then this year,
Demis was like, okay, Gemini is good enough. It's time to infuse everything we've done with the industry
around learning science into Gemini. And that was one of our focus areas: to really advance how
Gemini could be provided for learners. So there's something, I think, quite magical within Google DeepMind about timing.
Okay, GDM, I guess we're going to go with that. Everybody in the tech industry is an acronym;
I almost have to catch myself before saying it. So let's talk about that. I just want to
talk through process a little bit. The way that you just described it, Demis said that Gemini
was ready for learning and then Google DeepMind started to pursue it. How much of what Google DeepMind works on
is top-down versus bottom-up?
A way that I've heard OpenAI describe the way that it works
is like a bunch of different startups
within a larger company.
Is that a similar way that Google operates
or does it come more from the top?
Well, because our mission is so ambitious,
we're really trying to understand
what are the big challenges
where AI can help us unlock our understanding
of the universe around us
and solve some of humanity's biggest challenges.
And it's broad enough that we can do things like weather exploration and trying to predict weather forecasts,
or AlphaFold and protein structure prediction, to help us better understand diseases so we can come up with better therapeutics.
Generative AI, how can we continue to improve that to make people's lives better?
So again, we take a very broad portfolio perspective, but we allow the space for researchers to explore.
And that's really what I meant in the beginning of, like,
we've got to find the right talent.
So mission-driven, culture and values aligned,
people who want to have this type of exploration
and a big impact and scale that we can have
of being part of Google.
So I would say some of this is that Demis is quite remarkable
in terms of his thinking in this space,
because he's been doing it for so long, right?
DeepMind was founded 16 years ago.
It's been kind of a lifelong mission
of his, and yet we have an organization full of people who are creative, who like to work
in an interdisciplinary environment, who want to have impact in this world. So they also come up
with their own approach to things, setting goals. So it's a little bit of both. Pardon me? Yeah,
a little bit of both. Some top-down from Demis and then some bottom-up. Okay. Which makes
managing that organization structure quite a challenge. I'm definitely going to talk with you about talent. I will
talk with you about talent, for sure. And, you know, on that note, how have things changed?
Because I'm just going to talk about the tech industry more broadly. It seems like there
used to be a moment where a lot of tech companies gave, you know, these talented people
broadly way to explore things that might not have immediate results. Then all of a sudden,
we got into this AI race. And many companies brought their researchers who were working on these
long-term projects much closer to the product.
And all of a sudden, there was a almost imperative for long-term research to make immediate
product impact. So has that changed as well over time? Is that something that's going on
within DeepMind as well? Yeah, I joined about eight years ago, and we've definitely
been on a journey. But what I think is so exciting about Google DeepMind, and I think why
so many of our employees stay so long is because we have that breadth of portfolio.
So there are some people that want to continue the deep research, frontier AI research that they do,
or a scientific, more focused on the science.
And we have the space to do that exploration,
while also delivering on the advancements around generative AI,
such as all the progress we made last year with Gemini.
Okay. Let me take that a step further.
The way that the transformation within Google has been described is that instead of having every
product area or product group chart its own direction on AI, there's now this central engine room
within the company, which is, I think, the AI division that generates, that creates the
AI and then farms it out to these product areas. So can you talk a little bit about that process
and how that works? Yeah, and actually I think that's been one of the
exciting things over the past few years, with the combination of Google Brain and DeepMind, bringing
the best of Google's AI teams and research together under one roof, where
we could explore such a broad portfolio. And so we've really been focused on, as you mentioned,
becoming the AI innovation engine. And then I wouldn't say we farm things out to other Google
teams. We collaborate very closely with the product areas and their customers to understand what
the needs are so that we can build the models better from the start and do so in a very
collaborative and responsible way such that by the time it goes to different Google products,
it's already been through a lot of that testing and can be refined for that specific use case.
And that's actually helped us. I think what's resulted from that, for example,
is Gemini 3. We launched it, and then it was available to a broad group of developers and
users right away. All right. One last question on this. And then we're going to go to
James. And James, thanks again for being here. So let me just ask you this. On our show, we have this
hypothesis that Sundar spent time at McKinsey, and this is sort of like a McKinsey style approach
to like reorg, centralize, and then work with the groups. Is there truth to that?
Well, you have a former McKinsey person here who might be able to address the structure.
James?
No, I think what you've got going on is an extraordinary thing, right? Because on the one hand,
you've got the Gemini program,
which underlies all of this,
building the kind of large-scale models,
Gemini itself, Gemini 2.5, 3, and all of that.
And this came about three years ago,
when we put together the Google Brain team
and the DeepMind team to create the Gemini program.
So that program now underlies all the things across the company.
So you see Gemini show up in search,
in Google Workspace.
It shows up in all our products,
all of these things.
So it's kind of the foundation,
and that's why, as Lila said,
Google DeepMind and the Gemini program
have become the engine room.
But in addition to that,
you've got all these other things going on.
There's deep science going on in the company.
I mean, there's this idea of, you know,
foundational work tackling the biggest root-node problems
that open up research and innovation in so many areas.
So you've got all of that going on too.
And then you've got all these other, you know,
kind of ambitious projects, working on things like Genie, which builds world models. You've got work going on to build special things for Waymo, and to enhance the models that lead to the Waymo Driver. So you've got a lot of these things going on. So I don't think it's top-down as much as: let's take advantage of the foundation called the Gemini program, and make sure that every time we do these rapid iterations (you've seen it, we're now in a cycle where every six-ish
months there's a new generation of Gemini) it shows up immediately. As
Lila described, there's no, you know, shipping delay. So the minute the latest version of
Gemini comes out, you're going to see it in Search, you're going to see it in, you know, the
Gemini app itself, you're going to see it everywhere. So that's kind of the incredible thing
that's happened over the last three years. All right, I want to talk about Labs. So Google Labs: a lot
of us who used Google products in the early days, you know, we saw this era of experimentation
within Google and then Labs went away for a bit, not that Labs was the only bit of experimentation
within the company, but then Labs was revived. And it seems like we're starting to see many
more experimental projects come out of Google proper in a way that we hadn't seen in a long time.
So how responsible is Labs for that? And why is Labs back? Oh, Labs is so much fun. So what
actually happened was three years ago, you know, this was a kind of inspired Sundar moment. He
said, let's reboot Labs. And, you know, we're in this AI moment. How do we kind of explore
and experiment and build these products that are totally AI-first? So the idea
with Labs is, let's take the most amazing research coming out of Google Deep Mind and Google Research
and any other place, quite frankly, in the company where there's incredible research and
focus primarily on how we build experimental AI-first products. I think what most people probably
know the most is what's now, you know, NotebookLM. You know, the way that started, by the way,
is incredible, because I remember when I first encountered it. So what is NotebookLM? Tell
the story. So NotebookLM is fun. So it started out as a product called Tailwind. There were four,
five people working on it. And the idea was, you know, can we build a very AI native research tool
that is grounded on what you put into it.
So in other words, your sources,
you might have books,
you might have papers,
you might have drafts,
you may have whatever your content
that you want to ground it on,
put it in a notebook and be able to engage with it.
So that was the conception of the idea.
And in fact,
in some ways it got additional impetus
from Stephen Johnson,
who's a writer.
And Stephen Johnson,
is one of these people who kind of keeps everything.
So he has notes from the 90s
and drafts of books
and all kinds of things.
He said, I'd love a product where I can dump all that stuff in
and engage with it.
What was I thinking in 1997?
What was that draft I did?
And be able to engage with all of that.
So what NotebookLM has become is this incredible research tool
grounded on what you've put in.
And when you engage with it and it summarizes or drafts something,
it gives you these citations.
And that's in some ways is a key feature of it.
So if it says, Alex, you know, you said this, or your source
says this, and summarizes it in some way, it'll give you citations.
If you want, you can click on the citations.
They take you all the way back to the original content.
Right.
So, which is incredibly useful.
Then a fun thing happened, which was, well, you know, so it was already a very useful
tool.
Then we said, well, actually, you know what?
Sometimes I want to hear my sources as opposed to just engage with them.
So I said, okay, well, actually the technology is ready enough.
We can actually add AI audio overviews.
Which is, like, effectively, a podcast.
You can have it with, like, two hosts.
You could have it.
Actually, the original idea wasn't even to do that.
So initially the idea was, a few of us, you know, the legendary Jeff Dean, said,
well, actually, you know what?
We're reading all these papers that are coming out at this incredible pace in the computer science field.
It would be nice to be able to hear a summary of them verbally while I'm driving into work or something.
So then, you know, I can figure out which papers I'm going to read.
So that was the original idea.
Then they said, actually, no, you know what?
It's easier to learn stuff when you hear people talking about it, engaging.
That's why seminars are interesting, right, as a learning mechanism.
So that's where the idea came from.
So we did these audio overviews, in the form of a podcast, or a discussion with two hosts discussing it.
And now it's evolved.
And that's when the product just kind of took off.
Yeah, whenever I give a presentation about AI, that's the party trick, where I build one of these notebooks in front of the audience.
And then I hit play on the podcast, and for people who haven't
seen it before, it's like a jaw-drop moment. In fact, we've had multiple people on our YouTube
feed and coming from the podcast ask, Alex, did they train on your voice? Because it sounds a lot
like me. And I say, no, listen, they always say, let's unpack this, at the beginning. And you have to
understand, every podcaster says that. So it's not me. Actually, you know, one of the most fun use cases of
a notebook, by the way, is because now you can put in things in all kinds of formats. There can be papers,
there can be YouTube videos,
there can be whatever is on your hard drive.
One of my fun use cases was actually when I had to do this thing
where I was seeing all these papers
from literally over 100 countries in different languages.
So I put them all in and just engage with content
in multiple other languages,
because NotebookLM can handle multiple languages.
And now you can do video overviews.
Right, it can make, not an animated video, but a video with, like, graphics.
With graphics and slides.
But I think this is an example of the kind of thing that happens
in Labs, where we try to take this incredible research that Lila and colleagues and others
are doing at Google DeepMind and Google Research and say, how do we build amazing AI-first
products?
Flow is another example.
And if you play with Flow...
So I'll tell you a story about Flow.
Then I'll let you talk a little bit more about it.
I just did my first and last mountain climb.
And it was Cotopaxi in Ecuador.
And I wanted to make a video sort of capturing the moment.
But there were a couple things that happened
that I didn't videotape, because I decided to spend the climb actually climbing as opposed to
YouTubing, which is apparently, from what I hear, rare these days.
And there was a moment where my water bottle fell out of my backpack and rolled down the glacier
and then kind of disappeared into the darkness.
And I wanted to illustrate that.
So I went to Flow, the Google video generator, and I said,
I want to make an animation, documentary style, to show this, and slotted that into the video.
So now you can, and before I would have to hire an animator.
Now you can do it.
Yeah, no, it's incredible.
But I think, well, you know,
flow is an example of the magic that happens in labs.
So I remember a bunch of us got together.
So Josh, who runs Labs, and, you know,
Demis and a few of us said,
what if we put all these tools we now have together
into something that's actually useful?
And in fact, the initial version of it that we have,
you know, in some ways it was clunky.
Then we said, well, actually, let's just talk to some actual filmmakers
and get their input.
So one of the things that happens in labs, by the way,
is we try to engage a lot with,
creatives and others to help us think about how we build these tools.
So anyway, that's how flow came about.
Yeah, you can build scene by scene prompting into video.
And you can have continuation.
I think that's probably where the name comes from.
It can flow.
And what you just said was an insight that came from filmmakers.
In fact, the initial reaction was, no, no, no, what you've got is actually
not very useful.
I'd like to be able to build things scene by scene and be able to stitch them together,
be able to do this.
So, you know, so that's why it's been helpful.
So if you say, what is labs?
It's a place where we try to experiment with all these things.
At any one time, we probably have about 30 experiments cooking.
So if you go to the Google Lab site, you'll probably see about 30 different things.
But I have a request for you.
Broaden the access.
Because there's a lot of projects in there that seem really interesting to use.
But every time I'm there, it's a wait list.
We'll work on that.
We'll work on that.
So for example, one of the other ones,
and sometimes we're surprised what people find useful.
I'll give you an example.
One is Pomelli, which is a tool for SMBs. Imagine,
this is not your typical kind of techie startup SMB,
but a kind of more traditional SMB,
that wants to build a web presence.
And so you can literally engage with Pomelli as an SMB
and be able to build a web presence in incredibly imaginative ways.
So we always have all these things cooking in Labs. AI Studio
is another example of the kinds of things.
This is for developers.
So we're trying to think of all these incredible creatives,
whether they're developers, artists, filmmakers, musicians,
to create these incredible AI-first tools.
Yeah, there's two that I really want to get access to,
and I think are potentially going to be big.
Maybe the next notebook LM.
There's CC, which is an experimental productivity agent
within Google, which looks great.
And then disco.
Oh, Disco's fun.
You can build a web app, basically,
based on links. So if you're like thinking about doing something for the weekend, you can just like
open a bunch of tabs and then it will figure out what type of app to make for you. So maybe a custom
map with dots for each potential event. And then you can pick the dates that you want to actually
be in the place that you're thinking about. And then it will sort of highlight what's going to be
available then. So this is to both of you. Back in the day, Google had this concept called 20%
time, where Google employees were basically empowered to spend 20% of their time on something
that wasn't core to their job description.
And that's where a lot of big Google products came from.
I think Gmail was a 20% project.
So I want to ask you both about these experimental projects.
Who builds them and is a version of 20% time back?
Or how does this, you know, obviously a lot of cool experiments.
How is it happening inside the company?
Well, I'm happy to start.
So I think effectively that's still alive.
So go back to labs.
So if you think about the things that are in labs,
I would say something like maybe 80% of them came out of people actually in the labs team.
The other 20% came from 20% stuff.
I'll give you a good example on a topic that...
20% time still lives within Google.
We encourage people to come up with those things.
So I'll give you a good example in an area that Lila and I care deeply about,
which is education and learning.
So somebody in Google Research came up with the idea. Oh, they were working on something else,
but they came up with the idea: what if we created a way for somebody to learn something their way, however they want to learn?
Because it's now possible to get these tools to help you learn in any number of ways.
So that eventually became Learn Your Way, which is an experimental product you'll find in Google Labs.
That was not done by somebody in Labs; somebody else in another part of the company came up with the idea.
So we constantly are getting all these ideas from across Google about these incredible things.
Another example that actually came out of Google DeepMind and Google Research is Co-Scientist,
which those teams worked on, which is a tool for scientists to do actual scientific discovery.
Now you're going to see that show up in labs as a way to test it, get other people to work on it.
But it wasn't, as it were, built inside labs.
So the notion of people generating ideas from across the company is very much alive.
And you get some exciting innovations from that.
Lila, do DeepMind researchers have the ability,
if they want to build an experimental product,
to maybe do that and...
Yeah, I think this is actually just part of our culture.
And that's really about
giving people the chance to explore
and also taking a very interdisciplinary approach.
So it's actually not just limited to researchers,
which has been quite exciting.
It's actually being able to pull together
different perspectives and trying to solve real challenges.
And sometimes that's even actually building
AI tools to help us accelerate how we're working. How does our legal team make the review of
research papers faster and be able to provide feedback? How do we do more
automated red teaming for our responsibility team? Or how do we understand ancient texts?
We have a project where one of our researchers decided he wanted to explore:
it's not just about intelligence today,
but what is it about knowledge from the past
that we might not know about?
So he worked to come up with a project
that was not just able to date a tablet,
but also to fill in missing gaps and translate it.
And so we now have Project Aeneas,
which is all about ancient texts.
So there are, to James's point,
one of the things that we have at Google
is really smart, curious people
and a culture that supports that exploration.
Yeah, as we close this segment,
I'll talk a little bit about why I'm so interested in it.
I think the average company last century,
once it reached the S&P 500, stayed on it for 67 years.
Now it's like 15 years.
And as this AI moment happens,
you know, I mean, Google's seen this firsthand, right?
Things will be moving even faster.
And where ideas come from, the imperative to experiment and, you know, create new projects,
I think that's key to any company's long-term sustainability.
So it's very interesting to hear how it operates within Google.
You know, I was going to comment.
I spent some of my career in venture capital,
and I used to say that that was the most remarkable place to be
because you'd have these entrepreneurs with audacious ideas
who wanted to build things.
And I think what's crazy about my experience at
Google is that this is just part of everyday culture, and it happens in all parts of the organization.
I think how it comes to life might be quite different in Google DeepMind
than in other parts of Google, but the fact is that it's supported across the entire organization.
Yeah, if I could add one more piece on this, Alex. I think one of the things that
is really quite unique about the research culture at Google, and I'm going back to your
original Bell Labs question, and this happens in Google DeepMind and Google Research, is
this idea that we've got to go from research to reality.
And I think what you see is a lot of these kind of research-originated
breakthrough ideas then very quickly transition into real-world impact.
I mean, AlphaFold is a good example, right, which is an incredible breakthrough,
Nobel Prize-worthy and all of that.
But look at what's happened since then, right?
You now have three and a half million researchers accessing it in over 190 countries.
You take some of the breakthroughs in weather forecasting and prediction:
they're now actually being used in the real world.
We now do flood forecasting,
which came out of a very incredible kind of research question,
but now it's covering 150 countries with 2 billion people.
So I think this idea of,
from research, breakthrough scientific research,
translating that to societal impact,
I think it's a very unique aspect of what we do.
There's a natural follow-up here that I have to ask,
because if I don't ask it,
the audience is going to be like, why didn't you ask that?
For many years, Google seemed like it was,
or at least the perception was that it was, afraid to ship.
Case in point: you created the transformer model, and ChatGPT is the first mainstream application built off of that.
In fact, I spoke with Sam, Sam Altman, you know, at the end of the year, and one of the things that he said,
one of the sort of notable things he said in that interview was that if Google took us seriously early on,
they would have smashed us, and now they're a formidable competitor.
So has the imperative to ship become something that's more important within Google,
and has there been more ambition
to bring these experiments out into the public?
I think there is, but I think there's a natural evolution of that.
I think one of the things that's important is, you know,
there are incredible amount of research breakthroughs going on,
and there's always going to be, at Google,
I think there's productive tension between,
is it ready, is it not?
And we don't always get that right.
And I think that tension, I actually think it's a great tension,
because this idea of part of being bold and responsible,
I think we have to live with that tension.
So you've got that going on.
But I think what you also see is a realization that for many of these experiments and innovations,
there's actually a lot to learn.
This is back to the scientific method by having people use it, experience it, and we learn from that.
So there's only so much kind of red teaming you can do of a product, and we do a lot of that.
But there's also a lot you can learn from when people use it, either, you know, usefully or even adversarially.
You're going to learn a lot from that.
I think that's been a bit of the evolution, that shipping and, you know, useful products
and also learning from that shipping is very helpful.
So you're seeing us, you know, we like to talk about this idea of relentless shipping.
So we're now kind of on the cycle of our Gemini models where every five, six months,
there's the latest generation.
I think that's part of what you're seeing going on.
Okay, I definitely want to make time to talk about AI and education, which I know
both of you have really worked on, but Lila, it has been a particular passion
of yours. Let's take a break and we'll come back right after this.
Starting something new isn't just hard. It's terrifying. So much work goes into this thing
that you're not entirely sure will work out. And it can be hard to make that leap of faith.
When I started this podcast, I wasn't sure if anyone would listen. Now I know it was the right choice.
It also helps when you have a partner like Shopify on your side to help.
Shopify is the commerce platform behind millions of businesses around the world and 10% of all
e-commerce in the U.S. from household names like
all birds and coto patsy to brands just getting started.
With hundreds of ready-to-use templates,
Shopify helps you build a beautiful online store
that matches your brand's style.
You can also get the word out,
like you have a marketing team behind you.
Easily create email and social media campaigns
wherever your customers are scrolling or strolling.
It's time to turn those what-ifs into with Shopify today.
Sign up for your $1 per month trial
at Shopify.com slash big tech.
Go to Shopify.com slash big tech.
That's Shopify.com slash big tech.
Here is the problem.
Your data is exposed everywhere.
Personal data is scattered across hundreds of websites,
often without your consent.
And that means that data brokers buy and sell your information,
your address, phone number, email, social security number,
and that exposure leads to real risks,
things like identity theft, scams, harassment, higher insurance rates.
Incogni tracks down and removes your personal data
from data brokers, directories, people search sites, and commercial databases.
Here's how it works.
First, you create your account and share the minimal information needed to locate your profiles.
Second, you authorize Incogni to contact data brokers on your behalf.
Third, Incogni will remove your data, both automatically with hundreds of brokers and via custom removals.
There's also a 30-day money-back guarantee.
Take back your personal data with Incogni.
Go to incogni.com slash big tech pod and use code big tech pod at checkout.
Our code will get you 60% off an annual plan.
Go check it out.
And we're back here on Big Technology Podcast with Lila Ibrahim, the COO of Google DeepMind,
and James Manyika, SVP of Research, Technology, and Society at Google.
It's great to have you both.
AI and education has been something that you're both passionate about
and have done a lot of work on.
A recent study that you did found that 85% of students 18 plus are using AI.
I mean, probably the other 15% aren't telling you.
And 81% of teachers report using AI, which far surpasses the global average, which is that 66% of the public uses AI.
So this is making real impact in education.
Let's just start with your perspective on: is this a net positive to education? Because I think the criticisms are out there, that kids are using
it to cheat and teachers are using it to grade those cheated papers. What's happening, you know, in practicality?
Well, I think, first of all, this is a really important area that,
as James mentioned earlier, we're approaching as we approach everything, which is:
how do we be bold in thinking about how AI might actually transform how people learn,
and really unlock human potential while also being responsible and thinking about what the risks are
and making sure that we're investing in mitigating those.
One of the things that we found also in that survey is about 80% of the 18 plus learners
are actually finding it's helpful for their education and their learning.
So it's giving them the information they need in the way, format that they might need it.
And one of the areas that we really have been focused on is making sure that it's not just
providing an answer, but that it will actually take you through the steps. And this is grounded in
everything we do, which is a scientific approach. So back up three years ago, we said, let's treat
learning like a first class science problem. How do people learn? And we have some of that
experience and expertise within Google. And we also know that the world is full of people who are
studying this. So we took a very deliberate approach to collaborate with pedagogy experts
and educators worldwide, and out of a lot of that came what we call LearnLM.
And this was the year that we infused that into Gemini, and then developed features like
guided learning in the Gemini app where you can go through and it helps you actually
break down the problem.
So it's teaching you how to learn and how to break down the problem.
And for someone like me who also happens to be a parent of teenagers, I think about this a lot.
I have twin daughters.
So I'm constantly running A-B tests.
Yeah, you should have one use AI and make sure the other doesn't, and then see who turns out better.
You know, what's interesting, well, I'll take that as input for my next experiment.
But one of my daughters is dyslexic.
And the way the education system has been built is not for someone like her.
And yet what I have found is when she can integrate AI into her learning process,
whether it's breaking down a math problem or helping her take her words
that are sometimes scrambled and put them into something more coherent, it's actually giving
her confidence in a way that I have never seen before. And I think back a lot to this: I also have
a sister with a physical disability. The tools were not there; the education system was not made for her.
Think about the entire world and how many students have been left behind because they just
didn't have access to this technology. So our idea is imagine that every student could have a
personalized tutor. And if every teacher could have a teaching assistant where AI is a productivity
tool that really could change the dynamic of how teachers and students interact. We're not saying
that the AI is the magic, the teacher is still the magic. But it frees up the teacher to actually
do that human-to-human interaction. And we've seen some really great progress in a lot of the work
that we're doing with productivity tools for teachers. I was just in Northern Ireland.
and teachers there, they worked with the government and ran a pilot,
and the teachers had like little post-it notes.
And what they found was, on average, they were saving 10 hours per week per teacher.
And their post-it notes were how they were using their time,
which was, I'm getting time back with my family.
I can now do lesson plans for different learners of different types
within my 30-plus student classroom.
It was so encouraging.
But there's still a lot to learn.
We're still in the early stages, and we have to go into this knowing that it is high stakes.
We're talking about people's lives and their longevity. Helping them learn, being able to learn and opening up the opportunities, and then being able to learn from that and integrate it into our research, is critically important.
Yeah, one thing I would add is I think one of the things we're learning is that learning is no different than other areas of society, right, which is when a new technology comes in, you don't just bolt it on to an existing process and an existing workflow.
You have to almost reimagine the workflow.
Let me give you an example in learning.
So we know that, you know, there's this issue and concern around cheating.
So in a world in which you have tools like this, I'm not quite sure you want to
do tests and assessment the old way, for example.
So we found something quite interesting when we, you know,
worked with school districts, for example. Lila described guided learning.
It actually turns out when students actually use guided learning,
they actually do learn, and, you know, their mastery of the subject improves.
But this school district found that actually, you know what,
maybe we should have more tests because we know that when students are getting ready
for a test, they actually do use guided learning,
whereas when they're just trying to hand in homework at 11 p.m. the night before, they don't.
Any student watching is going to have a heart attack here.
More tests?
So what they realize is that, well, let's do an experiment.
What if we actually have a weekly test?
Oh.
So in other words, let's expand this window when students are motivated to turn on guided learning
and actually master the thing because they're going to have to do a test.
They actually found that students were actually learning more.
So that's an example of how maybe we need to
reimagine even what the workflow and the learning process is,
as opposed to just trying to bolt a technology onto an existing structure and
existing workflow.
So there's a lot of interesting experiments and innovations that we're learning a lot
from by talking to teachers and some schools and school districts.
So I think we're at the very early stages of this.
But I think the concerns that people have around cognitive offloading and so forth,
those are real concerns.
And we have to work on that.
I do want to talk about that because, like with many things,
with technology and especially AI,
I think the concern is about these uses that we're talking about.
It's, by the way, amazing that LearnLM will go step by step
and actually, instead of spitting out an answer,
work with the person using it to be able to help them make progress.
But the issue is that some of the most ambitious people will use this,
this is a potential issue,
and their performance will just go through the roof.
But then it will just create this dichotomy between the people that use it the right way and those that use it the wrong way.
There was a great article in The New York Times recently about it's not just students, it's teachers.
The headline is the professors are using chat GPT and some students are unhappy about it.
And there's this student at Northeastern who is reading her professor's slides and seeing the slides filled with spelling mistakes and extraneous body parts in the images, which are like telltale signs of AI.
So what do you think about the fact that this could create even broader divergence in society, Lila?
Actually, it reminds me a lot of when we introduced computers into classrooms and into universities.
So I think there's actually quite a few lessons I have from those days that we're trying to explore and do research.
So one is what we can do about that.
But one thing we are also separately trying to do is convene leaders to talk about
how to approach this from a system perspective, bringing together administrators to say,
what is the framework that they want to use within their organizations for responsible usage
of the technology? I think one of the challenges we have right now is it's a little bit of
everything happening rather than taking an exploratory approach to say, listen, AI isn't going
away. Access and literacy, equitable access and literacy is important.
So some students might be using it because they want to get ahead.
Others are afraid they're going to be perceived as cheating, so they're not going to use it.
And that, to your point, that creates a separation.
And sometimes we see that based on gender, too, by the way.
Oh.
So I think what we can do is how do we bring together leaders to explore how we enter this next chapter?
How do we start to set the guardrails in a way that maximizes the benefits while mitigating the risks?
And we held an event, James and myself and a few other colleagues co-hosted late last year,
to start exploring and sharing best practices, what are people experimenting with, what is working, what's not.
And we had our researchers there as well.
We also did some hands-on training so that teachers can actually learn how to use the tools responsibly.
Again, I think this is more about unlocking productivity and potential versus, like, some of the replacement.
So we have to work on making sure the incentive models are in place as well.
That's for sure.
Okay, we have 10 minutes left.
So I think there's so much experimental technology that I want to talk about.
So can we just use our remaining time to go through four of your cutting edge technology approaches or disciplines,
maybe two minutes each or so, where we'll just kind of talk about the state of them.
It's definitely too much to cover in a short amount of time, but I don't want to leave here without touching on them.
So first to you, James, state of quantum seems like it's moving faster than a lot of people anticipate.
Yeah, quantum, you know, we have an incredible quantum AI team that's doing an extraordinary kind of path-breaking work.
And I think the headline on this is that I think quantum computing is actually making more progress than people realize.
Because keep in mind that the whole idea of what everybody is aiming for in quantum is how do we build a fully error-corrected quantum computer.
and there's been lots of different approaches to this.
I think the dominant approach that most people are taking
is the superconducting qubits approach.
That's what our team is doing.
There are other teams in the world that are doing that.
It's a very complex way of doing it.
People think it's the best shot at it.
But there are other mechanisms.
There are neutral-atom approaches.
There's a whole range of approaches.
I think the progress happens as follows:
the underlying chips are making incredible progress.
Our Willow chip, for example, hit a big milestone.
It was a big enough deal about a year and a half ago, where, you know, it was able to do a benchmark computation called RCS, which would take a classical frontier supercomputer 10 septillion years to do.
And that's, you know, a one with 25 zeros.
It's a big number.
Willow was able to do it in under five minutes.
Willow was also able to correct errors in a fundamentally
breakthrough way. One of the things that's always been an issue with error correction, which is the other big
barrier in quantum computing, is how can you reduce the error rate as you scale up and add qubits?
So, despite the fun, spectacular number that I told you about, the real
breakthrough, which is what got us the Breakthrough of the Year prize, was that for the first time,
we were able to show that you can do what's called below-threshold error correction, which is,
as you scale up the system, the error rates are actually going down,
which is exactly what you'd want, as opposed to that they're actually going up.
So that was a big deal.
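(An aside for readers of this transcript: the "below threshold" behavior James describes can be sketched with the textbook surface-code scaling heuristic. This is a hypothetical illustration only; the function and every constant in it are placeholders, not Google's actual model or Willow's measured error rates.)

```python
# Illustrative sketch of "below threshold" quantum error correction.
# A standard surface-code heuristic says the logical error rate scales as:
#     p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate, p_th the threshold, and d the code
# distance (bigger d = more qubits). A and p_th here are placeholders
# chosen only to show the qualitative behavior.

def logical_error_rate(p: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Rough surface-code scaling model; all constants are illustrative."""
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th): scaling up the code drives errors down.
below = [logical_error_rate(0.005, d) for d in (3, 5, 7)]

# Above threshold (p > p_th): adding qubits makes things worse.
above = [logical_error_rate(0.02, d) for d in (3, 5, 7)]

print("below threshold:", below)  # monotonically decreasing
print("above threshold:", above)  # monotonically increasing
```

The point of the sketch is the crossover: the same "add more qubits" move helps or hurts depending on whether the physical error rate sits below or above the threshold, which is why demonstrating below-threshold operation was the milestone.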
The other big deal was actually late last year, because all these benchmarks,
including the one I just told you about, these are computations that are fun and great for benchmarking,
but they are computations that are actually not useful for anything.
But last year we were able to show probably the first useful computation:
this is our Quantum Echoes result.
It was a big enough deal
that it made the cover of Nature,
which is great.
Our teams were excited about that.
What that showed was an actual useful computation
for figuring out the spin dynamics of molecules,
which could not have been done any other way.
And we were able to validate the result
with colleagues at Berkeley,
who actually validated the results in a lab with NMR data.
So that was the first example of a useful computation.
So you put all that together,
and you realize that the progress
people thought was kind of decades away is actually happening much faster.
So I actually think we're going to start to see useful applications in the next five or so years from quantum computing.
And that's pretty exciting.
Definitely. We're going to spend much more time, I think, on this show, thinking about that.
Material science, I think, is one of the more overlooked areas of AI research where you can actually find new materials through AI predictive techniques.
So, Lila, talk a little bit about where that stands today.
It goes back to what are some of the root-node problems where, if AI can help us unlock
a basic understanding of the universe around us, it can open an entire field for ourselves and
other researchers to build upon, AlphaFold being one of those,
and AlphaGenome. The one that you've just mentioned, our materials science work,
was really exciting because we basically went from 40,000 known stable crystals to 400,000
that are now being tested in research and in labs.
And what that really means is if you think about things like
how do we build better batteries for electric vehicles
or superconductors for supercomputers,
one way we can do that
is through thinking of new materials.
So we're still, I think, quite early in this stage,
but we believe this is something promising
that could really change how we work and live.
And what do we get if there are new materials discovered?
Is it like something that's maybe t-shirt thinness but winter-coat warmth?
Yeah.
I mean, looking at the background behind you, that's all I can think of.
Yeah, I think this is, like, when you look at everything around us. And like I said, if you think
about even batteries, right, and electric vehicles, how do you improve a vehicle, like the range
of a vehicle or the charging capacity of it? Being able to have better batteries
and not be limited by some of today's physics,
I think things like that are going to be possible
with some of these basic materials.
Okay. Now, weather.
Weather prediction with AI.
Is it actually something that Google's working on pretty diligently?
In many different ways.
Yeah, we actually have a very broad program around weather,
and that's work in Google DeepMind and Google Research.
There's so many things you want to predict with weather.
One is just forecasts.
What's the weather going to be like next week, tomorrow?
There's that kind of work.
So GraphCast, which came out of
Google DeepMind, is an incredible kind of state-of-the-art model for that.
You're also trying to predict other things in weather.
You're trying to predict monsoons, cyclones.
You're trying to figure out when floods are going to happen.
These are weather, or these extreme weather events.
So we actually have a very broad program where we're trying to use the latest AI innovations
to make predictions.
I'll give you an example of one, actually two quick examples.
No, no, do one quick one, because I have to ask you about Suncatcher.
So if you want to talk about Suncatcher, unless your team gives me more time, let's just do one example.
Well, let me do one example because this actually affects people and saves lives.
So it has always been known that if you could predict floods with more than six days advance notice,
you can actually save lives.
The UN estimates it's like you can save probably half the damage that happens.
And so this has always been one of these kind of challenges.
Can you do that?
So our teams, starting about maybe two and a half years ago, built a model to do
that, to predict these so-called riverine floods.
And we tried it.
In Bangladesh, it worked.
Now, fast forward to today, we're making these riverine flood predictions, covering 150 countries
and places where more than 2 billion people live.
I think that's extraordinary.
So that's an example of breakthrough innovation, leading all the way to societal, useful impact.
We're working with the National Hurricane Center as well, where we could predict 15 days
in advance 50 different routes for hurricanes, and we actually tracked Hurricane Melissa.
So you start to think
about what this type of insight might mean for crisis preparedness.
Yeah, and then more mundane things like airplane schedules.
So if you know that storm is coming, you can sort of take care of that in advance.
Okay, last thing, sun catcher.
What is sun catcher?
So this is in classic Google moonshot fashion, where you say, okay, imagine how we
train AI systems today.
And then imagine 100 years from now: how would you imagine we'll be doing it,
given the compute and energy requirements needed to train models.
So you say, 100 years from now, of course we'll be doing it in space,
because the sun has 100 trillion times more energy
and it's available 24-7. Imagine that's probably how we're going to be doing it in the future.
So why don't we try to build towards that future?
So Project Suncatcher is a moonshot in classic Google fashion
where we said, let's start to build towards that.
So we're going to try to put in, we've already done the first,
a few of the key milestones.
We're going to try to put TPUs,
our special purpose AI chips,
in space, and do training runs.
You're sending chips to space?
Chips to space.
This is actually happening.
Yeah.
So the first milestone is we're hoping that in 2027
we'll have done a couple of training runs in space.
This is Project Suncatcher,
with the idea of building towards this future
where, you know, this is probably
how we're going to be doing it.
So people imagine Dyson spheres
and all these things. Of
course, you want to harness the energy capacity in your system, in our case
in our solar system first, and then eventually, ultimately, in the galaxy. You're going to do things
in space.
There's this idea that a former Googler, Ilya Sutskever, had that if we're going to get towards
AGI, maybe the world is going to have to be papered with data centers.
But you put them in space.
Maybe we can have the rest of the earth for us.
So stay tuned.
So our next milestone will be in 2027,
and hopefully we'll have done some training runs.
Would either of you go to space?
I would.
You trust the current spaceships?
Yeah, they're pretty good.
I grew up wanting to be an astronaut.
I failed, obviously.
Really?
I did not, and I will not be going to space.
All right.
I'm more interested right now
in how we make Earth better,
and I think that's where AI can really make a difference.
Yeah, imagine focusing on this planet.
That's an idea.
All right, Lila, James, thank you so much for
coming on the show. We appreciate you. Thanks for having us, Alex. All right, everybody.
Thank you for listening and watching, and thank you again to Qualcomm for having us at your
space here in Davos. This concludes our series of episodes at Davos. It's been a great run of five episodes,
actually, if you include the one we did with Demis, and we'll see you next time on Big Technology
Podcast. Thank you. Thank you. Michael Lewis here. My bestselling book, The Big Short,
tells the story of the buildup and burst of the U.S. housing market back in the 2000s.
A decade ago, the Big Short was made into an Academy Award-winning movie.
And I'm bringing it to you for the first time as an audiobook narrated by yours truly.
The Big Short's story, what it means to bet against the market, and who really pays for an unchecked financial system, is as relevant today as it's ever been.
Get The Big Short now at Pushkin.fm slash audiobook, or wherever audiobooks are sold.
