WHOOP Podcast - How AI Is Shaping the Future of Medicine with Vivek Natarajan
Episode Date: May 6, 2026

On this month's episode of the WHOOP Podcast Longevity series, WHOOP SVP of Research, Algorithms, and Data Emily Capodilupo sits down with Google DeepMind AI Researcher Vivek Natarajan to explore how large language models are transforming healthcare and biomedical research. Emily and Vivek discuss AI's potential to both accelerate scientific discovery, such as developing new treatments and understanding diseases, and expand access to care globally by delivering medical guidance more equitably. The episode examines how AI can augment, rather than replace, doctors by improving diagnostics, increasing efficiency, and enabling more personalized care, while highlighting the importance of trust, safety, and human connection as these technologies evolve.

(00:47) Understanding The Intersection of Large Language Models and Biomedicine
(02:26) How To Approach AI in the Healthcare Landscape
(05:52) What Is The Role of the Doctor as AI Becomes More Popular
(09:39) What Does The Future of AI Healthcare Look Like?
(12:51) Clinical Care: Where Can AI Assist?
(16:03) Exploring The Dangers of AI
(18:43) Process of Teaching Physicians and Users To Use AI
(22:22) What Are The Areas of Concern Over AI?
(24:12) Regulation and Pace of Development
(26:51) Gaining and Maintaining Trust While Building AI Algorithms
(30:38) The Impact of AI on Research
(37:44) Biggest Misconception Regarding AI in Healthcare

Follow Vivek Natarajan: LinkedIn
Support the show
Follow WHOOP: Sign up for WHOOP Advanced Labs | Trial WHOOP for Free | www.whoop.com | Instagram | TikTok | YouTube | X | Facebook | LinkedIn
Follow Will Ahmed: Instagram | X | LinkedIn
Follow Kristen Holmes: Instagram | LinkedIn
Follow Emily Capodilupo: LinkedIn
Transcript
I'd be lying if I said that the healthcare profession is going to remain the same as it is right now.
Ultimately everything in healthcare and medicine moves at the speed of trust.
The wait time is six weeks or whatever to see a primary care doctor.
If you can get expert medical guidance and advice, why should it be that way?
How do we use AI to make that more immediate and available where you are?
I'm mostly motivated by trying to solve these problems, giving people the options,
and also be very precise and rigorous about what these technologies are meant for.
You're writing the future, right?
It's our collective responsibility to make sure that the future that actually manifests is much more efficient use of the human attention and the human time that we have in the healthcare system.
What we really want to do is bring forward the future of science and medicine.
Develop these incredible new interventions and therapies and cures for diseases and unlocking longevity.
Hi everybody. I am Emily Capodilupo. I am WHOOP's Senior Vice President of Research, Algorithms, and Data.
Today I am joined by the incredible Vivek Natarajan from Google DeepMind, who is going to talk to us all about the intersection of large language models and biomedicine.
Vivek, thank you so much for being here with me today.
Thanks, Emily. It's a real pleasure to be over here.
So in just a couple of sentences, can you explain to people who might not understand what is the intersection of large language models and biomedicine?
We are entering this very interesting era where I think AI is going
to profoundly transform the landscape of science and medicine. And so what that means
is two things. One is making all the scientific advancements and the discoveries that'll allow us
to like, you know, better treat diseases and also like unlock the secrets of like longevity and
aging and things like that. And so like AI agents for like scientific discovery and biomedical
discovery. But then the second thing, which is also equally exciting, is that now I think we are
going to have like AI agents that will also allow us to deliver that new knowledge that we are
discovering, the cures that we are discovering, the interventions that we are discovering, and deliver
that in a very equitable manner, not just to like, say, people around us over in Boston or like
the United States, but maybe to like the billions of people like, say in like the rural villages
in India or in like remote communities in Africa. And I think all that's going to happen over the next
decade. And so that's the real exciting part, like this confluence of AI and science and medicine.
that's truly going to help us, like, unlock human potential at scale.
You know, I love that you mention that because so much of what we hear about going on in the longevity space
seems like a playground of billionaires trying to, you know, cheat death.
And, you know, I think there's all these kind of funny stories about like Brian Johnson spending $2 million a year on his like personal longevity journey.
And that's just completely inaccessible to, you know, well over 99% of the world's population.
And then, you know, the other kind of biggest thing going on in longevity is AI,
which in its very nature is incredibly equitable.
So I'm curious, like, can you talk more about that and sort of how does Google think about
equity and access in the work that you're doing?
Yeah.
I think in general, like, Google has always been about, like, democratizing knowledge
and making it available to everyone everywhere.
And I think with AI, that mission has only,
in some ways, been accelerated.
And so, like, yeah, I think for us, like, the motivation is actually not, I mean, definitely,
like, we want to, like, advance health care for everyone.
But maybe it's worth thinking about, say, you know, that person, that mother of three
in a rural village in India who maybe has, like, a chronic condition like diabetes.
And we know that that is an age-accelerating disease.
And it puts, like, someone like her at, say, risk of, like, you know, kidney failure or, like,
heart conditions, or even something like blindness. But for someone like her,
you know, sometimes seeking that medical advice or that care
to manage that age-related chronic condition can be at the expense of something that's perhaps
even more important. And so in places like India and Africa, often to, you know, go see a doctor,
you might have to walk like, you know, 10, 20 kilometers in extreme heat. And that might come
at the expense of like a day's wage. And so it's like a choice between prioritizing my health,
taking care of my health, or putting food on my family's table at the end of the day.
And for, I think, most of humanity today, it is that choice.
And we want to not make that a choice anymore.
We want to, you know, build systems, build technology and build AI that really democratizes
all the cures and the advancements that, like, some parts of the world have, but make it
available to everyone so that, you know, people can maximize their health spans.
Because, I mean, managing something like diabetes is quite routine for us over here.
And we want to make that routine for everyone everywhere.
You know, I think it's so interesting that you bring that up.
And I think it's also worth mentioning, I heard a stat and full disclosure.
I don't have a source for this.
So it might be wrong.
But it sounds about right, which is that even in a city like Boston, it costs about $70 to see your primary care physician, not because of the visit itself, assuming that insurance actually pays for the visit, but just in terms of you have to get there.
Maybe you need to pay for child care.
You need to, you know, lost wages in terms of time not spent working.
And so if you kind of add all of that up, you know, something that we think of as free health care or insurance-covered care is actually fairly expensive.
And so it's on a different scale.
But I think this problem exists broadly.
And then again, even in a city like Boston, which is, you know, to some extent considered one of the best cities in the country for accessing health care.
You know, it might take six weeks to see your primary care physician.
And so I think it's really interesting, this idea of how do we use
AI to make that more immediate and available where you are.
In some ways, this is not a new concept, right?
Dr. Google has existed for a while.
I think the thing that's really exciting and interesting is there's data that shows that when
you Google your health problem, you sort of get the right answer about 30% of the time.
Whereas I think AI promises to make those numbers a lot better.
Do you have a sense of with Gemini or with the work that you're doing?
Like, how good could this be?
And I think on that note, you're an AI researcher.
You're not a clinician, but you're working on really exciting medical breakthroughs.
And so what is the role of the doctor as AI goes from, you know, Dr. Google being helpful, but only about a third of the time to sort of the promise of what you're seeing in your lab?
Look, I've also, like, you know, used search engines, including Google, before.
Oh, same.
Everybody has.
Yeah.
And it is true.
Like, I think if you Google for some symptoms, it'll probably give you the most
extreme answer, because people like extreme versions of results.
But I think what we are seeing right now is, as AI technology improves, you know,
as like we build better large language models, but also like agentic scaffolds around that,
the systems are getting like much more robust and they are able to like do very nuanced
clinical decision making. So we had like a couple of papers that were published in nature earlier
this year. And these were evaluating the AI system that we built, and we trained that specifically
to deliver expert medical guidance. And so in the first of these studies, the question we
asked was, how well does this AI system do on its own in performing medical consultations
with patients? And what we saw, the results were quite striking. And so it definitely excelled
at clinical reasoning. And it was even generating better diagnoses
and care plans, compared to actual expert human doctors themselves.
But perhaps a more striking result or the profound result
was the fact that the AI was actually rated to be better at empathy than the human doctors.
And so, you know, the AI has like not only mastered the science,
but also the art of like medical conversations and medical dialogue.
So that is one aspect.
But then going to the second part of your question,
I don't think this is about replacement.
And I think it is about partnership.
And so in the second of these papers and these studies,
the question we asked was, do these AI systems, do they actually make good doctors even better?
And what we saw in our study was that it was a clear yes.
Like when doctors were effectively using these AI systems as a partner, they were getting much more accurate in terms of like solving these complex diagnostic cases compared to working on their own.
So I think that kind of, like, shows the potential, in the sense that, like, the AI can, you know, take a lot of that away; sometimes doctors pay attention to
things that really don't matter.
And I think the AI systems are getting to a stage where you can, like, trust them to do
those things well.
And then the doctors can focus on the things that actually truly matter.
And if you ask most doctors, they will tell you that what they want to do is to, like,
spend more time with their patients.
It's not just about the five-minute encounter where you just listen to their symptoms,
come up with a decision, and then you, you know, you're on to, like, you know, typing things
on a computer.
No, they actually want to spend time, like, you know, learning about you, your story.
Because that gives them better nuance to, like, take care of you,
like they understand the context better.
And I think that is what we are going to be able to do with these AI systems,
which is to give time back to the doctors to be able to like take better care of their patients.
I fully agree with that.
And, you know, I heard Eric Topol, he has this great quote
everybody loves to use in this context, which is, you know,
AI is not going to replace doctors.
Doctors who use AI will replace doctors who don't.
And I couldn't agree more in general.
But I'm curious, like, you're really on the bleeding edge of trying to replace doctors
or in some ways, right now I would say like intensely augmenting them.
When you kind of peek five years out, ten years out, do you sort of see this world where we go
from, you know, my primary care physician is ChatGPT, and doctors are sort of the specialists
for the weird stuff? Or is even that primary care physician still employed and still really
critical in this process? I think it's hard to predict the future and how it unfolds.
Well, I think you're writing the future, right?
We are building parts of the future, but I think it's also up to, like, people around the world.
Everyone is going to have, like, agency.
And so I think for us as technologists, we can't be prescriptive to people about how to use technology,
but rather I think we should be more about giving, like, people the options.
And also be very precise and rigorous about what these technologies are meant for.
And, like, communicating those aspects to the end users.
And then it's up to, like, I think the end users to, like, make their choices over here.
And so I still believe that, like, it is going to be about, like, augmentation.
Right.
I think so today if, like, a doctor is, like, say, taking care of, like, you know, 100 patients,
maybe with the help of AI, they might be able to, like, take care of, like, 10,000 patients, right?
And I think that is great because, as you said, like, in Boston, like, the wait time is six weeks or whatever to see a primary care doctor.
I mean, why should it be that way?
It should not be, right?
And so, like, if you can get, like, expert medical guidance and advice, and then you can tell the doctor which cases to prioritize, which person
to see, then I think we can make much more efficient use of like the human attention and the
human time that we have in the healthcare system. And so I think that is the opportunity ahead of us.
But again, I think I'd be lying if I said that, like, the healthcare profession is going to
remain the same as it is right now. I think that is not true. I think professions are going to change.
And I don't think we are going to be able to predict what that looks like because there's
always going to be, like, you know, second-order and third-order effects. But I mean, what my hope really is,
and I'm like drawing from what I'm seeing actually in my day job of like, you know,
writing code and software.
And today we have these amazing AI agents that, you know, can generate code for you
if you give them, like, natural language instructions.
And I think that in turn makes me, like, a lot more productive.
Like, I can do a lot more things compared to any other point in my career as, like,
a software engineer and as a researcher.
And that makes me incredibly excited because my ideas, which were previously stuck in
my head, now I can, like, bring them to life as soon as possible.
And so I really hope that we can bring that same kind of joy to, like, other professions,
whether you're a doctor or a scientist.
And so, yeah, I mean, it's pretty obvious that I'm an optimist.
But I think, yeah, we do have a responsibility to make sure that that is the case.
What's up, folks, if you are enjoying this podcast or if you care about health, performance, fitness,
you may really enjoy getting a whoop.
That's right.
You can check out whoop at whoop.com.
It measures everything around sleep, recovery, strain.
And you can now sign up for free for 30 days.
So you'll literally get the high performance wearable
in the mail for free. You get to try it for 30 days, see whether you want to be a member.
And that is just at whoop.com. Back to the guest.
I want to get your take on not just sort of in general, like you've spoken broadly about
augmentation and the clinician still being important. I think there are obvious places
where AI is going to take a lot of sort of busy work of doctoring out, you know, like the note
taking, right? The sort of speech to text and all that
stuff should be totally automated. That's more or less a solved problem at this point. And I think
that that's a huge, like, quality of life or joy of work improvement because doctors don't become
doctors to write notes. They want to treat people. What do you see as like the places, you know,
you mentioned that AI is even better at empathy, right? And so, you know, we've thought historically
like bedside manner that's very human. Well, if AI is better at that, what are the places that are like,
this is going to stay very human and like that last frontier? And what are the
other places beyond kind of note transcription and diagnostic support that you see kind of AI
augmenting or replacing sooner? I think in some ways it's just going to open up like new opportunities.
So think about the stuff that all of you in this company are doing. Like, you know, you're building
these amazing, like, wearables that's, you know, helping measure and track and, like, helping people
on their fitness journeys, right? But when you go to see a doctor, they actually have like no
insight or no clue for the most part about all the stuff about you, right? But if you had like an AI
agent that could, like, you know, very precisely ask the right kind of questions,
or also, like, you know, maybe do the right kind of data analysis to, like, you know, summarize
and present that information to the doctor, then they'll be like much better place to like coach
and guide you in your healthcare and your wellness journey, right? And so those are some of the
things that we are not doing today that I think AI will open up the opportunity to, because like
we're measuring and we're collecting a lot more data and individuals now have the agency to then
like put AI agents on it and, like, summarize it. And then doctors now have the option to look
at that data modality. And I think that gives you, like, a rich insight into, like, what the person's
doing, their lifestyle, their inner biology and things like that. And in turn, I think that'll allow
us, like, a lot more opportunity to be, like, more precise in, like, the medical guidance, the fitness
guidance that we are giving to people. And so again, I think it's just that like when you have these
new powerful capabilities and technologies, I think it's kind of limiting to think that like our
professions are going to cap at the kind of things that we are doing. I think we are generally
going to be a lot more expansive and ambitious because I mean, we want to do more. I mean,
that's like inherently human.
We are explorers.
We want to, like, you know, become a
multi-planetary species and, like, you know, go throughout the solar system.
And so, like, inherently, when you give us, like,
us humans generally, like a powerful tool,
I think we're going to try and, like, strive for more.
And so, again, that is why I still remain, like, very optimistic
that, like, you're still going to have, like, the human aspects of care.
But, like, what we do today is not going to be what we're going to do,
like, five years down the line.
It's going to, like, look very different.
But it's still going to be, like, a very human endeavor.
And then the second part is, again, like, sure, I mean, AI is good at, like, empathy.
it's like, you know, remarkably patient, like, whether you call it at like 9 a.m. in the morning
or, like, you know, at midnight, it's going to answer to you in the same way. But oftentimes
when someone wants to see a doctor, they're maybe not like looking for medical guidance or
medical advice. They just want to hear the human voice on the other side because they might just be
lonely. And I think, again, those aspects, you can mimic a lot of that with AI, but it's not going to
go away. It's interesting that you say that, like, people really want that human voice because I think
we're already seeing people using LLMs to be that human voice in ways that I think are a little
bit scary because of how poorly understood they are. And there's obviously lots of horror
stories around kind of people using AI as a therapist, a best friend, a romantic partner,
and then that getting weird or dangerous. And do you think that as we become more like
AI native and comfortable with it, that there's still something uniquely human we're going to
want to come back to? Where more people kind of, I think there was some crazy stat about Gen Z
having had a relationship, a romantic relationship, with AI. It's staggeringly high, where it's like
people are lonely. We know there's, you know, especially accelerated by the pandemic, but this massive
loneliness epidemic and this, you know, 24-7 available chatbot who's remarkably human and remarkably
better than human at empathy and will kind of chat with you. And so do you think that
the specialness of humanness is going to be preserved or are you seeing in like the work you're doing
that like the AI is just going to be better at being human than humans are and we'll kind of switch over?
I mean, my opinion, like, you know, from interfacing and interacting and trying to build these systems, is that
they're definitely intelligent. But I think that intelligence is very different and very complementary
to humans. So I mean, oftentimes they surprise me in the insights and the
creative responses that they produce. But there are also like several times during the course of the day
where I'm like, come on. This is dumb. It shows, like, a lack of reasoning that you would not expect from humans.
And so in that sense, I mean, we are trying to like, you know, and we now have this like natural language interface,
which makes it very easy to like communicate with these AI systems. But I think they're fundamentally a different kind of intelligence.
And they are complementary to us. So they can do things like, for example, you know, read up, like, the entire Wikipedia in, like, a couple of hours and, you know,
summarize and get that to me. But they also can't do certain things that we humans take for granted.
Again, I think if you're looking for that human connection and if you understand what it truly
means to be a human, then I think you'll find those aspects missing in an AI. But when you don't
have any option at all, then of course you'll fall back to the solution that you have
available at hand. I think that's what we're roughly seeing right now. And so I think for us,
the question as a society is to better understand a user's need and make that
available to them, whether that's through technology or AI, or whether that's through connecting
them with the right kind of humans that they need to be in contact with.
We had Dr. Dan Henderson on the podcast, who's a longevity-focused primary care doctor.
He shared a stat that from when something is published as being, you know, sort of a new finding
until it becomes, you know, standard of care in healthcare can take, on average, about
17 years, I think was the number. And it seems like AI is getting adopted faster than that.
But I'm curious if you've been a part of any of the work to sort of translate, here's the research, here's what we know, here's how AI can productively augment care.
How are we training physicians to use this safely and appropriately? Because I think like a lot of times we see people get really excited and you can try a couple of things with AI and it seems like it's omniscient.
And then, you know, you see people just, like, one-shot term papers, and they don't even realize that it's got, like, stupid things in it because they didn't read it.
And so are medical schools sort of teaching doctors how to use AI?
What's going to be the process to make that part of the curriculum?
And is that something that is part of what your team thinks about at all or is it more just in the research side?
I think there are, like, extended parts of my team at Google where we do think about, like, how AI is going to, like, transform medical education, but also education
at large. And I think it's definitely very important to, like, you know, teach students, like,
people in universities and colleges and grad schools, but also, like, generally, like, everyone
in the workforce today as to, like, how to use these AI systems, like, productively. So, yeah,
I mean, I think if you're, like, blindly looking at, like, social media or, like, you know,
following the influencers, then you'd probably, like, fall into one of these two camps. Like,
one is, like, you know, AI is this super powerful thing. Like, AGI is over here. It's going to,
like, make things, it's going to do everything that you're doing already. Or the other
camp is it's no good. No point in using it. It's going to hallucinate
all the time. But I mean, like, we all know that the truth is somewhere kind of in the middle.
And so there are like many, many use cases where I think AI can be, like, extremely productive
and, like, help you. But then there are also going to be scenarios where it's not so good.
And so you as like a professional, as a student, as like a doctor, I think it's very important
to, like, have those interactions and, like, really understand. And so in some ways,
it's almost like a teammate or like a friend or like someone in your life as well, right?
I think there's going to be, like, strengths and weaknesses. And something that's human is our ability
to, like, adapt very quickly. And I think we need to kind of, like, do that with this technology as well.
So yeah, I think that's going to be very, very important. And, and especially when we are
thinking about, like, you know, deploying these technologies in, like, you know, real-world care
delivery settings, we try to make sure that, firstly, there's, like, enough of a safety net around.
So like enough of like physicians around to, you know, catch if like something goes wrong or like,
say if it's a medical emergency and the AI detects that, there's immediately someone backing up
to be able to, like, you know, provide the human care that is needed. But then at
the same time, like, we also coach and train the doctors who are part of the safety net
to understand, like, okay, in what sort of scenarios do they need to, like, step in and take control.
And so in some ways, I would say that this is like, you know, the safety driver phase of, like,
autonomous cars where, like, you know, maybe like five years back you would see, like, these cars
around Silicon Valley driving around, but there was always, like, a safety driver around.
And now I think they've got into a stage where you don't need that person to be, like, physically
present, but you still have those remote operators who can, like, step in if, like,
something goes wrong. And so that is the stage where we are with these systems, where we are
maybe gradually replacing some of the direct synchronous oversight that we need, but we still need
that oversight from physicians. And the physicians in turn have to understand like, okay, when,
you know, things can go wrong over here. And I think it's also like the patients, like eventually.
So we have to, like, coach everyone in the ecosystem to understand. So we've established that
you're an optimist about this space. Yeah. I'm curious, you are just working on some incredibly
cool things. And I think, you know, a lot of it's too secret to talk about here. What are you
losing sleep over? What concerns do you have as AI kind of rapidly steps in and takes this
really unprecedented kind of new role in healthcare? Yeah, I think it's a great question. Yeah. So for
me, I think the things that I lose sleep over are actually not about the technology per se.
I think it's mostly about the problems that we as, like, humans and as society, we have not
yet solved. You know, we still have, like, you know, aggressive diseases, like Alzheimer's or,
like, Parkinson's that, like, despite, like, 30, 40 years of research and, like, billions of
dollars spent, we are not, like, very close to, like, solving, right? Same with cancer. And same with, like,
the thousands of, like, rare diseases that cumulatively affect, like, millions
of people. I mean, there are, like, families who are, you know, facing these devastating
scenarios where kids are diagnosed with these rare diseases and they don't know what to do.
So I think those are the kind of problems that keep me awake at night. And so, like, the question
I keep asking myself is, like, what can we do to accelerate the solutions?
And so in that sense, like, I mean, LLMs, like the Gemini models that we are building
or the co-scientists that we're building, they're all like tools.
But I'm mostly motivated by trying to solve these problems so that families don't have to
face these scenarios where they have to hear like a fatal diagnosis of their loved one.
Or really, I think like I've gone through some of those things and I just hope that like
we can accelerate like the future of medicine forward so that, yeah, everyone just gets to spend like more time with their
loved ones in like a nice and healthy manner. So it sounds like what you're saying, which is such a
beautiful answer, you know, that you're losing sleep over. Are we moving fast enough? Yeah, that's a nice way
of like framing. Yeah. And, you know, I think it's so interesting to hear that because I think there's a
side of this of it's moving at such an unprecedentedly fast rate already that it's the law and the
regulations and even some of our understanding of what does this do well, what does this do poorly,
is having trouble keeping up because that's the sort of human eval on top of the rapidly evolving
AI. I'm curious, have you guys run into yet, like the regulatory side of this and how the FDA
thinks about, you know, you can't possibly go through the multi-year, you know, de novo
clearance process when your models are updating on a quarterly basis. So how does that work?
Yeah, I mean, I think what we can do is be, like, very rigorous and
responsible around, like, how we are building and, like, deploying these systems. And so I mean,
I think it's good to ask these questions like why do certain rules and regulations exist in the
first place? And if you just, you know, think about it, you'll figure out that that's actually ultimately
just to, like, protect the patient, and patient safety is, like, paramount. I think for us that's most
important and we want to do things without taking any shortcuts. Because I think if we are able to do
that, then I think the future that we all imagine is going to come sooner rather than later.
But if you try and take shortcuts, then I think we're going to, like, postpone the future.
What might have, you know, naturally happened in like five years might take like 15, 20 years
because you've fundamentally broken trust. And so that is the thing that we, I think, need to avoid.
And yeah, I think it's also like beyond like, you know, the frontier labs or the researchers.
It is a dialogue. It's a continuous dialogue with society with like, you know, policymakers,
the regulators, doctors, patients and people at large. And so it's a huge part of the responsibility
to be able to like, you know, continuously, like,
educate and disseminate knowledge about the frontier
in a rigorous manner, in a responsible manner.
It's a funny evolution of, like, the kind of work that we do.
I mean, but I think we need to, like, take care of all of that
because ultimately everything in healthcare and medicine moves at the speed of trust.
Introducing specialized panels,
an all-new way to dial deeper into your health,
available through WHOOP Advanced Labs.
Whether you want to understand your cardiovascular risk,
get to the root of energy crashes or make sense of hormonal symptoms,
choose from five panels of blood work that help you understand what's driving how you feel.
Unlike other tests, WOOP integrates your lab results with your WOOP data.
See how your habits and behaviors influence your biomarkers and get clear guidance on what to
change to improve your results.
To join the wait list, visit our website or the health tab in the Woop app.
You know, I really appreciate you saying that, because I think the race is totally on for the best LLM across Google, OpenAI, Anthropic, xAI, and all the major labs.
Like there's so much pressure, I think, to go faster and do better.
But it is very easy to lose trust.
I'm curious, how are you navigating evaluation when you're thinking about rare diseases, where you don't have large data sets to validate on, and things like that? That might be confidential, but I'm curious what you're comfortable sharing.
I think it's one of the harder problems to solve right now,
especially, like, evaluating scenarios in the long tail.
I wish we had, like, a good, perfect solution,
and I don't think we have that yet.
But one thing that's super promising to me is the ability of some of these AI systems to almost act as simulators and world models. I think that is now starting to manifest at different scales of biology and medicine.
And so we are having these models, which I think some people in the community like to call virtual cell models, that can very precisely predict what happens when you make an intervention, whether that's a small molecule or a genetic perturbation, and tell you what would happen.
And I think that is super cool.
And I think we need to kind of like build those technologies and like, you know, make them more robust.
They're not there yet.
I think it's still a few years of research away.
But I think you can like gradually expand on that.
Going from virtual cell models to virtual tissue models, to virtual organs, and then maybe even complete digital twins.
So I think that part is going to be quite exciting, if you're able to build that and simulate that with AI. But then on the other side, in clinical medicine, especially with all the progress in large language models, we're also seeing evidence of being able to simulate, through LLMs, different patient personalities, or different combinations of symptoms and socioeconomic profiles, and things like that. It's quite funny, because I think people tend to underestimate how much information is on the internet. I mean, yeah, people tend to make fun of these Reddit forums and things like that, but they often have those golden nuggets of information that might actually be the secret to solving something for someone. They're incredibly personalized. And so the question is, how do you extract that in a contextual and nuanced manner?
All that information is encoded in these models, or they're able to retrieve it. And so if you're able to tap into that, you can create these incredibly rich patient personalities that can simulate patients across the thousands of diseases that we know, and across combinations of diseases and symptoms. In some ways, the research AI co-physician that we are building has, in simulation, already seen hundreds of millions of patients, across diseases, across symptoms, across personalities, across socioeconomic backgrounds. And even though it's in simulation, and we're now teaching it to drive in the real world, I would say that's already the world's most experienced doctor. Because if you think about ordinary doctors, normal doctors, they would probably see somewhere between 10,000 and maybe 50,000 patients in their entire lifetime, and this has already seen hundreds of millions, and it's still very early. So if we get this right, and if we're able to bridge the biology, the clinical medicine, and the human aspects around all of this, then I think we're going to be creating these very rich world models and digital world environments, and we're going to be able to teach these AI systems to learn and experience what it is like to practice medicine and understand biology. And I think that's going to be incredibly rich, because it's all in simulation; it all happens in a safe manner. And if you're able to get that right, and bridge the gap between simulation and reality, that's going to give us a very safe pathway towards developing the kinds of interventions and technologies that we
want. So everything you just said is incredibly exciting. I want to make sure people understand
how exciting this is. So I'm going to back up a couple minutes. Because I think we spent the bulk of
the first part of this podcast talking a lot about sort of augmenting that patient
clinician interaction, so very point of care focused. And then you started talking about simulating
things on the cell level. And that's really augmenting medical research in a different way.
And we haven't spent enough time there. So before we wrap up, I do want to make sure we touch
on why that is so incredibly exciting. So what does that actually mean? And without, I guess, spoiling anything you're not comfortable sharing, what are the types of diseases that this kind of cell-level simulation is most promising for advancing?
I think it's still, like, very early over there.
Okay.
The kinds of data sets that we're seeing being generated right now are primarily cancer-focused. And I think that's not too surprising, because that's where a lot of research happens, that's where a lot of attention is. And, yeah, it's also one of the biggest problems that we as a society face. But the technologies being developed around that kind of genetic data, I think, are transferable. And so we will see that progress towards other diseases that are a huge burden on society.
I think for rare diseases it'll be a little bit more challenging. But if you're able to learn models that have good representations, then I think they can bridge and learn across modalities of information and fill in the gaps as needed. That, I think, is also going to be incredibly powerful. And I think we could spend another full podcast talking about that. But that will probably be the way to go for rare diseases as well, where we learn from these data-rich environments, and then these models become more intelligent and strong, and then they learn how to learn.
Right.
And so it's almost like how we humans learn. I think we don't need millions of examples of something to master a given task. Rather, five examples and very specific natural language feedback is generally enough for us. And that is the kind of thing we need to teach AI systems to do. And that is what I think a lot of the collective field is working towards: giving them the base knowledge they need, but also teaching them how to learn.
It's all very exciting.
And I am totally here for that follow-up podcast when there's more that you're ready to share.
So you've mentioned a couple times this co-scientist project.
I think what you're doing there is incredibly cool.
Can you talk a little bit more about it?
Yeah.
So I almost see the AI co-scientist and the AI co-physician as two sides of the same coin.
So with the co-scientist, what we really want to do is, again, bring forward the future of science and medicine: develop these incredible new interventions and therapies and cures for diseases, and unlock longevity.
And then the AI co-physician is all about like delivering that to people at scale.
And so, yeah, with the co-scientist, it's mostly about building AI systems that can think, and specifically do the hard, System 2-style thinking that is the hallmark of real science.
And then once you're able to like teach the AI systems to do that, then hopefully you can
then partner them up with, like, real human scientists and augment and accelerate them.
And hopefully the pair of them together in partnership can, you know, really, like, lead to,
like, important new breakthroughs and discoveries.
And so that is what, like, we are essentially building towards.
And it's already starting to lead to impressive results, where researchers have used the system to identify new targets to reverse liver damage, identify new drug repurposing candidates for complex forms of cancer, and even design proteins for more efficient cellular rejuvenation.
And in some ways, this is only the beginning.
And so it is, like, incredibly exciting.
But again, I just want to bring the focus back to the fact that we have to do both. It's not enough to just advance the science and all these amazing new cures; we also have to figure out how to deliver them to everyone, everywhere. And that, together, is the focus of my team at Google.
Yeah, I really appreciate the focus on that translational element, because there's so much cool stuff going on in AI for cell biology and all these things that don't have clear paths to actually touching real people and impacting their lives. And I think what's so special about the work you're doing is that it's designed very intentionally for that translational element, for getting it into real people.
And I follow this space very carefully, because I think it's so interesting to see how people are excited about AI, afraid of AI, and how people are letting it into their lives: the things people are so excited to just outsource and delegate to the AI, and the places where people are afraid, like, "oh, I would never do XYZ, because I don't want that data on the internet," or those kinds of fears. And I know this is a little bit outside of the work that you do, but where are you finding pushback that feels fear-driven and silly? What are the macro elements that are making what you're doing more challenging? And what do you wish people understood, that they don't, that drives that fear?
What I have come to realize is that there is no single ground truth of people's opinions, right? In some ways, when you look at the media or social media, they tend to echo and amplify the loudest voices. But once you go on the ground and talk to real people, whether that's patients or scientists or people generally, you'll see a huge diversity of opinions. And I think it is very important to respect that. And the best way to do that is not by being very prescriptive about how you would want technologies to be built, but rather by giving people the choice, giving people the agency.
So we build tools.
We clearly say, okay, this is what you can use it for; this is what it's good for. And the way you talk about these things, promote them, or market them, I think, has to be very rigorous and scientifically driven. But ultimately the choice is with the user.
And so if you go on social media, or if you read the media generally, you would think that people are super concerned about, say, sharing their health data. But sometimes I hear the opposite, where people tell me: just take my data, because if it helps someone in the future, or helps advance a cure for this disease, I would be incredibly thrilled. I'm not saying the other opinion doesn't exist; people are scared about sharing data with tech companies. But we have to respect that there's a great diversity of opinions. And so ultimately it's about respecting that and giving people the agency to do the things they believe in and the things that matter to them. And, yeah, that is what I personally believe we should be doing more of.
Yeah, I mean, again, I think there's a lot of misconceptions around where AI is. There's a lot of hype, but there's also a lot of fear. And I just hope that people spend maybe a bit more time, not just looking at viral features like image generation or video generation, but trying to figure out: okay, is it really helpful in my day-to-day tasks? And it's okay if it's not helpful. But having a better sense of what it can do, rather than just reading the media or looking at social media, would be incredibly helpful.
Just because you said it, as we wrap up, what's the biggest misconception that you
would love to squash for all of our listeners?
One thing I'll say is that the future is going to look very interesting, and it's going to be very hard to predict. If I just think probabilistically, every stream of possibility from now to the future is possible. And ultimately, I would say it's our collective responsibility to make sure that the future that actually manifests is the one that is globally optimal for everyone. So in that sense, yeah, everything is possible, because we are on the slope of an exponential; there's a convergence of technologies happening, AI being one of them. But ultimately, the more you learn about these technologies and what is possible, the more agency you'll have to influence things and make sure the future is a bright one.
I appreciate this attitude that you've had during this podcast. It seems like you're fairly convinced, and I definitely am, that the world is about to change in a very meaningful way. And you have this very beautiful, healthy respect for the fact that there's no guarantee about what that change looks like, which I appreciate, because I know how hard you're working to drive that change in a certain direction, and in a direction that I hope is actually realized.
This importance of responsible AI that you keep bringing up, I think, is so, so important, because we've seen AI famously do creepy things. So we do know that it can go in directions that well-intended people didn't foresee, and also that it can bring really exciting improvements to healthcare, improvements to health and well-being, and democratize access to care, which is something I'm so excited about, and part of why I've been at WHOOP for 13 years. Because I think people have the right to this information and need this information, and, you know, hopefully wearables can be a part of that too.
Definitely.
But I think it's important to be optimistic, but also to have that healthy fear that there's no guarantee this is good for us. And so we do need to continue to invest in these areas.
So just want to say thank you for the incredible work that you do.
And thank you so much for taking the time out today.
You know, you flew in from California for this.
So thank you for being on the podcast today.
And, you know, likewise, a huge fan of everything that happens within WHOOP, all the work that you're doing.
If there's any one single takeaway, it's that I'm an optimistic person, and I think that with the technologies we are building, we as a society and humanity as a whole will collectively figure out how to harness them in a very positive manner.
I certainly hope you're right. Fingers crossed.
If you enjoyed this episode of the WHOOP Podcast, please leave a rating or review. Check us out on social @WHOOP and @WillAhmed. If you have a question you want answered on the podcast, email us at podcast@whoop.com or call us at (508) 443-4952. If you're thinking about joining WHOOP, you can visit whoop.com to sign up for a free 30-day trial membership. New members can use the code WILL, W-I-L-L, to get a $60 credit on WHOOP accessories when you enter the code at checkout. That's a wrap, folks. Thank you all for listening. We'll catch you next week on the WHOOP Podcast. As always, stay healthy and stay in the green.
