Microsoft Research Podcast - What AI's impact on individuals means for the health workforce and industry
Episode Date: May 29, 2025
Ethan Mollick and Azeem Azhar, thought leaders at the forefront of AI's influence on work, education, and society, discuss the impact of AI at the individual level and what that means for the health care workforce and the organizations and systems in medicine.
Transcript
In American primary care, the missing workforce is stunning in magnitude. The shortfall estimated
to reach up to 48,000 doctors within the next dozen years. China and other countries with
aging populations can expect drastic shortfalls as well. Just last month, I asked a respected
colleague retiring from primary care who he would recommend as a replacement. He told me bluntly
that, other than expensive concierge care practices, he could not think of anyone even for himself.
This mismatch between need and supply will only grow and the U.S. is far from alone among developed
countries in facing it. This is the AI Revolution in Medicine Revisited. I'm your host, Peter Lee.
Shortly after OpenAI's GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative
AI technology could have. But because we wrote the book when GPT-4 was still a
secret, we had to speculate. Now, two years later, what did we get right and what did
we get wrong? In this series, we'll talk to clinicians, patients,
hospital administrators, and others
to understand the reality of AI in the field
and where we go from here.
The book passage I read at the top is from chapter four,
"Trust but Verify," which was written by Zak.
You know, it's no secret that in the U.S. and elsewhere,
shortages in medical staff and the rise of clinician burnout
are affecting the quality of patient care for the worse.
In our book, we predicted that generative AI
would be something that might help address these issues.
So in this episode, we'll delve into how
individual performance gains that our previous guests have
described might affect the health care workforce as a whole. And on the patient side, we'll look into the influence of generative
AI on the consumerization of health care. Now, since all of this consumes such a huge
fraction of the overall economy, we'll also get into what a general purpose technology
as disruptive as generative AI might mean in the context of labor markets and beyond.
To help us do that, I'm pleased to welcome Ethan Mollick and Azeem Azhar.
Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an Associate
Professor at the Wharton School of the University of Pennsylvania.
His research into the effects of AI on work, entrepreneurship, and education is applied
by organizations around the world, leading him to be named one of Time Magazine's most influential
people in AI for 2024. He's also the author of the New York Times bestselling book Co-Intelligence.
Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential
voices on the interplay between disruptive emerging technologies and business and society.
In his bestselling book, The Exponential Age, and in his highly regarded newsletter and
podcast, Exponential View, he explores how technologies like AI are reshaping everything
from healthcare to geopolitics.
Ethan and Azeem are two leading thinkers on the ways that disruptive technologies, and
especially AI, affect our work, our jobs, our business enterprises, and whole industries.
As economists, they're trying to work out whether we are in the midst of an economic
revolution as profound as the shift from an agrarian to an industrial society.
Here's my interview with Ethan Mollick.
Ethan, welcome. So happy to be here. Thank you. I described you as a professor at Wharton, which
I think most of the people who listen to this podcast series
know of as an elite business school.
So it might surprise some people that you study AI.
And beyond that, that I would seek you out
to talk about AI in medicine.
So to get started,
how and why did it happen that you became one of the leading experts on AI?
It's actually an interesting story.
I've been AI-adjacent my whole career.
When I was in my PhD at MIT,
I worked with Marvin Minsky and the MIT Media Lab's AI group,
but I was never the technical AI guy.
I was the person who was trying to explain AI
to everybody else who didn't understand it.
And then I became very interested
in how do you train and teach?
And AI was always a part of that.
I was building games for teaching,
teaching tools that were used at hospitals
and elsewhere, simulations.
So when LLMs burst into the scene,
I had already been using them
and had a good sense of what they could do.
And between that and kind of being practically oriented and getting the first research projects
underway, especially on education and AI and performance, I became sort of a go-to person in
the field. And once you're in a field where nobody knows what's going on, we're all making it up as
we go along. I thought it's funny that you led with the idea that you have a couple months head start
with GPT-4, right? Like that's all we have at this point, a few months' head start.
So being a few months ahead is good enough to be an expert
at this point, whether it should be or not,
it's a different question.
Well, if I understand correctly, leading AI companies
like OpenAI, Anthropic and others have now sought you out
as someone who should get early access to really start
to do early assessments and gauge early reactions. How has that been?
So, I mean, I think the bigger picture is less about me than about two things that tell us about
the state of AI right now. One, nobody really knows what's going on, right? So in a lot of ways,
if it wasn't for your work, Peter, like, I don't think people would be thinking about medicine as
much because these systems weren't built for medicine. They weren't built to change education.
They weren't built to write memos. They weren't built to do any
of these things. They weren't really built to do anything in particular. It turns out
they're just good at many things. To the extent that the labs work on them, they care about
their coding ability above everything else and maybe math and science secondarily. They
don't think about the fact that it expresses high empathy. They don't think about its accuracy
in diagnosis or where it's inaccurate.
They don't think about how it's changing education forever.
So one part of this is the fact that they go
to my Twitter feed or ask me for advice
is an indicator of where they are too,
which is they're not thinking about this.
And the fact that a few months' head start
continues to give you a lead,
tells you that we are at the very cutting edge.
These labs aren't sitting on projects for two years
and then releasing them.
Months after a project is complete or sooner, it's out the door.
Like there's very little delay. So we're kind of all in the same boat here,
which is a very unusual space for a new technology.
And, you know, I explained that you're at Wharton.
Are you an odd fit as a faculty member at Wharton?
Or is this a trend now even in business schools,
that AI experts are becoming key members of the faculty?
I mean, it's a little of both, right?
It's faculty, so everybody does everything.
I'm a professor of innovation entrepreneurship.
I've launched startups before,
and working on that in education means I think about
how do organizations redesign themselves?
How do they take advantage of these kinds of problems?
So medicine's always been very central to that, right?
A lot of people in my MBA class have been MDs,
either switching careers or else looking to advance
from being sort of individual contributors to running teams.
So I don't think that's that bad of a fit.
But I also think this is general purpose technology.
It's gonna touch everything.
The focus on this is medicine,
but Microsoft does far more than medicine, right?
There's transformation happening in literally every field
in every country.
This has a very widespread effect.
So I don't think we should be surprised
that business schools matter on this
because we care about management.
There's a long tradition of management
and medicine going together.
There's actually a great academic paper
that shows that teaching hospitals
that also have MBA programs associated with them
have higher management scores and perform better.
So I think that these are not as foreign concepts,
especially as medicine continues to get more complicated.
Yeah, well, in fact, I wanna dive a little deeper
on these issues of management, of entrepreneurship,
education, but before doing that,
if I could just stay focused on you,
there is always something interesting to hear from people
about their first encounters with AI.
And throughout this entire series,
I've been doing that, both pre-generative AI
and post-generative AI.
So you sort of hinted at the pre-generative AI.
You were in Minsky's lab.
Can you say a little bit more about that early encounter and then tell us about your first encounters
with generative AI?
Yeah, that's a great question.
So first of all, when I was at the Media Lab,
that was pre the current boom in sort of even
in the old school machine learning kind of space.
There were a lot of potential directions, you know, directions to head in.
While I was there, there were projects underway, for example, to record every interaction small
children had. One of the professors was recording everything their baby interacted with, in the hope
that maybe that would give them a hint about how to build an AI system. There was a bunch of
projects underway that were about labeling every concept and how they related to other concepts.
So like, it was very much Wild West of like, how do we make an AI work, which has been this repeated problem in AI,
which is what is this thing? And the fact that it was just, like, brute force over the corpus of all human knowledge
turns out to be a little bit of a miracle and a little bit of a disappointment in some ways
compared to how elaborate some of this was. So you know, I think that that was sort of my first
encounters in sort of the intellectual way.
The generative AI encounters actually started with the
original sort of GPT-3, or, you know, earlier versions.
And it was actually game-based.
So I played the games like AI Dungeon.
And as an educator, I realized, oh my gosh,
this stuff could write essays at a fourth grade level.
That's really going to change the way like middle school
works, was my thinking at the time.
And I was posting about that back in 2021,
that this is a big deal.
But I think everybody was taken by surprise,
including the AI companies themselves, by
ChatGPT, by GPT-3.5.
The difference in degree turned out to be a difference
in kind.
Yeah, if I think back, even with GPT 3,
and certainly this was the case with GPT-2,
it was, at least from where I was sitting,
it was hard to get people to really take this seriously
and pay attention.
And it's remarkable within Microsoft,
I think a turning point was the use of GPT-3
to do code completions.
And that was actually productized as GitHub Copilot, the very first version.
That, I think, is where there was widespread belief.
But, you know, in a way, I think there is, even for me, early on,
a sense of denial and skepticism.
Did you have those initially at any point?
Yeah. It still happens today.
This is a weird technology.
The original denial and skepticism was,
they couldn't see where this was going.
It didn't seem like a miracle because,
of course, computers can complete code for you.
What else are they supposed to do?
Of course, computers can give you
answers to questions and write fun things.
This difference of moving into a world of generative AI, I think a lot of people just thought that's what computers could do.
So it made the conversations a little weird. But even today, faced with these, you know,
with very strong reasoner models that operate at the level of PhD students, I think a lot of people
have issues with it, right? I mean, first of all, they seem intuitive to use, but they're not always
intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or
some other use case.
And then it's genuinely upsetting in a lot of ways.
I think, you know, I write in my book about the idea of three sleepless nights.
I, that hasn't changed.
Like you have to have an intellectual crisis to some extent, you know, and I
think people do a lot to avoid having that existential angst of like, oh my
God, what does it mean that a machine can think, apparently think, like a person.
So, I mean, I see resistance now, I saw resistance then,
and then on top of all of that,
there's the fact that the curve of the technology
is quite great.
I mean, the price of GPT-4 level intelligence
from when it was released has dropped 99.97%
at this point, right?
I mean, I could run a GPT-4 class system
basically on my phone.
Microsoft's releasing things that can almost run on like,
you know, like it fits in almost no space
that are almost as good as the original GPT-4 models.
I mean, I don't think people have a sense
of how fast trajectory is moving either.
Yeah, you know, there's something that I think about often.
There is this existential dread
or will this technology replace me? But I think the first people to feel that are researchers,
people encountering this for the first time,
if you were working, let's say, in Bayesian reasoning or in traditional,
let's say, Gaussian mixture model-based speech recognition,
you do get this feeling, oh my god, this technology
has just solved the problem that I've dedicated my life to. And there is this really difficult
period where you have to cope with that. And I think this is going to be spreading in more and more walks of life. And so at what point does that sort of sense of dread hit you, if ever?
I mean, it's not even dread as much as like, you know, Tyler Cowen wrote that it's impossible
to not feel a little bit of sadness as you use these AI systems too, because like, I'm talking
to Fred just as the most minor example,
and his talent that he was very proud of was he was very good
at writing limericks for birthday cards.
He'd write these limericks,
everyone was always amused by them.
And now, you know, GPT-4 and GPT-4.5,
they made limericks obsolete.
Like anyone can write a good limerick, right?
So this was a talent, and it's a little sad,
like this thing that you cared about mattered.
I mean, you know, academics are a little used to dead ends, right? But, like,
the idea that entire fields are heading that
way. Like in medicine there are a lot of support systems
that are now obsolete and the question is how quickly you change that. In
education a lot of our techniques are obsolete.
What do you do to change that? You know, it's like the fact that this
brute force technology is good enough to solve so many problems
is weird, right? And it's not just the end of our research angles that matter too.
Like, for example, I ran this 14-person-plus, multimillion-dollar effort
working to build these teaching simulations.
And we're very proud of them. It took years of work to build one.
Now we've built a system that can build teaching simulations on demand
by you talking to it with one team member.
And you literally can create
any simulation by having a discussion with the AI.
I mean, there's a switch to a new form of excitement,
but there is a little bit of like this mattered to me.
And now I have to change how I do things.
I mean, adjustment happens.
But if you haven't had that displacement,
I think that's a good indicator that you haven't really faced AI yet.
Yeah, what's so interesting just listening to you
is you use words like sadness,
and yet I can see and hear the excitement
in your voice and your body language.
So that's also kind of an interesting aspect of all of this.
Yeah, I mean, I think there's something
on the other side, right?
But like, I can't say that I haven't had moments
where like, oh, but then there's joy
and basically like also, you know, freeing stuff up.
I mean, I think about doctors or professors, right?
These are jobs that bundle together lots of different tasks
that you would never have put together, right?
If you're a doctor, you would never have expected the same person to be good at
keeping up with the research and being a good diagnostician and be a good manager
and being good with people and being good with hand skills.
And like who would ever want that kind of bundle?
That's not something you're all good at.
Right.
And a lot of our stress of our job comes from the fact that we suck at some of it.
And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff
that it's doing that you wanted to do,
but it's much more uplifting to be like,
I don't have to do this stuff I'm bad at anymore.
I get the support to make myself good at it.
And the stuff that I really care about,
I can focus on more.
Well, cause we are at kind of a unique moment
where whatever you're best at, you're still better than AI.
And I think it's an ongoing question about how long that lasts.
But for right now, like you're not gonna say, okay, AI replaces me entirely in my job in medicine. It's very
unlikely. But you will say it replaces these 17 things I'm bad at, but I never liked that
anyway. So it's a period of both excitement and a little anxiety.
Yeah. I'm going to want to get back to this question about in what ways AI may or may
not replace
doctors or some of what doctors and nurses and other clinicians
do.
Before that, let's get into, I think,
the real meat of this conversation.
In previous episodes of this podcast,
we talked to clinicians and health care administrators
and technology developers that are very rapidly injecting AI today
to do various forms of workforce automation.
Automatically writing a clinical encounter note,
automatically filling out a referral letter
or request for prior authorization
for some reimbursement to an insurance company.
And so these sorts of things are intended not only to make things more
efficient and lower costs, but also to reduce various forms of drudgery,
cognitive burden on frontline health workers.
So how do you think about the impact of AI on that aspect of the workforce? And what would
you expect will happen over the next few years in terms of impact on efficiency and costs?
So, I mean, this is a case where I think we're facing the big bright problem in AI in a lot of
ways, which is that this is at the individual level,
there's lots of performance gains to be gained, right?
And the problem though is that we as individuals
fit into systems and medicine as much as anywhere else
or more so, right?
Which is that you could individually
boost your performance,
but it's also about systems that fit along with this, right?
So, you know, if you could automatically, you know,
record an encounter, if you could automatically make notes,
does that change what you should be expecting for notes
or the value of those notes or what they're for?
How do we take what one person does
and validate it across the organization
and roll it out for everybody
without making it a 10-year process
that it feels like IT and medicine often is?
Like, so we're in this really interesting period
where there's incredible amounts of individual innovation
and productivity and performance improvements in this field, like very high
levels of it, but not necessarily seeing that same thing translate to organizational efficiency
or gains. And one of my big concerns is seeing that happen. We're seeing that in non-medical
problems, the same kind of thing, which is, you know, we've got research showing 20 to
40% performance improvements, like not uncommon to see those things, but then the organization
doesn't capture it.
The system doesn't capture it
because the individuals are doing their own work
and the systems don't have the ability
to kind of learn or adapt as a result.
You know, where are those productivity gains going then
when you get to the organizational level?
Well, they're dying for a few reasons.
One is there's a tendency for individual contributors
to underestimate the power of management, right? Good practices associated with good management,
increase happiness, decrease issues,
increase the success rates.
In the same way, about 40%, as far as we can tell,
of the advantage of US firms over firms in other countries
has to do with management ability.
Like management is a big deal, organizing is a big deal,
thinking about how you coordinate is a big deal.
At the individual level, when things get stuck there,
you can't start bringing them up
to how systems work together.
It becomes, how do I deal with a doctor
that has a 60% performance improvement?
We really only have one thing in our playbook
for doing that right now, which is, okay,
we could fire 40% of the other doctors
and still have a performance gain,
which is not the answer you wanna see happen.
So because of that, people are hiding their use.
They're actually hiding their use for lots of reasons.
And it's a weird case because the people who are able
to figure out best how to use these systems,
for a lot of use cases,
are actually clinicians themselves
because they're experimenting all the time.
Like they have to take those encounter notes.
And if they figure out a better way to do it,
they figure that out.
You don't want to wait for a med tech company
to figure that out and then sell that back to you
when it can be done by the physicians themselves.
So, we're just not used to a period where everybody's innovating and where the management
structure isn't in place to take advantage of that.
And so we're seeing things stalled at the individual level and people are often, especially
in risk averse organizations or organizations where there's lots of regulatory hurdles,
people are so afraid of the regulatory piece that they don't even bother trying to make change.
If you are the leader of a hospital or a clinic
or a whole health system, how should you approach this?
How should you be trying to extract positive success out of AI?
So I think that you need to embrace
the right kind of risk, right?
We don't want to put risk on our patients.
Like we don't want to put uninformed risk,
but innovation involves risk to how organizations operate.
They evolve change.
So I think part of this is embracing the idea
that R&D has to happen in organizations again.
What's happened over the last 20 years or so
has been organizations giving that up.
Partially that's a trend to focus on what you're good at and not try and do this
other stuff, partially it's because it's outsourced now to software companies that
like Salesforce tells you how to organize your sales team.
Workday tells you how to organize your organization.
Consultants come in and will tell you how to make change based on the average of
what other people are doing in your field.
So companies and organizations and hospital systems
have all started to give up their ability
to create their own organizational change.
And when I talk to organizations,
I often say they have to have two approaches.
They have to think about the crowd and the lab.
So the crowd is the idea of how to empower clinicians
and administrators and supporter networks
to start using AI and experimenting in ethical, legal ways,
and then sharing that information with each other.
And the lab is how are we doing R&D
about the approach of how to get AI to work,
not just direct patient care, right?
But also fundamentally, like what paperwork can you cut out?
How can we better explain procedures?
Like what management role can this fill?
And we need to be doing active experimentation on that.
We can't just wait for Microsoft to solve the problems.
It has to be at the level of the organizations themselves. So let's shift a little bit to the patient. One of the things that we see
and I think everyone is seeing is that people are turning to chatbots like chatgpt actually to
seek healthcare information for their own health or the health of their loved ones.
And there was already prior to all of this,
a trend towards, let's call it consumerization of healthcare.
So just in the business of healthcare delivery,
do you think AI is going to hasten these kinds of trends
from the consumer's perspective?
I mean, absolutely, right? And, like, all the early data we have suggests that for most common medical
problems, you should just consult AI too. Right? Like, in fact, there is a real question to ask:
at what point does it become unethical for doctors themselves to not ask for a second opinion
from the AI? Because it's cheap, right? You can overrule it or whatever you want, but like
not asking seems foolish. I think the two places where there's a burning,
almost moral imperative is,
let's say I'm in Philadelphia, I'm a professor,
I have access to really good healthcare
through the Hospital of the University of Pennsylvania system.
I know doctors, I'm lucky, I'm well connected.
If something goes wrong, I have friends who I can talk to,
I have specialists, I'm pretty well educated in this space.
But for most people on the planet,
they don't have access to good medical care,
they don't have good health,
it feels like it's absolutely imperative to say,
when should you use AI and when not?
Are there blind spots?
What are those things?
And I worry that, to me, that would be the crash project
I'd be invoking, because I'm doing the same thing
in education, which is, this system is not as good
as being in a room with a great teacher
who also uses AI to help you,
but it's better than not getting it, you know, to the level of education people get in many cases.
Where should we be using it? How do we guide usage in the right way? Because the AI labs aren't
thinking about this. We have to. So to me, there is a burning need here to understand this. And I
worry that people will say, you know, everything that's true, AI can hallucinate, AI can be biased.
All of these things are absolutely true,
but people are going to use it.
The early indications are that it is quite useful.
And unless we take the active role of saying,
here's when to use it, here's when not to use it,
we don't have a right to say, don't use this system.
And I think, you know, we have to be exploring that.
What do people need to understand about AI?
And what should schools, universities,
and so on be teaching?
Those are kind of two separate questions in a lot of ways.
I think a lot of people want to teach AI skills.
And I will tell you, as somebody who works in this space a lot,
there isn't like an easy sort of AI skill.
I could teach you prompt engineering in two to three classes,
but every indication we have is that for most people
under most circumstances, the value of prompting in any one case is probably not that high.
A lot of the tricks are disappearing because the AI systems are just starting to use them
themselves. So asking good questions, being a good manager, those, you know, being a good
thinker tend to be important. But like magic tricks around, you know, making the AI do
something because you used the right phrase used to be real, but
it's rapidly disappearing.
So I worry when people say teach AI skills,
no one's been able to articulate to me
as somebody who knows AI very well
and teaches classes on AI,
what those AI skills that everyone should learn are, right?
I mean, there's value in learning a little bit
how the models work.
There's value in working with these systems.
A lot of it's just hands on keyboard kind of work.
But like we don't have an easy slam dunk.
This is what you learn in the world of AI
because the systems are getting better.
And as they get better,
they get less sensitive to these prompting techniques.
They get better at prompting themselves.
They solve problems spontaneously and start being agentic.
So it's a hard problem to ask about like,
what are you training someone on?
I think getting people experience, hands on keyboards, matters.
There are, like, four things I could teach you about AI,
and two of them have already started to disappear.
But like one is be direct,
like tell the AI exactly what you want,
that's very helpful.
Second, provide as much context as possible
that can include things like acting as a doctor,
but also all the information you have.
The third is give it step-by-step directions,
that's becoming less important.
And the fourth is give it good examples
of the kind of output you want.
Those four, that's about it as far as the research
tells you what to do,
and the rest is building intuition.
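[As a minimal sketch, those four principles can be expressed as a prompt template. The function, field names, and the example task below are illustrative assumptions, not anything prescribed in the conversation.]

```python
# A sketch of the four prompting principles Mollick lists:
# 1) be direct, 2) provide context, 3) give step-by-step directions,
# 4) give examples of the output you want.
# The prompt wording is hypothetical; no particular model or API is implied.

def build_prompt(question: str, context: str, steps: list[str], example: str) -> str:
    """Assemble a prompt that applies all four principles."""
    directions = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        f"{context}\n\n"                                   # 2) context: role and background
        f"Task: {question}\n\n"                            # 1) be direct about the task
        f"Follow these steps:\n{directions}\n\n"           # 3) step-by-step directions
        f"Example of the output format:\n{example}\n"      # 4) example of desired output
    )

prompt = build_prompt(
    question="Summarize this visit note for the patient in plain language.",
    context="You are acting as a primary care physician writing for a patient.",
    steps=["List the key findings.", "Explain next steps.", "Keep it under 150 words."],
    example="Key findings: ... Next steps: ...",
)
print(prompt)
```

The same template works for any of the use cases discussed here; only the context and example change.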
I'm really impressed that you didn't give the answer,
"Well, everyone should be teaching my book, Co-Intelligence."
Oh no, sorry. Everybody should be teaching my book, Co-Intelligence.
I apologize.
Very good.
It's good to chuckle about that, but actually I can't think of a better
book. Like, if you were to assign a textbook in any professional education space, I think
Co-Intelligence would be number one on my list.
Are there other things that you think are essential reading?
That's a really good question.
I think that a lot of things are evolving very quickly.
I happen to kind of hit a sweet spot with Co-Intelligence to some degree because I talk
about how I used it
and I was sort of an advanced user of these systems.
So it's sort of like my Twitter feed, my online newsletter.
I'm just trying to kind of,
in some ways it's about trying to make people aware
of what these systems can do by just showing a lot, right?
Rather than picking one thing,
and like, this is a general purpose technology,
let's use it for this.
And like everybody gets a light bulb for a different reason.
So more than reading, it is using.
That can be Copilot or whatever your favorite tool is, but using it, voice modes help a
lot.
In terms of readings, I think that there are a couple of good guides to understanding AI
written as blog posts.
I think Tim Lee has one called Understanding AI.
It's a kind of good overview of that topic
that I think explains how transformers work,
which can kind of give you some mental sense.
I think Karpathy has some really nice videos
that I would recommend.
It's like on the medical side,
I think the book that you did,
if you're in medicine, you should read that.
And I think that that's very valuable.
But, like, all we can offer are hints in some ways.
Like there isn't, if you're looking for the instruction
manual, I think it can be very frustrating
because it's like you want the best practices
and procedures laid out and we cannot do that, right?
It's not, that's not how the system like this works.
It's not a person, but thinking of like a person
can be helpful, right?
One of the things that has been sort of a fun project
for me for the last few years is I
have been a founding board member of a new medical school at Kaiser Permanente. And
that medical school curriculum is being formed in this era, but it's been perplexing to understand
what this means for a medical school curriculum. And maybe even more perplexing for me at least is the accrediting bodies,
which are extremely important in US medical schools.
How should accreditors think
about what's necessary here?
Besides the kind of four key ideas you mentioned,
if you were talking to the board of directors of the LCME accrediting body,
what's the one thing you would want them to really internalize?
This is both a fast-moving and vital area.
This can't be viewed like a usual change, where we say, let's see how this works, because it lacks the things that make medical technology slow to adopt: unclear results and limited, you know, expensive use cases where it rolls out slowly.
So one or two, you know, advanced medical facilities get access to proton beams or something else at billions of dollars of cost, and that takes a while to diffuse out.
That's not happening here.
This is all happening at the same time all at once.
This is now, AI is part of medicine.
I mean, there's a minor point that I'd make
that actually is a really important one,
which is large language models, generative AI overall,
work incredibly differently than other forms of AI.
So the other worry I have with some of these accreditors
is they blend together algorithmic forms of AI,
which medicine has been trying for a long time,
decision support, algorithmic methods; medicine, more than most other fields, has been thinking about those issues.
Generative AI, even though it uses
the same underlying techniques,
is a completely different beast.
So like even just take the most simple thing
of algorithm aversion,
which is a well understood problem in medicine, right?
Which is, so you have a tool that could tell you
as a radiologist, you know, the chance of this being cancer,
you don't like it, you overrule it, right?
We don't find algorithm aversion happening with LLMs in the same way.
People actually enjoy using them
because it's more like working with a person.
The flaws are different, the approach is different.
So you need to both view this as universally applicable
today, which makes it urgent,
but also as something that is not the same
as your other forms of AI, and your AI working group that is thinking about how to solve this problem is not the right group of people here.
You know, I think the world has been trained because of the magic
of web search to view computers as question answering machines.
Ask a question, get an answer.
Write a query, get results. And as I have interacted
with medical professionals, you can see that medical professionals have that model of a machine
in mind. And I think that's partly, I think psychologically, why hallucination is so alarming.
Because you have a mental model of a computer
as a machine that has absolutely rock-solid, perfect memory
recall.
But the thing that was so powerful in co-intelligence,
and we tried to get at this in our book also,
is that's not the sweet spot.
It's this sort of deeper interaction,
more of a collaboration. And
I thought your use of the term co-intelligence really, just even in the title of the book,
tried to capture this. When I think about education, it seems like that's the first
step to get past this concept of a machine being just a question answering machine. Do
you have a reaction to that idea?
I think that's very powerful.
You know, we've been trained over so many years
about using computers, but also science fiction, right?
Computers are about cold logic, right?
They will give you the right answer,
but if you ask it what love is, they explode, right?
Like that's the classic way you defeat the evil robot
in Star Trek, right?
Love does not compute.
Instead we have a system that makes mistakes, is warm,
beats doctors in empathy in almost every controlled study on the subject, right? Like,
absolutely can outwrite you in a sonnet, but will absolutely struggle with giving you the
right answer every time. And I think our mental models are just broken for this. And I think
you're absolutely right. And that's part of what I thought your book does get at really well is,
like, this is a different thing. It's also generally applicable.
Again, the model in your head should be kind of like a person, even though it isn't, right?
Like there's a lot of warnings and caveats to it.
But if you start from person, smart person you're talking to, your mental model will
be more accurate than smart machine, even though both are flawed examples, right?
So it will make mistakes.
It will make errors.
The question is, what do you trust it on?
What do you not trust it on?
And as you get to know a model, you'll get to understand,
like, I totally don't trust it for this,
but I absolutely trust it for that, right?
All right, so we're getting to the end
of the time we have together.
And so I'd just like to get now into something
a little bit more provocative.
And I get the question all the time,
will AI replace doctors?
In medicine and other advanced knowledge work, projecting out five to ten years, what do you think happens?
Okay.
So first of all, let's acknowledge systems change much more slowly than individual use.
You know, doctors are not individual actors.
They're part of systems, right?
So not just the system of a patient
who like may or may not wanna talk to a machine
instead of a person, but also legal systems
and administrative systems and systems that allocate labor
and systems that train people.
So like, it's hard to imagine five to 10 years
medicine being so upended that even if AI was better
than doctors at every single thing doctors do,
that we'd actually see as radical a change in medicine
as you might in other fields.
I think you will see faster changes happen in consulting and law and coding, in other spaces, than in medicine.
But I do think that there is good reason to suspect
that AI will outperform people
while still having flaws, right?
That's the difference.
We're already seeing that for common medical questions: in enough randomized controlled trials, the best doctors beat AI, but the AI beats the mean doctor, right?
Like, that's just something we should acknowledge is happening at this point.
Now, will that work in your specialty?
No.
Will that work with all the contingent social knowledge that you have in your space?
Probably not.
Like, these are vignettes, right?
But, like, that's kind of where things are.
So let's assume, right, they're asking two questions.
One is how good will AI get?
And we don't know the answer to that question.
I will tell you that your colleagues at Microsoft, and increasingly the AI labs themselves, are all saying they expect a machine smarter than a human at every intellectual task in the next two to three years.
If that doesn't happen, the future is easier to reason about, but let's just assume that that's the case.
I think medicine starts to change with the idea that people feel obligated
to use this to help for everything.
Your patients will be using it and it will be your advisor and helper
at the beginning phases.
Right.
And I expect people to be better at empathy.
I expect better bedside manner.
I expect management tasks become easier.
I think administrative burden might get lighter if we handle this the right way, or much worse if we handle it badly.
Diagnostic accuracy will increase, right?
And then there's a set of discovery pieces happening too,
right?
One of the core goals of all the AI companies
is to accelerate medical research.
How does that happen and how does that affect us
is a kind of unknown question.
So I think clinicians are in both the eye of the storm
and surrounded by it, right?
Like they can resist AI use for longer
than most other fields,
but everything around them is going to be affected by it.
Well, Ethan, this has been really a fantastic conversation.
And I think in contrast to all the other conversations
we've had, this one gives especially the leaders in
healthcare, you know, people actually trying to lead their organizations into
the future, whether it's in education or in delivery, a lot to think about. So
really appreciate you joining. Thank you.
I'm a computing researcher who works with people who are right in the middle of today's bleeding-edge developments in AI.
And because of that, I often lose sight of how to talk to a broader audience about what
it's all about.
And so I think one of Ethan's superpowers
is that he has this knack for explaining complex topics in AI in a really accessible way,
getting right to the most important points
without making it so simple as to be useless.
That's why I rarely miss an opportunity
to read up on his latest work.
One of the first things I learned from Ethan
is the intuition that you can sort of think of AI
as a very knowledgeable intern.
In other words, think of it as a persona
that you can interact with,
but you also need to be a manager for it
and to always assess the work that it does.
In our discussion, Ethan went further to stress that there is, because of that, a serious education gap.
Over the last decade or two, we've all been trained, mainly by search engines, to think of computers as question-answering machines.
In medicine, in fact, there's a question-answering application that is really popular called UpToDate.
Doctors use it all the time.
But generative AI systems like ChatGPT are different.
There's therefore a challenge in how to break out of the old-fashioned mindset of search
to get the full value out of generative AI.
The other big takeaway for me was Ethan pointing out that, while it's easy to see productivity gains from AI at the individual level,
those same gains, at least today, don't often translate automatically
to organization-wide or system-wide gains.
And one, of course, has to conclude that it takes more than just making individuals more productive.
The whole system also has to adjust to the realities of AI.
Here's now my interview with Azeem Azhar.
Azeem, welcome.
Peter, thank you so much for having me.
I think you're extremely well known in the world, but still some of the listeners of
this podcast series might not have encountered you before.
And so one of the ways I like to ask people
to introduce themselves is,
how do you explain to your parents what you do every day?
Well, I'm very lucky in that way
because my mother was the person
who got me into computers more than 40 years ago.
And I still have that first computer, a ZX81
with a Z80 chip.
Oh, wow.
To this day, it sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is.
And my parents were both economists and economics is deeply connected with technology in some
sense.
And I grew up in the late 70s and the early 80s.
And that was a time of tremendous optimism
around technology.
It was space opera, science fiction, robots,
and of course the personal computer
and Bill Gates and Steve Jobs.
So that's where I started.
And so in a way, my mother and my dad,
who passed away a few years ago,
had always known me as someone
who was fiddling with computers,
but also thinking about economics and society.
And so in a way, it's easier to explain to them
because they're the ones who nurtured the environment
that allowed me to research technology and AI and think
about what it means to firms and to the economy at large.
I always liked to understand the origin story.
And what I mean by that is, you know, what was your first encounter with
generative AI and what was that like?
What did you go through?
The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022.
I'd been away on vacation and I came back
and I'd been off grid in fact,
and the world had really changed.
Now, I'd been aware of GPT-3 and GPT-2, which I'd played around with, and with BERT and the original transformer paper about seven or eight years ago. But it was the moment where I could talk to my computer and it could produce these images, and they could be refined in natural language, that really made me think we'd crossed into a new domain.
We've gone from AI being highly discriminative to AI that's able to
explore the world in particular ways.
And then it was a few months later that ChatGPT came out, November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this and we have to meet every morning and discuss how
we experimented the day before. And we did that for three or four months. And it was really clear
to me in that interface at that point that we'd absolutely passed some
kind of threshold. And who's the we that you were experimenting with?
So I have a team of four who support me. They're mostly researchers of different types. I mean,
it's almost like one of those jokes, you know: a sociologist, an economist, and an astrophysicist walk into a bar, or they walk into our virtual team room, and we try to solve problems.
Well, so let's get now into brass tacks here.
And I think I want to start maybe just with an exploration of the economics of all this
and economic realities.
Because I think in a lot of your work, for example, in your book,
you look pretty deeply at how automation generally
and AI specifically are transforming certain sectors
like finance, manufacturing,
and you have a really kind of insightful focus
on what this means for productivity
and which ways efficiencies are found.
And then you sort of balance that with risks,
things that can and do go wrong.
And so as you take that background and look at all the
sectors, in what ways are the same patterns playing out or likely to play out in healthcare and
medicine? I'm sure we will see really remarkable parallels, but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense
that it's highly regulated, market structure is very different country to
country, and it's an incredibly broad field.
If you just think about taking a Tylenol and going through laparoscopic surgery,
having an MRI and seeing a physio. I mean, this is all medicine. I mean, it's hard to
imagine a sector that is more broad than that. So I think we can start to break it down.
And where we're seeing things with generative AI first is the sort of softest entry point, which is medical scribing.
And I'm sure many of us have been with clinicians
who have a medical scribe running alongside.
They're all on Surface Pros, I noticed, right?
They're on the tablet computers and they're scribing away.
And what that's doing is, in the words of my friend,
Eric Topol, it's giving the clinician time back, right?
They have time back from days that are extremely busy
and full of administrative overload.
So I think you can obviously do a great deal
with reducing that overload.
And within my team, we have a view,
which is if you do something five times in a week, you should be writing an
automation for it. And if you're a doctor, you're probably reviewing your notes, writing
the prescriptions and so on several times a day. So those are things that can clearly
be automated and the human can be in the loop. But I think there are so many other ways just
within the clinic that things can help.
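The five-times-a-week rule above can be sketched in code. This is a purely hypothetical illustration, assuming an invented note-drafting task: the template text, field names, and function are made up for the example, and the point is that the output is a draft the human still reviews.

```python
from string import Template

# Hypothetical sketch only: the template text and field names are invented
# for illustration, not taken from any real clinical system.
RENEWAL_NOTE = Template(
    "Renewal requested for $drug $dose (last reviewed $last_review). "
    "No adverse effects reported. Plan: continue $drug; follow up in $interval."
)

def draft_renewal_note(drug: str, dose: str, last_review: str, interval: str) -> str:
    """Return a draft note for clinician review; nothing is auto-submitted."""
    return RENEWAL_NOTE.substitute(
        drug=drug, dose=dose, last_review=last_review, interval=interval
    )

# The clinician reads and edits the draft before it goes anywhere.
draft = draft_renewal_note("salbutamol", "100 micrograms", "2025-01-12", "12 months")
print(draft)
```

Even an automation this trivial keeps the human in the loop: the script only produces text for a person to check, which is the shape of the workflow being described.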
So one of my friends, my friend from my junior school, I've known him since I was nine,
is an oncologist who's also deeply into machine learning and he's in Cambridge in the UK.
And he built with Microsoft Research a suite of imaging AI tools from his own discipline,
which they then open-sourced.
So that's another way that you have an impact,
which is that you actually enable the generalist,
specialist, polymath, whatever they are in health systems,
to be able to get this technology,
to tune it to their requirements, to use it,
to encourage some grassroots adoption in a system that's often been very, very heavily
centralized.
And then I think there are some other things that are going on that I find really, really
exciting.
So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
Yep.
That is building a data stream that we'll be able to apply more and more AI to.
I mean, right now it's applying traditional, I suspect, machine learning.
But you can imagine that as we start to get more data, we start to get more used to measuring
ourselves, we create this sort of pot of a personal asset
that we can turn AI to.
And there's still another category.
And that other category is what are the completely novel ways
in which we can enable patient care and patient pathway.
And there's a fantastic startup in the UK called Neko Health,
which I mean does physical MRI scans and blood tests
and so on.
It's hard to imagine Neko existing without the sort of
advanced data, machine learning, AI that we've seen emerge
over the last decade.
So, I mean, I think that there are so many ways
in which the temperature is slowly being turned up
to encourage a phase change within
the healthcare sector. And last but not least, I do think that these tools can also be very,
very supportive of a clinician's lifestyle. I think we, as patients, we're a bit,
I don't know if we're as grateful as we should be
for our clinicians who are putting in 90 hour weeks.
But you can imagine a world where AI is able to support not just the clinicians workload,
but also their sense of stress, their sense of burnout.
So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
I love how you break that down.
And I want to press on a couple of things.
You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the
same way, but they also feel different because like the finance sector has to be very responsive
to consumers and consumers are sensitive to an abundance of choice, they're sensitive to price.
Is there something unique about medicine besides being regulated?
I mean, there absolutely is. And in finance as well, you have much clearer end states. So if you're not in the consumer space, but you're in the asset management space, you have to essentially deliver returns against the volatility or risk boundary, right?
That's what you have to go out and do.
And I think if you're in the consumer industry, you can come back to very, very clear measures,
Net Promoter Score being a very good example.
In the case of medicine and healthcare, it is much more complicated because as far as the clinician
is concerned, people are individuals and we have our own paths and our own responses.
If we didn't, there would never be a need for a differential diagnosis. There'd never
be a need for, you know, let's try azithromycin
first and then if that doesn't work, we'll go to vancomycin. You know, whatever it happens to be,
you just know. But ultimately, you know, people are quite different. The symptoms they're showing
are quite different. And also their compliance is really, really different.
I had a back problem that had to be dealt with by physio and extremely boring exercises four
times a week. But I was ruthless in complying and my physio was incredibly surprised. He said,
well, no one ever does this. And I said, well, you know, the thing is that I kind of just want to get this thing to go away.
And I think that that's why medicine is,
and healthcare is so different and more complex.
But I also think that's why AI can be really, really helpful.
I mean, we didn't talk about AI in its ability
to potentially do this,
which is to extend the clinician's presence
throughout the week.
Right.
The idea that maybe some part of what the clinician
would do if you could talk to them on Wednesday, Thursday,
and Friday could be delivered through an app or a chat bot
just as a way of encouraging the compliance,
which is often, especially with older patients, one reason why conditions linger on for longer.
Just staying on that regulatory thing, as I've thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy
delivery, energy distribution.
Because like healthcare, as a consumer, I don't have choice in who delivers electricity
to my house.
And even though I care about it being cheap, or at least not being overcharged, I don't
have an abundance of choice.
I can't do price comparisons.
And there's something about that,
just speaking as a consumer of both energy
and the consumer of healthcare that feels similar,
whereas other regulated industries,
somehow as a consumer,
I feel like I have a lot more direct influence and power.
Does that make any sense to someone like you who's really much more expert in how economic systems work?
I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US there may be places where you have no choice.
I think the area where it's slightly different
is that as a consumer or a patient,
you can actually make meaningful choices and changes
yourself using these technologies.
And people used to joke about, you know, asking Dr. Google,
but Dr. Google is not terrible, particularly if you go to WebMD.
Right.
And, you know, when I look at long range change,
many of the regulations that exist around healthcare delivery were formed at a point before people had access to
good quality information at the touch of their fingertips or when education levels in general
were much, much lower. And many regulations existed because of the incumbent power of particular
professional sectors.
I'll give you an example from the United Kingdom.
So I have had asthma all of my life.
That means I've been taking my inhaler,
Ventolin and maybe a steroid inhaler for nearly 50 years.
That means that I actually have more experience with it and, in some sense, know more about it than a general practitioner. And until a few years ago, I would have to go to a general practitioner to get this drug that I've been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago the regulations changed, and now pharmacists can prescribe those types of drugs under certain conditions directly.
That was not to do with technology,
that was to do with incumbent lock-in.
And so when we look at the medical industry,
the healthcare space, there are some parallels with energy,
but there are a few little things,
the ability that the consumer has to put in some effort
to learn about their condition, but also the
fact that some of the regulations that exist just exist because certain professions are powerful.
Yeah, one last question while we're still on economics.
There seems to be a conundrum about productivity and efficiency in healthcare delivery. Because I've never encountered a doctor or a nurse that wants to be able to handle even more
patients than they're doing on a daily basis. And so, you know, if productivity means simply,
well, your rounds can now handle 16 patients instead of eight patients. That doesn't seem necessarily to be a desirable thing.
So how, how can we, or should we be thinking about efficiency and productivity?
Since obviously costs are in most of the developed world are a huge, huge problem.
Yes.
And when you described doubling the number of patients on the round, I
imagined you buying them all roller skates so they could just whiz around the hospital faster and faster
than ever before.
We can learn from what happened with the introduction
of electricity.
Electricity emerged at the end of the 19th century,
around the same time that cars were emerging as a product
and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important, and they bought into this technology by putting pendant lights in their workshops so they could work into the evening, right? They could effectively spend more hours working.
And that was a productivity enhancement and it was noticeable.
But of course, electricity fundamentally changed the productivity by orders of magnitude of
people who made cars, starting with Henry Ford, because he was able to reorganize his
factories around the electrical delivery
of power and to therefore have the moving assembly line,
which 10x'd the productivity of that system.
So when we think about how AI will affect the clinician,
the nurse, the doctor, it's much easier for us to imagine it as the pendant
light that just has them working later than it is to imagine a reconceptualization of
the relationship between the clinician and the people they care for.
And I'm not sure, I don't think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, okay, because we have these brilliant AIs, there's a lower level of training and cost and expense that's required for a broader range of conditions that need treating, and that expands the market, right? That expands the market hugely. It's what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system has meant many more people now
earn their living driving people around in their cars. And at least in London, you had to be
reasonably highly trained to do that. So I can see a reorganization is possible, of course,
entrenched interests, the economic flow, and there are many entrenched interests,
particularly in the US between the health systems
and the professional bodies that might slow things down,
but I think re-imagining is possible.
And if I may, I'll give you one example of that,
which is if you go to countries outside of the US
where there are many more sick people per doctor,
they have incentives to change the way they deliver their healthcare.
Well before there was AI of this quality around, there was a few cases of health systems in
India.
Aravind Eye Care was one and Narayana Hrudayalaya was another. In the latter, a cardiac care unit where you couldn't get enough heart surgeons, specially trained nurses would operate under the supervision of a single surgeon, who would supervise many
in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change
and we can't expect a single bright algorithm
to do it on its own.
Yeah, really, really interesting.
So now let's get into regulation.
And let me start with this question.
There are several startup companies I'm aware of that are pushing on, I think,
a near-term future possibility that a medical AI for consumer
might be allowed, say, to prescribe a medication for you,
something that would normally require a doctor or a pharmacist, you know,
that is certified in some way licensed to do.
Do you think we'll get to a point where, for certain regulated activities,
humans are more or less cut out of the loop?
Well, humans will have been in the loop because they would have
provided the training data, they would have done the oversight, the quality control. But to your question in general,
would we delegate an important decision entirely to a tested set of algorithms? I'm sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions
to the automated braking in my car. We will do this at certain levels of risk and threshold.
If I come back to my example of prescribing Ventolin, it's really unclear to me that the
prescription of Ventolin, this incredibly benign bronchodilator that is only used by
people who've been through the asthma process, needs to be prescribed by someone who's gone through 10 years or 12 years of
medical training and why that couldn't be prescribed by an algorithm or an AI system.
So I absolutely think that that will be the case and could be the case. I can't really
see what the objections are. And the real issue is, where do you draw the line of where
you say, listen, this is too important or the cost is too great or the side effects
are too high. And therefore, this is a point at which we want to have some human taking
personal responsibility, having a liability framework in place, having a sense that there
is a person with legal agency
who signed off on this decision.
And that line, I suspect, will start fairly low.
And what we'd expect to see would be that that would rise
progressively over time.
What you just said, that scenario of your personal asthma medication
is really interesting because your personal AI might have the benefit of
50 years of your own experience with that medication. So in a way, there is at least
the data potential for, let's say, the next prescription to be more personalized and more
tailored specifically for you. Yes. Well, let's dig into this because I think this is super interesting and we can look at how things have changed.
So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician.
In the UK, it's very difficult to get an appointment. I would have had to see someone privately who didn't know me at all, because I've just walked in off the street. And I would explain my situation. It would take me half a day, productivity
lost. I've been miserable for a couple of days with severe wheezing. Then a few years
ago, the system changed and a protocol changed. And now I have a thing called a rescue pack,
which includes prednisolone, steroids.
It includes something else I've just forgotten, and an antibiotic in case
I get an upper respiratory tract infection.
And I have an algorithm; it's called a protocol.
It's printed out.
It's a flow chart.
I answer various questions and then I say, I'm going to prescribe this to myself.
And you know, UK doctors don't prescribe prednisolone or prednisone, as you may call
it in the US, at the drop of a hat, right?
It's a powerful steroid.
I can self-administer and I can now get that repeat prescription without seeing a physician
a couple of times a year.
And the algorithm, the AI, has obviously been done in PowerPoint, naturally, and it's a bunch of arrows.
I mean, surely, surely an AI system is going to be more
sophisticated, more nuanced and give me more assurance that
I'm making the right decision around something like that.
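The printed flow chart described here is, at heart, a small decision procedure. Here is a purely illustrative sketch of a rescue-pack style protocol as code; the questions, thresholds, and steps are invented for the example, not the actual clinical protocol, and certainly not medical guidance.

```python
# Purely illustrative: invented questions and thresholds, not a real
# clinical protocol and not medical advice.

def rescue_pack_steps(severe_wheeze: bool,
                      peak_flow_pct: float,
                      suspected_chest_infection: bool) -> list:
    """Map yes/no answers and a peak-flow reading to illustrative steps."""
    # Severe presentation short-circuits everything else.
    if severe_wheeze and peak_flow_pct < 50:
        return ["seek urgent medical care now"]
    steps = []
    if peak_flow_pct < 75:
        steps.append("start the oral steroid course from the rescue pack")
    if suspected_chest_infection:
        steps.append("start the antibiotic from the rescue pack")
    if not steps:
        steps.append("continue usual reliever inhaler and monitor")
    return steps

print(rescue_pack_steps(False, 70.0, True))
```

The point is that once the flow chart is explicit like this, a more sophisticated system could refine the questions, weigh more signals, and explain its reasoning, rather than leaving the patient with a printout of arrows.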
Yeah.
Well, at the minimum, the AI should be able to make that PowerPoint.
Thank God for Clippy.
So, you know, I think in our book,
we had a lot of certainty
about most of the things we've discussed here,
but one chapter where I felt we really sort of ran out
of ideas, frankly, was on regulation.
And you know, what we ended up doing for that chapter is, I can't remember whose idea it was, Kerry's or Zach's, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary at the end.
Really.
I think we took that approach because we just didn't have much to offer.
By the way, in our defense, I don't think anyone else
had any better ideas anyway.
Right.
And so now, two years later, do we
have better ideas about the need for regulation,
the frameworks around which those regulations should
be developed and
what should this look like? So regulation is going to be in some cases very helpful because
it provides certainty for the clinician that they're doing the right thing, that they are
still insured for what they're doing and it provides some degree of confidence for the patient.
And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs and there are the classic set of processes you go through. You do also want to be able to experiment. And so the question is, as a regulator, how can you enable
conditions for there to be experimentation? And what is experimentation? Experimentation
is learning so that every element of the system can learn from this experience. And so finding that space where there can be a bit of experimentation,
I think becomes very, very important. And a lot of this is about experience. So I think the first
digital therapeutics have received FDA approval, which means there are now people within the FDA
who understand how you go about running
an approvals process for that
and what that ends up looking like.
And of course, what we're very good at doing
in this sort of modern hyper-connected world
is we can share that expertise,
that knowledge, that experience very, very quickly.
So you go from one approval a year
to a hundred approvals a year to a thousand approvals a year.
So we will then actually, I suspect, need to think about what it is to approve digital therapeutics, because unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.
You know, every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this.
So then I think about what does it mean to get approved
if indeed it gets approved?
But we can also go really far with things
that don't require approval.
And I come back to my sleep tracking ring, you know, so I've been wearing this for a few years.
And when I go and see my doctor,
and I have my annual checkup,
one of the first things that he asks
is how have I been sleeping?
And in fact, I even sync my sleep tracking data
to their medical record system.
So he's hearing what I'm saying,
but he's actually pulling up the real data going,
this patient's lying to me again.
Of course, I'm very truthful with my doctor
as we should all be.
You know, actually that brings up a point that,
you know, consumer facing health AI
has to deal with pop science, bad science,
weird stuff that you hear on Reddit.
And because one of the things that consumers want
to know always is, what's the truth?
What can I rely on?
And I think that somehow feels different than an AI
that you actually put in the hands of,
let's say a licensed practitioner.
And so the regulatory issues seem very, very different for these two,
for these two cases somehow.
I agree.
They're very different.
And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions,
that idea that the clinician can still be
with the patient during the week.
And you'll do that anyway because you need the data
and you also need a little bit of a liability shield
to have like a sensible person
who's been trained around that.
And I think that's going to be a very important pathway
for many AI medical crossovers.
We're gonna go through the clinician.
But I also do recognize what you say
about the kind of kooky quackery that exists on Reddit.
Although on creatine, Reddit may yet prove to have been right.
That's right. Yeah, yeah.
Sometimes it's right.
And I think that it serves a really good role as a field of extreme experimentation.
So if you're somebody who makes a continuous glucose monitor, traditionally given to diabetics,
but now lots of people will wear them, and sportspeople will wear them. You probably gathered a lot of extreme tail-distribution data by reading r/biohackers
for the last few years where people were doing things
that you would never want them to really do with the CGM.
And so I think we shouldn't understate how important
that Petri dish can be for helping us learn what could happen next.
Oh, I think it's absolutely going
to be essential and the bigger thing in the future.
So I think I just want to close here then
with one last question.
And I always try to be a little bit provocative with this.
And so as you look ahead to what doctors
and nurses and patients might be doing two years from now, five
years from now, 10 years from now, do you have any kind of
firm predictions?
I'm going to push the boat out and I'm going to go further out than closer in.
Okay.
So, as patients, we will have many, many more touch points and interactions with our biomarkers and our health. We'll be reading how well we feel through an array of things, and some of them we will be wearing directly, like sleep trackers and watches. And so we'll have a better sense
of what's happening in our lives. It's like the moment you go from paper bank statements
that arrive every month to being able to see your accounts
in real time.
And I suspect we'll have,
we'll still have interactions with clinicians
because societies that get richer see doctors more,
societies that get older see doctors more.
And we're gonna be doing both of those
over the coming 10 years.
But there will be a sense, I think,
of continuous health engagement,
not in an overbearing way,
but just in a sense that we know it's there,
we can check in with it,
it's likely to be data that is compiled
on our behalf somewhere centrally
and delivered through a user experience
that reinforces agency rather than anxiety.
And we're learning how to do that slowly.
I don't think the health apps on our phones
and devices have yet quite got that right.
And that could help us personalize problems before they arise.
And again, I use my experience for a few things that I've tracked really, really well.
And I know from my data and from how I'm feeling when I'm on the verge of one of those severe
asthma attacks that hits me once a year and I can take a little bit of preemptive measure. So I think that that will become progressively more common and that sense that
we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, what happens is you start with your detailed baselines, and that's what your health coach looks at every three or four months.
For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once
a year. We will have baselines and that will help us on an ongoing basis to better understand and
be in control of our health. And then if the product designers get it right, it will be
done in a way that doesn't feel invasive but will be done in a way that feels enabling. We'll still
be engaging with clinicians augmented by AI systems more and more because they will also have
gone up the stack. They won't be spending their time on just take two Tylenol
and have a lie down type of engagements
because that will be dealt with earlier on in the system.
And so we will be there in a very,
very different set of relationships.
And they will feel that they have different ways
of looking after our health.
Azeem, it's so comforting to hear such a wonderfully optimistic picture of the future of healthcare.
And I actually agree with everything you've said.
Let me just thank you again for joining this conversation.
I think it's been really fascinating.
And I think somehow the systemic issues that you tend to see with such clarity are going to be the most profound drivers of change for the future. So thank you so much. Well, thank you, it's been my pleasure, Peter. Thank you.
I always think of Azeem as a systems thinker.
He's always able to take the experiences of new technologies at an individual level and
then project out to what this could mean for whole organizations and whole societies.
In our conversation, I felt that Azeem really connected some of what we learned in a previous episode,
for example, from Chrissy Farr on the evolving consumerization of healthcare,
to the broader workforce and economic impacts that we heard about from Ethan Mollick.
Azeem's personal story about managing his asthma was also a great example. You know, he imagines a future, as do I,
where personal AI might assist and remember decades
of personal experience with a condition like asthma,
and thereby know more than any human being
could possibly know in a deeply personalized
and effective way, leading to better care.
Azeem's relentless optimism about our AI future
was also so heartening to hear.
Both of these conversations leave me really optimistic
about the future of AI in medicine.
At the same time, it is pretty sobering to realize
just how much we'll all need to change
in pretty fundamental and maybe even in radical ways.
I think a big insight I got from these conversations is how we interact with machines
is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.
Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this.
Just last week at Build, which is Microsoft's yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team.
Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more were also shown
during the conference.
But pertinent to healthcare specifically,
what really blew me away was the demonstration
of a healthcare orchestrator agent.
And the specific thing here was that in Stanford's cancer treatment center,
when they are trying to decide on potentially experimental
treatments for cancer patients, they convene a meeting
of experts.
That is typically called a tumor board.
And so this AI healthcare orchestrator agent
actually participated as a full-fledged member of a tumor
board meeting to help bring data together, make sure that the latest medical knowledge was
brought to bear, and to assist in the decision-making around a patient's cancer treatment.
It was pretty amazing.
A big thank you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us.
I'm really excited for the upcoming episodes, including discussions on medical students'
experiences with AI
and AI's influence on the operation of health systems and public health departments.
We hope you'll continue to tune in.
Until next time.