Microsoft Research Podcast - Navigating medical education in the era of generative AI
Episode Date: July 24, 2025. Next-generation physicians Morgan Cheatham and Daniel Chen discuss how generative AI is transforming medical education, exploring how students and attending physicians integrate new tools while navigating questions on trust, training, and responsibility.
Transcript
Medicine often uses the training approach when trying to assess multipurpose talent.
To ensure students can safely and effectively take care of patients,
we have them jump through quite a few hoops and they need good evaluations once they reach the clinic,
passing grades on more exams like the USMLE.
But GPT-4 gets more than 90% of questions on licensing exams correct. Does that provide
any level of comfort in using GPT-4 in medicine?
This is the AI Revolution in Medicine Revisited. I'm your host, Peter Lee. Shortly after OpenAI's GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane,
and I published The AI Revolution in Medicine to help educate the world of healthcare and
medical research about the transformative impact this new generative AI technology can
have. But because we wrote the book when
GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right
and what did we get wrong? In this series, we'll talk to clinicians, patients, hospital
administrators and others to understand the reality of AI in the field and where we go from here.
The book passage I've read at the top
is from chapter four, Trust But Verify.
In it, we explore how AI systems like GPT-4
should be evaluated for performance, safety,
and reliability, and compare this to how humans
are both trained and assessed for readiness to deliver health care.
In previous conversations with guests, we've spoken a lot about AI in the clinic, as well as in labs and companies developing AI-driven tools.
We've also talked about AI in the hands of patients and consumers.
But there has been some discussion also about AI's role in medical training. And as a founding board member of a new medical school at Kaiser Permanente,
I definitely have my own thoughts about this.
But today, I'm excited to welcome two guests who represent the next generation
of medical professionals for their insights, Morgan Cheatham and Daniel Chen.
Morgan Cheatham is a graduate of Brown University's Warren Alpert Medical School with clinical training
in genetics at Harvard and is a clinical fellow
at Boston Children's Hospital.
While Morgan is a bona fide doctor in training,
he's also amazingly an influential health technology
strategist.
He was recently named partner and head of healthcare
and life sciences at Breyer Capital
and has led investments in several healthcare AI companies that have eclipsed multibillion-dollar valuations.
Daniel Chen is finishing his second year as a medical student at the Kaiser Permanente Bernard J. Tyson School of Medicine.
He holds a neuroscience degree from the University of Washington and was a research assistant in the Raskin Lab at the UW School of Medicine, working with imaging and genetic data analyses for biomedical research.
Prior to med school, Daniel pursued experiences that cultivated his interest in the application of AI in medical practice and education. Daniel and Morgan exemplify the real world future of healthcare.
A student entering his third year of medical school and a fresh medical school graduate who's starting a residency while at the same time continuing his work on investing in healthcare startups.
Here's my interview with Morgan Cheatham.
Morgan, thanks for joining. Really looking forward to this chat.
Peter, it's a privilege to be here with you.
Thank you.
So, are there any other human beings who are partners at big league venture firms, residents at a Harvard-affiliated medical center, and authors and editors for a leading
medical journal?
I mean, who's your cohort?
Who are your peers?
I love this question.
There are so many people who I consider peers that I look up to who have paved this path.
And I think what is distinct about each of them is they have this physician plus orientation.
They are multilingual in terms of knowing
the language of medicine,
but having learned other disciplines.
And we share a common friend, Dr. Zak Kohane,
who was among the first to really show
how you can meld two worlds as a physician
and make significant contributions
to the intersections thereof. I also deeply in the world of business respect physicians like Dr. Krishna Yeshwant at Google Ventures, who simultaneously pursued residency training and built what is now a large and enduring venture firm. So there are plenty of people out there who've carved their own path and become these multilingual beings, and I aspire to be one.
So, one thing I've been trying to explore with people are their origins with respect to the
technology of AI. And there's two eras for that. There's AI before ChatGPT and before generative
AI really became a big thing. And then afterwards.
So let's start first before ChatGPT.
What was your contact?
What was your knowledge of AI and machine learning?
Sure.
So my experiences in computer science
date back to high school.
I went to Thomas Jefferson, which
is a high school in Alexandria, Virginia that
prides itself on requiring students to take computer science in their first year of high school as
kind of a required torturous experience. And I remember that fondly, our final project was
Brick Breaker. It was actually, I joke, all hard-coded. So there was nothing intelligent
about the Brick Breaker that we had built. But that was my first exposure. I was a classics nerd and
I was really interested in biology and chemistry as a pre-medical student. So I really wouldn't
intersect with this field again until I was shadowing at Inova Hospital, which was a local
hospital near me. And it was interesting because at the time I was shadowing in the anesthesia
department, they were actually implementing their first instance of Epic.
And I remember that experience fondly because the entire hospital was going from analog,
they were going from paper-based charts to this new digital system.
And I didn't quite know in that moment what it would mean for the field or for my career,
but I knew it was a big deal because a lot of people had a lot of emotion around what
was going on.
And it was in that experience that I kind of decided to attach myself to the intersection of computational medicine.
And so when I got to undergrad, I was a pre-medical student. I was very passionate
about studying the sacred physician-patient relationship and everything that had to go on
in that exam room to provide excellent care. But there were a few formative experiences,
one working at a physician founded startup
that was using, at the time we called it Big Data,
if you remember, to match the right patient
to the right provider at the right time.
And it was in that moment that I realized that
as a physician, I could utilize technology
to scale that sacred one-to-one patient-provider interaction in nonlinear ways.
So that was kind of the first experience where I saw deployed systems that were using patient data and
clinical information in intelligent format. Yeah, and so you're a pre-medical student,
but you have this computer science understanding.
You have an intuition, I guess that's the right way to say it, that the clinical data becoming
digital is going to be important.
So then what happens from there to your path to medical school?
Yeah.
So I had a few formative research experiences
in my undergraduate years,
nothing that ever amounted to significant publication,
but I was toying around with SVMs for sepsis
and working with the MIMIC database early days
and really just trying to understand what it meant
that medical data was becoming digitized.
And at the same time, again, I was rather unsatisfied
doing that purely in an academic context.
And I so early craved seeing how this would roll out
in the wild, roll out in a clinical setting
that I would soon occupy.
And that was really what drove me to work
at this company called Kyruus
and understand how these systems are scaled.
And obviously that's something with AI
that we're now grappling with in a real way
because it looks much different.
So the other experience I had, which is less relevant to AI, but I did do a summer in banking
and I mentioned this because what I learned in the experience was it was a masterclass
in business.
And I learned that there was another scaling factor that I should appreciate as we think
about medicine and that was capital and business formation.
And that was also something that could scale non-linearly.
So when you married that with technology,
it was kind of a natural segue for me
before going to med school to think about venture capital
and partnering with founders who were going to be
building these technologies for the long term.
And so that's how I landed on the venture side.
And then how long of a break
before you started your medical studies?
It was about four years.
Originally it was going to be a two year deferral and the pandemic happened.
Our space became quite active in terms of new companies and investment.
So it was about four years before I went back.
I see.
And so you're in medical school.
ChatGPT happened while you were in medical school.
Is that right?
That's right.
That's right.
Right before I was studying for step one.
So the funny story, Peter, that I like to share with folks is I was, I was just embarking
on designing my step one study plan with my mentor.
And I went to NeurIPS for the first time.
And that was in 2022 when of course ChatGPT was released. And for the remainder of that fall period,
I should have been studying for these shelf exams
and getting ready for this large board exam.
And I was fortunate to partner actually
with one of our portfolio company CEOs
who is a physician, he is an MD PhD
to work on the first paper that showed
that ChatGPT could pass the US Medical Licensing Exam.
And that was a riveting experience for a number of reasons.
I joke with folks that it was both the best paper
I was ever a part of and proud to be a co-author of,
but also the worst for a lot of reasons
that we could talk about.
It was the best in terms of canonical metrics
like citations, but the worst in terms of,
wow, did we spend six months as a field thinking this was the right benchmark for how to assess the performance of
these models? And I'm so encouraged- You shouldn't feel bad that way because at that time,
I was secretly assessing what we now know of as GPT-4 in that period. And what was the first thing
I tried to do? The step one medical exam. By the way, just for our listeners who don't understand about medical education, in the US, there's a three-part exam that
extends over a couple of years of medical school. Step one, step two, step three. And
step one and step two in particular are multiple choice exams. And they are very high stakes
when you're in medical school. And you really have to have a command
of quite a lot of clinical knowledge to pass these.
And it's funny to hear you say
what you were just sharing there
because it was also the first impulse I had with GPT-4.
And in retrospect, I feel silly about that.
I think many of us do,
but I've been encouraged over the last two years, to your point, that
we really have moved our discourse beyond these exams to thinking about more robust
systems for the evaluation of performance, which becomes even more interesting as you
and I have spoken about these multi-agent frameworks that we are now compelled to explore
further.
Yeah.
Well, and even though I know you're a little sheepish about it now, I think in the show
notes we'll link to that paper because it really was one of the seminal moments when
you think about AI, AI in medicine.
And so you're seeing this new technology and it's happening at the moment when you yourself have to confront taking the step one exam.
So how did that feel?
It was humbling. It was shocking.
I had worked two years, poring over textbooks and flashcards and all of the things we do as medical students, only to see a
system emerge out of thin air that was arguably going to perform far better than I ever would,
no matter how much I was going to study for that exam. It set me back. It forced me to
interrogate what my role in medicine would be. And it dramatically changed
the specialties that I considered for myself long-term.
And I hope we talk about how I stumbled upon genetics and
why I'm so excited about
that field and its evolution in this computational landscape.
I had to do a lot of soul searching to relinquish what I
thought it meant to be a physician
and how I would adapt in this new
environment.
You know, one of the things that we wrote in our book, I think it was in a chapter that
I contributed, I was imagining that students studying for step one would be able to do it more actively. Or you could even do a pseudo
step three exam by having a conversation. You provide the presentation of a patient
and then have an encounter where ChatGPT is the patient and then you pretend to be the doctor. And then
in the example that we published, then you say end of encounter and then you ask ChatGPT for
an assessment of things. So maybe it all came too late for step one for you because you were already very focused
and had your own kind of study framework.
But did you have an influence or use
of this kind of technology for step two and step three?
So even for step one, I would say,
it dropped in November, I took it in the spring.
So I was able to use it to study.
But the lesson I learned in that moment, Peter,
was really about the importance of trust with AI
and clinicians or clinicians in training,
because we all have the same resources
that we use for these board exams, right?
UWorld is this question bank.
It's been around forever.
If you're not using UWorld, like good luck.
And so why would you deviate off of a well-trodden path
to study for this really important exam?
And so I kind of adjunctively used GPT alongside UWorld
to come up with more personalized explanations
for concepts that I wasn't understanding.
And I found that it was pretty good
and it was certainly helpful for me.
Fortunately, I was able to pass.
But I was very intentional about dogfooding AI
when I was a medical student.
And part of that was because I had been a venture capitalist and I'd made investments
to companies whose products I could actually use.
And so, you know, Abridge is a company in the scribing space that you and I have talked about.
Yeah.
I was so fortunate in the early days of their product to not just be a user,
but to get to bring their product across the hospital.
I could bring the product to the emergency department one week, to neurology
another week, to the PICU, you know, the next week and assess the relative
performance of, you know, how it handled really complex genetics
cases versus the very challenging social situations that
you often find yourself navigating in primary care.
And so not only was I emotional about this technology,
but I was a voracious adopter in the moment.
Right. And you had a financial interest then
on top of that, right?
I was not paid by Abridge to use the product,
but I, you know, I joked that the key was coming sick of me.
But you were working for a venture firm that was invested
in these, right?
So, all of these things are wrapped up together. You're having to get licensed as a doctor while doing all of this.
So I want to get into that investment and new venture stuff there,
but let's stick just for a few more minutes on medical education.
So I mentioned what we wrote in the book,
and I remember
writing the example, you know, of an encounter. Is that at all realistic? Is anything like that?
That was pure speculation on our part. What's really happening? And then after we talk about
what's really happening, what do you think should happen in medical education, given the reality of generative AI? I've been pleasantly surprised talking with
my colleagues about AI in clinical settings, how curious people are and how curious they've been
over the last two years. I think oftentimes we say, oh, this technology really is stratified by age
and the younger clinicians are using it more and
the older physicians are ignoring it.
And, you know, maybe that's true in some regards, but I've seen many senior
attendings pulling up Perplexity, GPT, more recently OpenEvidence, which has been a really
essential tool for me personally at the point of care to come up with the best decisions
for our patients.
The general skepticism arises when people reflect
on their own experience in training,
and they think, well, I had to learn how to do it this way.
And therefore you using an AI scribe
to document this encounter doesn't feel right to me
because I didn't get to do that.
And I did face some of those critiques or criticisms where you need to learn how to do it the old school way first, and then you can use an AI scribe.
And I haven't yet seen, maybe even taking a step back, I haven't seen a lot of integration of AI into the core medical curriculum period. And as you know, if you want to add something to medical school curriculum,
you can get in a long line of people
who also want to do that.
But it is urgent that our medical schools
do create formalized required trainings
for this technology because people are already using it.
I think what we will need to determine is
how much of the old way do people need to learn in order to earn the right to use AI at the point of care?
And how much of that old understanding, that prior experience is essential to be able to assess the performance of these tools
and whether or not they are having the intended outcome?
I kind of joke, it's like learning cursive, right?
Like, I'm old enough to have had to learn cursive.
I don't think people really have to learn it these days. When do I use it?
Well, when I'm signing something, and I don't really sign that much anymore.
The example I've used, which you've heard is, I'm sure you were still taught the
technique of manual palpation, even though you have access to technologies like
ultrasound. In fact, you would use ultrasound in many cases. And so I need to pin you down.
What is your opinion on these things?
Do you need to be trained in the old ways?
When it comes to understanding the architecture
of the medical note, I believe it is important
for clinicians in training to know how that information
is generated, how it's characterized, and how to go from a very broad reaching conversation to a
distilled clinical document that serves many functions.
Does that mean that you should be forced to practice without an AI scribe for the
entirety of your medical education?
No.
And I think that as you are learning the architecture of that document, you
should be handed an AI scribe.
And you should be empowered to have visits with patients, both in an analog setting where you are transcribing and generating that note.
And soon thereafter, I'm talking in a matter of weeks, working with an AI scribe.
Yeah.
That's my personal belief.
Yeah. So you're going to, well, first off, congratulations on landing a residency at
Boston Children's.
Thank you, Peter.
I understand there were only two people selected for this and super competitive. With that
perspective, how do you see your future in medicine,
just given everything that's happening with AI right now?
And are there things that you would urge,
let's say, the dean of the Brown Medical School
to consider or to change?
Or maybe not the dean of Brown, but the head of the LCME, the accrediting body
for US medical schools. What in your mind needs to change?
Sure. I'll answer your first question first and then talk about the future. For me personally,
I fell into the field of genomics.
And so my training program will cover both pediatrics
as well as clinical genetics and genomics.
And I alluded to this earlier,
but one of the reasons I'm so excited to join the field
is because I really feel like the field of genetics
not only is focused on a very underserved patient population,
but not in how we typically think of underserved.
I'm talking about underserved as in patients who don't always have answers, patients for
whom the current guidelines don't offer information or comfort or support.
Those are patients that are extremely underserved.
And I think in this moment of AI, there's a unique opportunity to utilize the computational systems that we now have access to, to provide these answers more precisely, more quickly.
And so I'm excited to marry those two fields.
And genetics has long been a field that has adopted technology.
We just think about the foundational technology of genomic sequencing and variant interpretation.
And so it's a kind of natural evolution of the field, I believe, to integrate AI and
specifically generative AI.
If I were speaking directly to the LCME, I mean, I would just have to encourage the organization
as well as medical societies who partner with attending physicians across specialties to
lean in here.
When I think about prior revolutions in technology and medicine, physicians were not always at
the helm.
We have a unique opportunity now,
and you talk about companies like Abridge
in the AI space, companies like viz.ai,
clearly, I mean, I could go on, iterative health,
I could list 20 organizations that are bringing AI
to the point of care that are founded by physicians.
This is our moment to have a seat at the table and to shape not only the
discourse, but the deployment. And the unique lens, of course, that a physician brings is that of
prioritizing the patient. And with AI and this time around, we have to do that.
So LCME, for our listeners, stands for the Liaison Committee on Medical Education.
It's basically the accrediting body for US medical schools and it's very high stakes.
It's very, very rigorous, which I think is a good thing, but it's also a bit of a straitjacket. If you are on the LCME, are there specific new curricular
elements that you would demand that LCME add
to its accreditation standards?
We need to unbundle the different components
of the medical appointment and think
about the different functions of a human clinician to answer that question.
There are a couple of areas that are top of mind for me, the first being medical
search. There are large organizations and healthcare incumbents that have been
around for many decades, companies like UpToDate or even, you know, the
guidelines that are housed by our medical societies that need to respond to the
information demands
of clinicians at the point of care in a new way with AI.
And so I would love to see our medical institutions
teaching more students how to use AI
for medical search problems at the point of care.
How to not only, from a prompt perspective,
ask questions about patients in a high efficacy way, but also to interpret the outputs of these systems
to inform downstream clinical decision making.
People are already adopting, as you know,
GPT, OpenEvidence, Perplexity, all of these tools
to make these decisions now.
And so, again, it's a moral imperative of the LCME:
by not having curriculum and support for clinicians
doing that, we run the risk
of folks not utilizing these tools properly, or to their greatest potential.
Yeah.
And then, but zooming forward, then what about board certification?
Board certification today is already transitioning to an open book format for many specialties
is my understanding.
Yeah.
And in talking to some of my fellow geneticists, you know, that's a pretty challenging board exam in clinical genetics or biochemical genetics.
They are using OpenEvidence during those open book exams. So what I would like to see us do is
move from a system of rote memorization and regurgitation of fact to an assessment framework
that is adaptive, is responsive,
and assesses for your ability to use the tools
that we now have at our disposal
to make sound clinical decisions.
Yeah.
We've heard from Sara Murray
that when she's doing her rounds,
she consults routinely with ChatGPT.
And that was something we also predicted,
especially Carey Goldberg in our book,
you know, wrote this fictional account.
Is that the primary real world use of AI,
not only by clinicians, but also by medical students,
our medical students, you know,
engaged with ChatGPT or, you know, similar?
Absolutely. I've listed some of the tools. I think, in general, Peter, there
is this new clinician stack that is emerging of these tools
that people are trying.
And I think the cycles of iteration are quick.
Some folks are using Claude one week,
and they're trying Perplexity, or they're trying OpenEvidence,
they're trying GPT for a different task.
There's this moment in medicine that every clinician
experiences where you're on rounds, and there's that very very senior attending and you've just presented a patient to them
and you think you did like an elegant job and you've summarized all the information
and you really feel good about your differential.
And they ask you like the one question you like didn't think to address.
And I'll tell you, some of the funniest moments I've had using AI in the hospital has been,
and let me take a step back, that process of attending physician, interrogating a medical
student is called pimping, for lack of a better phrase.
And some of the funniest use cases I've had for AI in that setting is actually using
OpenEvidence or GPT as defense against pimping.
So quickly while they're asking me the question, I put it in and I'm actually able to answer it right away. So it's been effective for that. But I would say the halls of most of
the hospitals where I've trained, I'm seeing this technology in the wild.
So now, you're so tech forward. But regarding that off-label use of AI, when we wrote our book, we weren't sure that at least top health systems would tolerate
this.
Do you have an opinion about this?
Should these things be better regulated or controlled
by the CIOs of Boston Children's?
I'm a big believer that transparency
encourages good behaviors.
And so the first time I actually
tried to use ChatGPT in a clinical setting, it was at a hospital in Rhode Island. I will not
name which hospital, but the site was actually blocked. I wasn't able to access it from a desktop.
The hospital's first response to this technology was, let's make sure none of our
clinicians can access it. It has so much potential for medicine. The irony of that today. And it's
since become unblocked, but I was able to use it on my phone.
So to your point, if there's a will, there's a way, and we will utilize
this technology if we are seeing perceived value.
Yeah, no, absolutely.
So now, you know, in some discussions, one superpower that seems to
be common across people who are really leading the charge here is they seem to be very good readers and students.
And I understand you're also a voracious reader. In fact, you're even on the editorial team for a major medical journal. To what extent does that help?
And then from your vantage point at New England Journal of Medicine AI,
I'll have a chance to ask Zak Kohane as the editor-in-chief the same question.
What's your assessment as you reflect over the last two years, for the submitted manuscripts, are you overall
surprised at what you're seeing, disappointed, any kind of notable hits or misses
just in the steady stream of research papers that are getting submitted to that leading journal?
I would say overall, the field is becoming more expansive
in the kinds of questions that people are asking.
Again, when we started, it was this very myopic approach
of can we pass these medical licensing exams?
Can we benchmark this technology
to how we benchmark our human clinicians?
I think that's a trap.
Some folks call this the Turing trap, right?
Let's just compare everything
to what a human is capable of.
Instead of interrogating, as you all talk about in the book, what are the unique
attributes of this new substrate for computation and what new behaviors emerge from it? Whether
that's from a workflow perspective in the back office or as I'm personally more passionate,
as we're seeing more people focus on in the literature, what are the diagnostic capabilities?
Right? I love Eric Topol's framework for machine eyes, right?
As this notion of like, yes, we as humans have eyes,
and we have looked at medical images for many, many decades.
But these machines can take a different approach
to a retinal image, right?
It's not just what you can diagnose
in terms of an ophthalmological disease,
but maybe a neurologic disease or maybe a liver disease, right?
So I think the literature is in general moving to this place of expansion and I'm excited
by that.
Yeah, I've kind of referred to that as triangulation.
You know, one of the things I think a trap that specialists in medicine can fall into,
like a cardiologist, is to see everything in terms of the cardiac system,
whereas a nephrologist will see things through a certain lens. And one of the things that you
oftentimes see in the responses from a large language model is that more expansive view.
At the same time, I wonder,
we have medical specialties for good reason.
And at times I do wonder if there can be confusion
that builds up.
This is an underdiscussed area of AI.
AI collapses medical specialties onto themselves, right?
You have the canonical example of the cardiologist, you know,
arguing that, you know, we should diurese and maybe the nephrologist arguing
that we should, you know, protect the kidneys.
And how do two disciplines disagree on what is right for the patient when,
in theory, there is an objective best answer given that patient's clinical status.
My understanding is that the emergence of medical specialties
was a function of the cognitive overload
of medicine in general and how difficult it was
to keep all of the specifics
of a given specialty in the mind.
Of course, general practitioners
are tasked with doing this at some level,
but they're also tasked with knowing
when they've reached their limit
and when they need to refer to a specialist.
So I'm interested in this question of whether medical specialties themselves need to evolve.
And if we look back in the history of medical technology, there are many times where a new
technology forced a medical specialty to evolve, whether it was certain diagnostic tools that
have been introduced, or as we're seeing now with GLP-1s, the entire cardio metabolic field
is having to really reimagine itself with these new tools.
So I think AI will look very similar
and we should not hold on to this notion
of classical medical specialties simply out of convention.
Right.
All right, so now you're starting your residency. You're basically leading
a charge in health and life sciences for a leading venture firm. I'd like you to predict what the
world of healthcare is going to look like two years from now, five years from now, 10 years from now.
And to frame that, to make it a little more specific, you know, what do you think
will be possible that you as a doctor and an investor will be able to do two years from now,
five years from now, 10 years from now that you can't do today?
Two years from now, I'm optimistic we'll have greater adoption of AI by clinicians, both for back office use cases. So whether that's the scribe and the generation of the note for billing purposes,
but also now thinking about that for patient facing applications. We're already doing this
with drafting of notes. I think we'll see greater proliferation of those more obvious
use cases over the next two years. And hopefully we're seeing that across hospital systems,
not just large, well-funded academics,
but really reaching our community hospitals,
our rural hospitals, our under-resourced settings.
I think hopefully we'll see greater conversion.
Right now we have this challenge of pilotitis, right?
A lot of people are trying things,
but the data shows that only one in three pilots are
really converting to production use.
So hopefully we'll kind of move things forward that are working and pare back on those that
are not.
We will not solve the problem of payment models in the next two years.
That is a prediction I have.
Over the next five years, I suspect that with the help of regulators, we will identify better
payment mechanisms to support the adoption of AI because it cannot
and will not sustain itself simply by asking health systems and hospitals to pay for it.
That is not a scalable solution. Yep. In fact, I think there have to be new
revenue-positive incentives if providers are asked to do more in the adoption of technology.
Absolutely.
But as we appreciate,
some of the most promising applications of AI
have nothing to do with revenue.
It might simply be providing a diagnosis to somebody
for whom that might drive additional intervention,
but may also not.
And we have to be okay with that
because that's the right thing to do.
It's our moral imperative as clinicians
to implement this where it provides value to the patient.
Over the next 10 years, what I, again, being a techno-optimist, am hopeful we start to see
is a dissolving of the barrier that exists between care delivery and biomedical discovery.
This is the vision of the learning health system that was articulated over 10 years ago.
And we have not realized it in practice.
I'm a big proponent of ensuring that every single patient
that enters our healthcare system
not only receives the best care,
but that we learn from the experiences of that individual
to help the next.
And in our current system, that is not how it works,
but with AI, that now becomes possible.
Well, I think connecting healthcare experiences
to medical discovery,
I think that that is really such a great vision for the future.
And I do agree, AI really gives us real hope
that we can make it true. Morgan, I think we could talk for a few hours more. It's just incredible what you're up to nowadays.
Thank you so much for this conversation. I've learned a lot talking to you.
Peter, thank you so much for your time. I will be clutching my signed copy of
The AI Revolution in Medicine for many years to come.
Morgan, obviously, is not an ordinary med school graduate. In
previous episodes, one of the things we've seen is that people on
the leading edge of real-world AI and medicine oftentimes are both practicing
doctors as well as technology developers. Morgan is another example of this type
of polymath, being both a med student and a venture capitalist. One thing that
struck me about Morgan is
he's just incredibly hands-on.
He goes out, finds leading edge tools and technologies,
and often these things, even though they're experimental,
he takes them into his education and into his clinical experiences.
I think this highlights a potentially important point for medical schools, and that is it might be incredibly important to provide the support and, let's be serious, the permission for students to access and use new tools and technologies. The reason is that in these early days of AI in medicine, there is no substitute for hands-on experimentation,
and that is likely best done while in medical school.
Here's my interview with Daniel Chen.
Daniel, it's great to have you here.
Yeah, it's a pleasure being here.
Well, you know, I normally get started in these conversations by asking, you know, how
do you explain to your mother what you do all day?
And the reason that that's a good question is a lot of the people we have on this podcast
have fancy titles and unusual jobs. But I'm guessing that your mother would have already
a preconceived notion of what a medical student does.
So I'd like to twist the question around a little bit
for you and ask, what does your mother not realize
about how you spend your days at school?
Or does she get it all right?
No, she is very observant.
I'll say that off the bat.
But I think something that she might not realize
is the amount of effort spent kind of outside the classroom
or outside the hospital.
She's always saying, oh, you have such long days
in the hospital. You're there so early in the morning.
But what she doesn't realize is that maybe when I come back from the hospital,
it's not just like, oh, I'm done for the day,
it's wind down, go to bed.
It's more like, okay,
Um, I have some more practice questions I need to get through.
I didn't get through my studying.
Let me wrap up this research project I'm working on, get
that back to the PI. It's never-ending to a certain extent.
Those are some things she doesn't realize.
Yeah, I think all the time studying, I think,
is something that people expect of second year
medical students.
And even nowadays at the top medical schools,
like this one, being involved in research is also expected.
I think one thing that is a little unusual is that you are actually in clinic as well
as a second year student.
How has that gone for you?
Yeah, I mean, it's, it's definitely interesting.
I would say I spend my time, especially this year,
on kind of three things.
There's the preclinical stuff I'm doing.
So that's your classic, you know, you're learning from the books,
though I don't feel like many of us do have textbooks anymore.
There's the clinical aspect, which you mentioned, which is we
have an interesting model, longitudinal integrated
clerkships, we can talk about that. And the last component is
the research aspect, right, the extracurriculars. But I think
starting out as a second year and doing your rotations,
probably early on
in kind of the clinical medical education has been really interesting, especially with
our format.
Because typically, med students have two years to read up on all the material and like get
some foundational knowledge.
With us, it's a bit more, we have maybe one year under our belt before we're thrown into
like, okay, go talk to this patient, they have ankle pain, right?
But we might not have even started
talking about ankle pain in class, right?
Well, where do I begin?
So I think starting out, it's kind of like,
you know, the classic drinking from a fire hydrant.
But you also kind of have that embarrassment
of you're talking to the patient like,
I have no clue what's happening.
Or my differentials might be all over the place, right?
But I think the beauty of the longitudinal aspect
is that now that we're in our last trimester,
everything's kinda coming together.
Like, okay, I can start to see,
here's what you're telling me,
here's what the physical exam findings are.
I'm starting to form a differential.
Like, okay, I think these are the top three things,
but in addition to that,
I think these are the next steps you should take so we can really focus and hone in on what exact diagnosis this might be.
All right. So of course, what we're trying to get into is about AI. And you know, the funny thing
about AI and the book that Carey, Zak, and I wrote is we actually didn't think too much about medical education, although we did have some parts of our book where we, well, first off, we made the guess that medical
students would find AI to be useful. And we even had some examples where you would have a vignette
of a mythical patient and you would ask the AI to pretend to be that patient.
Then you would have an interaction
and have to have an encounter.
I want to delve into whether any of that is happening,
how real it is. But before we do that,
let's get into first off your own personal contact with AI.
Let me start with a very simple question.
Do you ever use generative AI systems
like ChatGPT or similar?
All the time, if not every day.
Every day, okay.
And when did that start?
I think when it first launched with GPT-3.5,
I was curious, all my friends work in tech. They're software engineers,
PMs. They're like, hey, Daniel, take a look at this. And at first, I thought it was just more of
a glorified search engine. Looking back, my first question to ChatGPT was, what was
the weather going to be like next week? Something very easy you could have looked up on Google or your phone app, right?
Right.
Like, oh, this is pretty cool.
But then kind of fast-forwarding to, I think, the first instance I
was using it in med school, the first thing that really helped
me was actually a coding problem.
It was for research projects.
I was trying to use SQL.
Obviously, I've never taken a SQL class in my life.
So I asked ChatGPT, like, hey, can you write me this code
to maybe morph two columns together, right?
Something that might have taken me hours to Google,
search on YouTube, or try to read some documentation,
it just goes over my head.
But ChatGPT was able to not only produce the code,
but walk me through it like, OK, you're going to launch SQL, you're going to click on this menu, put the code in here,
make sure your file names are correct.
And it worked.
So it's been a very powerful tool in that way, in terms of giving
me expertise in something that maybe I traditionally had no training in.
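As a hedged illustration of the kind of task Daniel describes (combining two text columns in SQL), here's a minimal, self-contained sketch using Python's built-in sqlite3 module; the table and column names are invented for the example and are not from the episode:

```python
import sqlite3

# In-memory database standing in for a hypothetical research dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (first_name TEXT, last_name TEXT)")
conn.execute("INSERT INTO patients VALUES ('Ada', 'Lovelace'), ('Alan', 'Turing')")

# "Morphing two columns together": SQL's || operator concatenates text values.
rows = conn.execute(
    "SELECT first_name || ' ' || last_name AS full_name FROM patients"
).fetchall()
print([r[0] for r in rows])  # ['Ada Lovelace', 'Alan Turing']
```

The `||` operator is standard SQL string concatenation; some databases (e.g. MySQL) use a `CONCAT()` function instead.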
And so while you're doing this, I assume you had fellow students, friends, and others.
And so what were you observing about their contact with AI?
I assume you weren't alone in this.
Yeah, yeah.
I think, I'm not too sure in terms of what they were doing when it first came out, but
I think if we were talking about present day, a lot of it's kind of really spot on to what you guys talked
about in the book.
I think the idea around this personal tutor,
personal mentor is something that we're seeing a lot.
Even if we're having in-class discussions,
the lecturer might be saying something, right?
And then I might be, or I see a friend, in ChatGPT
or some other model looking up a question.
And then you guys talk about, you know, how it can like explain a concept at different levels.
Right.
But honestly, sometimes if there's a complex topic, I ask ChatGPT, like, can you explain this to me as if I were a six-year-old, breaking down complex topics?
Yeah.
So I think it's something that we see in the preclinical space and lecture,
but also even in the clinical space, there's a lot of teaching as well.
Sometimes if my preceptor is busy with the patients but I have maybe a
question, I would converse with ChatGPT like, hey, what are your thoughts
about this? Or, a common one is, medical doctors love to use
abbreviations, and these abbreviations are sometimes very niche and unique to their specialty, right?
And I was reading this note from a urogynecologist, and in the entire first sentence I think there were like
10 abbreviations. Obviously, I compiled the list and asked ChatGPT like, hey, in the context of
urogynecology, can you define what these could possibly mean, right? Instead of searching on Google or maybe embarrassingly asking the
preceptor. So in these instances, it's played a huge role.
Yeah, and when you're doing things like that, it can make mistakes. And so
what are your views of the reliability of generative AI, at least in the form of
ChatGPT?
Yeah, I think in the context of medicine, right?
We fear a lot about the hallucinations that these models might have.
And it's something I'm always checking for when I talk with peers about this.
We find it most helpful when the model gives us a source, linking it back.
I think the gold standard nowadays in medicine is using something called UpToDate. That's
written by clinicians, for clinicians. But sometimes searching on UpToDate can take
a lot of time as well, because it's a lot of information to sort through. But nowadays,
a lot of us are using something called OpenEvidence,
which is also an LLM, but it always backs its answers
with citations to published literature, right?
So I think being conscious of the downfalls of these models, and
also having the critical understanding to analyze the
actual literature, I think double-checking is something that we've been also getting really good at.
How would you assess student attitudes, med student attitudes about AI?
The way you're coming across is that it's just a natural part of life.
But do people have firm opinions, pro or con,
when it comes to AI, and especially AI in medicine?
I think it's pretty split right now.
I think there's the half kind of like us
where we're very cautiously optimistic
about the potential of this, right?
It's able to give us that extra information
being that extra tutor.
It's also able to give us information very quickly as well.
But I think the flip side, which a lot of students
hesitate about, and which I agree with, is this loss of the ability
to critically think.
Something that you can easily do is give these models
relevant information about the patient history
and be like, give me a 10-item differential, right?
Yeah.
And I think it's very easy as a student to say,
this is difficult, let me just use what the model says
and we'll go with that, right?
Right.
Yeah.
So I think being able to separate that, you know, medical school is a time where, you
know, you're learning to become a good doctor.
And part of that requires the ability to be observant and critically think. Having these
models simultaneously might hinder the ability to do that.
So I think, you know, the next step is: these models can be a great
tool, absolutely wonderful, but how do you make sure that they're not hindering
these abilities to critically think?
Right.
And so when you're doing your LIC work, these longitudinal experiences and
you're in clinic, are you pulling the phone out of your pocket and
consulting with AI? Definitely. And I think my own policy for this, to kind of counter this,
is that the night before, when I'm looking over the patient list of who's coming to clinic,
I'm always giving it my best effort first. Like, okay, the chief complaint is maybe just a runny nose
for a kid in a pediatric clinic.
What could this possibly be, right?
At this point, we've seen a lot, so it's like, okay,
it could be a URI, it could be viral, it could be bacterial,
and then I try to do my due diligence of going through
the history and everything like that, right?
But sometimes if it's a more complex case,
something maybe a presentation I've never seen before,
I'll still kind of do my best coming up with a differential, though it might not be amazing.
But then I'll ask, you know, ChatGPT like, okay, in addition to these ideas, what do you think?
Am I missing something? And usually it gives a pretty good response.
You know, that particular idea is something that I think Carey, Zak, and I thought would be happening a lot more today than we're observing. And it's the idea of a second set of eyes on your work. And somehow, at least our observation is that that isn't happening quite as much by today as we thought it might. And it just seems like one of the really safest and most effective use cases.
When you go and you're looking at yourself
and other fellow medical students,
other second year students, what do you
see when it comes to the second set of eyes idea?
I think a lot of students are definitely consulting
ChatGPT in that regard, because even in the very beginning,
we're taught to be like, never miss these red flags, right?
So these red flags are always on our differential.
But sometimes it can be difficult to figure out
where to place them on that, right?
So I think in addition to coming up with these differentials,
something I've been finding a lot of value in is just chatting with these tools to get the
rationale behind their thinking. Something I find really helpful, and I think this is also part of the
art of medicine, is figuring out what to order, right? What labs to order.
Obviously you have your order sets that automate some of the things like in the
ED or like there are some gold standard imaging things you should do for
certain presentations.
But then you talk to like 10 different physicians about maybe the next steps after
that, and they give you 10 different answers.
But I never understand exactly why. It's always like, I've just
been doing this for all my training, or that's how I was taught.
So asking ChatGPT, like, why would you do this next?
Or like, is this a good idea?
And seeing the pros and cons has also been really helpful in my learning.
Yeah.
Wow.
That's super interesting.
So now, you know, I'd like to get into the education you're receiving. And I think it's fair to say Kaiser Permanente is very
progressive and really trying to be very cutting edge in how
the whole curriculum is set up.
And for the listeners who don't know this,
I'm actually on the board of directors of the school
and have been since the founding of the school.
And I think one of the reasons why I was invited
to be on the board is the school really wanted
to think ahead and be cutting edge
when it comes to technology.
So from where I've sat, I've never
been completely satisfied with the amount of tech
that has made it into the curriculum.
But at the same time, I've also made myself
feel better about that, just understanding
that it's sort of unstoppable, that students are so tech
forward already. But I wanted to delve a
little bit here into what your honest opinions are, and your fellow students' opinions are, about
whether you feel like you're getting adequate training and background formally as part of
your medical education when it comes to things like artificial intelligence or other technologies.
What do you think? Would you wish the curriculum would change?
Yeah, I think that's a great question. I think from a tech perspective, the school is very good about implementing opportunities for us to learn. Like for example, learning how to use Epic, right?
Or at Kaiser Permanente, they call it KP HealthConnect, right?
These electronic health records.
That, my understanding is a lot of schools
maybe don't teach that.
That's something where we get training sessions
maybe once or twice a year, like, hey,
here's how to make a shortcut in the environment, right?
So I think from that perspective,
the school is really proactive
in providing those opportunities
and they make it very easy to find resources for that too.
Yeah, I think you're pretty much guaranteed
to be an Epic black belt by the time you finish.
Yes, you can say that.
Yes, but then I think in terms of the aspects of artificial intelligence, I think the school's
taken a more cautiously optimistic viewpoint. They're just kind of looking around right now.
Formally in the curriculum, there hasn't been anything around this topic. I believe the fourth
year students last year got a student-led lecture around this topic. But talking to other peers at
other institutions, it looks like it's something that's very slowly being built into the curriculum.
And it seems like a lot of it is actually student-led. You know, my friend at Feinberg,
it was like, we just got a session before clerkship about best practices on how
to use these tools.
I have another friend at Pitt talking about how they're leading efforts of maybe incorporating
some sort of LLM into their in-house curriculum where students can, instead of clicking around
the website trying to find the exact slide, they can just ask this tool like, okay, we had class this day, they talked about this,
but can you provide more information?
And it can pull from that.
So I think a lot of this is student driven,
which I think is really exciting.
Because it begs the question, I think:
current physicians may not be very well equipped
with these tools as well, right?
So maybe they don't have a good idea of what exactly the next steps are or
what the curriculum should look like.
So I think the future in terms of this AI curriculum is really student led as well.
Yeah, yeah, it's really interesting.
I think one of the reasons I think also that that happens is,
it's not just necessarily the curriculum that lags,
but the accreditation standards.
Accreditation is really important for medical schools
because you want to make sure that anyone who holds an MD
is a bona fide doctor.
And so, accreditation standards are pretty strictly monitored
in most countries, including the United States.
And I think accreditation standards are also, in my observation, slow to understand how to adopt
or integrate AI. And it's not meant as a criticism. It's a big unknown. No one knows exactly what to do and how to do it.
And so it's really interesting to see that,
as far as I can tell,
I've observed the same thing that you just have seen,
that most of the innovation in this area
about how AI should be integrated into medical education
is coming from the students themselves.
I'd like to think
it's a healthy development.
Something tells me maybe the students are a bit better
at using these tools as well.
I talk to my preceptors,
because KP also has their own version.
A preceptor, maybe we should explain.
Oh, yeah, sorry.
So a preceptor is an attending physician, fully licensed, finished with residency.
And they are essentially your kind of teacher in the clinical environment.
Right. So KP has their own version of some ambient documentation device as well.
And something I always like to ask is, you know, like, hey, what are your thoughts on these tools, right?
And it's always so polarizing as well, even among the same specialty.
Like if you ask psychiatrists, which I think is a great use case of these tools, right?
My preceptor hates it.
Another preceptor next door loves it.
So I think a lot of it is still like a lot of unknowns, like you were mentioning.
Right.
Well, in fact, I'm glad you brought that up because one thing that we've been hearing from
previous guests a lot when it comes to AI in clinic is about ambient listening by AI, for example,
to help set up a clinical note or even write a clinical note. And another big use case that we heard a lot about that
seems to be pretty popular is the use of generative AI
to respond to patient messages.
So let's start with the clinical note thing.
First off, do you have opinions about that technology?
I think it's definitely good.
I think especially where, you know,
if you're in the family medicine environment
or pediatric environment where you're spending
so much time with patients,
having the note written for you like that is great, right?
I think coming from a strictly medical student standpoint,
I think it's, honestly, it'd be great to have,
but I think there's a lot of learning when
you write the note, you know. All my
preceptors talk about, like, when I read your note, it should be presented in a way
where I can see your thoughts, and then once I get to the assessment and plan,
it's kind of funneling down towards a single diagnosis or a handful of
diagnoses. And that's, I think, a skill that requires you to practice over time, right?
So a part of me thinks, like, if I had this tool that could just automatically give me
a note as a first-year, then it takes away from that learning experience, you know.
Even during our first year throughout school, we frequently get feedback from professors and doctors
about these notes.
And it's a lot of feedback.
It's like, I don't think you should have written that,
that should be in this section, you know,
like a medical note or a SOAP note,
where, you know, the subjective is like
what the patient tells you,
objective is what the physical findings are,
and then your assessment of what's happening, and then your plan. It's very particular,
and I think medicine is so structured in a way that's kind
of like how everyone does it, right? So kind of going back to the question, I
think it's a great tool, but I don't think it's appropriate for a medical
student. Yeah, it's so interesting to hear you say that. One of our previous guests is the head of R&D at Epic, Seth Hain.
He said, you know, Peter, doctors do a lot of their thinking when they write the note.
And of course, Epic is providing ambient clinical note taking automation.
But he was urging caution because, as you're saying, well,
this is where you're learning a lot.
But actually, it's also a point where as a doctor,
you're thinking about the patient.
And we do probably have to be careful with how
we automate parts of that.
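For readers unfamiliar with the format Daniel describes, the SOAP note's four sections can be sketched as a minimal data type; the field contents below are a hypothetical illustration, not a real patient or an actual note from the episode:

```python
from dataclasses import dataclass

@dataclass
class SOAPNote:
    """The four canonical sections of a SOAP note."""
    subjective: str  # what the patient tells you (history, chief complaint)
    objective: str   # physical exam findings, vitals, labs
    assessment: str  # the clinician's synthesis: differential, likely diagnosis
    plan: str        # next steps: tests, treatments, follow-up

# Hypothetical example for a pediatric runny-nose visit:
note = SOAPNote(
    subjective="Two days of runny nose; no fever reported at home.",
    objective="Afebrile, lungs clear, mild nasal congestion.",
    assessment="Most consistent with a viral URI.",
    plan="Supportive care; return if fever develops or symptoms worsen.",
)
print(note.assessment)  # Most consistent with a viral URI.
```

The fixed ordering is the point Daniel's preceptors make: the reader should be able to follow the writer's reasoning from subjective through plan.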
All right, so you're gearing up for Step 1 of the USMLE. That'll
be a big multiple-choice exam. And then Step 2 is similar, very, very focused on advanced
clinical knowledge. And then Step 3 is a little more interactive. And so one question that people have had about AI is, you know, how do we regulate the use
of AI in medicine?
And one of the famous papers that came out of both academia and industry was the concept
that you might be able to treat AI like a person and have it go through the same
licensing. And this is something that Carey, Zak, and I contemplated in our book. In the end,
at the time we wrote the book, I personally rejected the idea, but I think it's still alive. And so I've wondered: first
off, are you opinionated at all about what the requirements should be for
the allowable use of AI in the kind of work that you're going to be doing?
Yeah, I think it's a tough question because, like, where do you draw that line,
right?
If you apply the human standards,
oh, it's passing exams, then yes, in theory,
it could be maybe a medical doctor as well, right?
It's more empathetic than medical doctors, right?
So where do you draw that line?
I think, you know, part of me thinks
maybe it is that human aspect
that patients like to connect with, right?
And maybe these tools really are just
aids that help, you know,
offload some of the cognitive load, right?
But the other part of me
is thinking about the next generation,
who are growing up with this technology, right?
They're interacting with applications all day.
Maybe they're on their iPads. They're talking to chat bots.
They're using chat GPT.
This is kind of the environment they grew up with.
Does that mean they also have increased trust
in these tools that maybe our generation,
or the generations above us
who value that human connection, don't have?
Would they value human connection less?
I think those are some troubling thoughts that, yes,
at the end of the day, maybe I'm not as smart as these tools,
but I can still provide that human comfort.
But if at the end of the day, the future generation
doesn't really care about that, or they perfectly
trust these tools because that's all they've kind of known,
then where do human doctors stand?
I think part of that is there would be certain specialties
where maybe the human connection is more important.
The longitudinal aspect of building that trust,
I think is important.
Family medicine is a great example.
I think hematology, oncology with cancer treatment.
Obviously, I think anyone's not gonna be thrilled
to hear a cancer diagnosis, but something
tells me that seeing that on a screen, versus maybe a physician prompting you and telling
you about it, means that maybe in those aspects, the human nature, the human touch,
plays an important role there too. Yeah. I think it strikes me that it's going to be your generation that really is going to set the
pattern probably for the next 50 years about how this goes. And it's just so interesting because
I think a lot will depend on your reactions to things.
So for example, you know,
one thing that is already starting to happen is patients
who are coming in armed, you know, with a differential
that they've developed themselves
with the help of ChatGPT.
So let me, you must have thought about these things.
So in fact, has it happened in your clinical work already? Yeah, I've seen people come into the ED, the emergency
department, during my ED shift, and they'd be like, oh, I have neck pain, and here are
all the things that, you know, ChatGPT told me. What do you think?
I want this lab ordered, that lab ordered.
And I think my initial reaction is, great, maybe we should do
that. But I think the other reaction is understanding that not everyone
has the clinical background to understand what's most important.
What do we need to absolutely rule out, right? So I think in some regards,
I would think that maybe ChatGPT
errs on the side of caution,
giving maybe patients more extreme examples
of what this could be,
just to make sure that, in a way,
it's not missing any red flags as well, right?
But I think a lot of this, what we've been learning, is that it's all
about shared decision-making with the patient, right? Being able to acknowledge, like, yeah,
that list, most of the stuff is very plausible. But maybe you didn't think about this one symptom
you have. So I think part of it, maybe it's a sidebar here, is the idea of prompting, right?
People always talk about all these prompt engineers: how well can you
give it context to answer your question? So I think being able to give these models the
correct information and the relevant information, and the key word is relevant, because relevant is, I guess,
where your clinical expertise comes in. Like, what do you give the model, what do you
not give? So I think that difference between a medical
provider versus maybe your patients is ultimately the
difference.
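Daniel's point about relevance can be sketched in a toy snippet; the record fields and the relevance list below are hypothetical, and the only idea illustrated is that clinical judgment decides which context reaches the model:

```python
# Hypothetical patient record; a layperson might paste all of it into a chatbot.
record = {
    "age": 54,
    "chief_complaint": "neck pain",
    "fever": False,
    "favorite_color": "blue",  # irrelevant detail the model doesn't need
}

# The clinical expertise lives in this choice of what to include.
RELEVANT = ["age", "chief_complaint", "fever"]

context = "; ".join(f"{key}: {record[key]}" for key in RELEVANT)
prompt = f"Given this context ({context}), what differentials should I consider?"
print(prompt)
```

The filtering step is trivial code; deciding what belongs in `RELEVANT` for a given presentation is the hard, clinician-specific part.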
Let me press on that a little bit more because you brought up
the issue of trust and trust is so essential for patients to feel good about their medical care. And I can imagine,
you're a medical student seeing a patient for the first time, so you don't have a trust
relationship with that patient. And the patient comes in, maybe trusting ChatGPT more than you.
Very valid. No, I mean, I get that a lot, surprisingly. You know, sometimes
they're like, oh, I don't want to see the medical student, because we
always give the patient an option, right?
It's their time, whether it's a clinic visit.
But yeah, with those patients, I think it's perfectly reasonable.
If I heard a second year medical student was going to be part of my care team,
taking that history, I would be maybe a little bit concerned too.
Like, are they asking all the right questions?
Are they relaying that information back to their attending physician correctly?
So I think a lot of it, at least from a medical student perspective, is framing it
so the patient understands that this is a learning opportunity for
the student. And something I do a lot is tell them, like, hey, at the
end of the day, there is someone double-checking all my work. But for
those that come in with a list, I sometimes sit down with them and
we'll have a discussion, honestly, like, I just don't think you
have meningitis because, you know, you're not having a fever.
Some of the physical exam maneuvers we did were also negative.
So I don't think you have anything to worry about that.
So I think it's having that very candid conversation with the patient that helps build that initial
trust.
Telling them like...
It's impressive to hear how even-keeled you are about this. You know, of course, you're being very humble, saying, well, as a second-year medical student, of course, someone might not have complete trust. But I think we will be entering a world where no doctor, no matter how experienced or how skilled, is going to be immune from this issue.
So we're starting to run toward the end of our time together.
And I like to end with one or two more provocative questions.
And so let me start with this one.
Undoubtedly, I mean, you're close enough to tech and digital stuff, digital health, that you're familiar with famous predictions by Turing Award winners and Nobel laureates that someday certain medical specialties, most notably radiology, would be completely supplanted by machines. And more recently, there have been predictions by others, like Elon Musk, that maybe even some types of surgery would be replaced by machines.
What do you think? Do you have an opinion?
I think replace is a strong term, right?
To say that doctors are completely obsolete,
I think is unlikely.
If anything, I think there might be a shift maybe
in what it means to be a doctor, right?
Undoubtedly, maybe the demand for radiologists is going to go down because maybe more of the simple things can truly be automated, right? And you just have a supervising radiologist whose output is maybe 10 times that of a single radiologist, right?
So I definitely see a future where the demand of certain specialties might go down.
And I think when I talk about a shift in what it means to be a physician, maybe it's not so much diagnostic anymore, right? If these models get so good at just taking in large amounts of information, then maybe it pivots to being really good at understanding the limitations of these models and knowing when to intervene. That's what it means to be kind of the next generation of physician.
I think in terms of surgery, yeah, I think it's a concern, but maybe not in the next 50 years.
Like those DaVinci robots are great.
I think out of Mayo Clinic, they were demoing some videos of these robots leveraging computer vision to close port sites, like laparoscopic incisions. And that's something I do in the OR, right? And we're at about the same level at this point.
So at that point, maybe,
but I think robotics still has to address the question of, like, what if something goes wrong, right? Who's responsible? I don't see a future where a robot is able to react to these, you know, dangerous situations when something goes wrong. You still have to have a surgeon on board to kind of take over. So in
that regard, that's kind of where I see maybe the future going. So last question. You know,
when you are thinking about the division of time, one of the themes that we've heard from previous guests is that more and more doctors are doing technology work, like writing code and so on. And more and more technologists are thinking deeply and getting educated in clinical and preclinical work.
So for you, let's look ahead 10 years.
What do you see your division of labor to be?
Or, you know, what would you tell your mom then about how you spend a typical day?
Yeah.
I mean, I think for me, technology is something I definitely want to be
involved in, in my line of work.
Whether it's AI work, whether it's improving the quality of healthcare through technology, my perfect division would be maybe still being able to see patients, but also balancing maybe more of these higher-level, larger projects. I think having that division would be nice.
Yeah, well, I think you would be great just from the little bit I know about you.
And, Daniel, it's been really great chatting with you.
I wish you the best of luck with your upcoming exams and getting through year two of your medical studies, and perhaps someday I'll be your patient.
Thank you so much.
You know, one of the lucky things about my job is that I pretty regularly get to
talk to students at all levels spanning high school to graduate school. And when I get to talk, especially to med students, I'm always impressed with their intelligence, just how serious they are, and their high energy levels.
Daniel is absolutely a perfect example of all that. Now it comes across as trite to say that
the older generation is less adept at technology adoption than younger people,
but actually there probably is a lot of truth to that. And in the conversation
with Daniel, I think he was actually being pretty diplomatic but also clear
that he and his fellow med students don't necessarily expect the professors in their med school
to understand AI as well as they do.
There's no doubt in my mind that medical education will have to evolve a lot
to help prepare doctors and nurses for an AI future.
But where will this evolution come from?
As I reflect on my conversations with Morgan and Daniel, I
start to think that it's most likely to come from the students themselves. And
when you meet people like Morgan and Daniel, it's impossible not to be incredibly optimistic about the next generation of clinicians. Another big thank you to Morgan and Daniel for taking time to share their experiences
with us.
And to our listeners, thank you for joining us.
We have just a couple of episodes left, one on AI's impact on the operation of public
health departments and healthcare systems, and another co-author roundtable. We hope you'll continue to tune in. Until next time.