Today, Explained - Paging Dr. ChatBot
Episode Date: October 26, 2025. Patients and doctors both are turning to AI for help with diagnosing ailments and managing chronic issues. Should we trust it? This episode was produced by Hady Mawajdeh, edited by Jenny Lawton, fact-checked by Melissa Hirsch, engineered by Adriene Lily and Brandon McFarland, and hosted by Jonquilyn Hill. Image credit Vithun Khamsong/Getty Images. If you have a question, give us a call on 1-800-618-8545 or send us a note here. Listen to Explain It to Me ad-free by becoming a Vox Member: vox.com/members. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
When you're with Amex Platinum,
you get access to exclusive dining experiences and an annual travel credit.
So the best tapas in town might be in a new town altogether.
That's the powerful backing of Amex.
Terms and conditions apply.
Learn more at Amex.ca.
This episode is brought to you by Peloton.
A new era of fitness is here.
Introducing the new Peloton Cross Training Tread Plus, powered by Peloton IQ, built for breakthroughs
with personalized workout plans, real-time insights, and endless ways to move.
Lift with confidence, while Peloton IQ counts reps, corrects form, and tracks your progress.
Let yourself run, lift, flow, and go.
Explore the new Peloton Cross Training Tread Plus at OnePeloton.ca.
I didn't know what to do, so I turned to ChatGPT.
How do you integrate this in a way that retains what is best about medicine?
They all say that healthcare is a sweet spot for AI.
This is Explain It to Me from Vox.
I'm Jonquilyn Hill.
A couple weeks ago, I went to the doctor.
And there was a moment during the appointment that really surprised me.
She turned her computer monitor towards me,
and there on the screen was this colorful dashboard with all kinds of numbers and percentages.
She explained that she'd entered my information into a database with millions of other patients,
and that database used AI to predict my most likely outcome.
There it was.
A snapshot of my future.
Or at least maybe my future.
Usually, I'm skeptical when it comes to AI, but I do trust my doctors.
So if I trust them, I should trust this technology, too, right?
It turns out a lot of you already do.
I have used ChatGPT to diagnose myself.
ChatGPT cured my acne.
ChatGPT has actually helped me navigate the disease better than most of the doctors.
I found out the gender of my second baby by using ChatGPT.
ChatGPT is honestly the most calming, reassuring voice of like, hey, great question.
Today on Explain It to Me: Paging Dr. Chatbot.
How AI is shaping the way we get medical care.
We'll cover the do's and don'ts of self-diagnosis, how medical professionals are using these tools,
and hear a doctor make the case for why AI is the key to a more human experience at the doctor's office.
Full disclosure, Vox Media has a partnership with OpenAI.
To start, I had to make an appointment with a doctor for an interview.
My name is Dhruv Khullar. I'm a physician at Weill Cornell Medicine in New York.
I'm also a health services researcher here, as well as a writer at the New Yorker magazine.
One of the things he's written about for the New Yorker: medical care and AI.
Okay, so as a doctor, what do you think about folks who are self-diagnosing with AI chatbots?
Part of me feels like this is a natural thing that's going to happen, particularly in a system as difficult to access as ours is and as difficult to navigate as ours is.
And AI is so fluent and so persuasive that it makes a lot of sense that people are starting to enter their symptoms into these chatbots and try to get
diagnoses. But there are also real risks if you over-rely on AI.
I mean, these things are not infallible. They can give you misleading or incorrect medical
information. You can give it a prompt, and it will give you something back that's extremely
convincing, and it's completely wrong. The GPT's job is to convince you that it's right. You should
be careful. My worst fears are that we, the field, the technology, the industry, cause significant harm.
You know, one of the things that's so interesting about these chatbots is that they're not
like a COVID test or an MRI where you get the answer that you get.
I mean, how accurate these chatbots are really depends on how you're prompting them.
And so in the piece, I talked to this particular chatbot that's called CaBot, which is a chatbot
that was developed at Harvard.
That's not in clinical use.
It's more of a research tool.
But it can perform exceptionally well, kind of almost in a superhuman way, on these
specific, very challenging, complex clinical cases that are curated in a perfect way.
But the way that these chatbots perform depends on how the information that you give it is organized.
So if you give certain broad strokes or you don't emphasize the right details, you could get a very
different and possibly incorrect diagnosis.
You know, I will mention there was a recent survey that found that something like
one in five, around 20 percent, of Americans said that they had turned to a chatbot for advice
that later turned out to be incorrect.
And so certainly there's a lot of incorrect information
that's coming out of them.
I'm going to be honest,
if I'm sick or worried about how I'm feeling,
I have gone to Dr. Google.
You know, I have been in those WebMD trenches.
For those that are using AI,
are there things that they should do
or shouldn't do to get the most accurate information?
Sure.
I mean, first of all, you're not alone.
I mean, a lot of people for years have been using Google and now AI is kind of the latest iteration.
And I think it can be potentially revolutionary and transformative for people if they use it in the right way.
I don't think the right way is just to put in your symptoms and ask for a diagnosis.
At least I don't think that's the right thing to do right now.
But there are really important ways that people can use it for benefit.
You know, if you have symptoms: asking the AI to rate the urgency of those symptoms, listing possible conditions that could
explain them, and giving some sense of which conditions might be most likely. I think it might be helpful
for people to ask about red flag symptoms; those are warning signs that suggest that you might have
a more serious condition. If you've gone to the doctor and you have lab results or clinic visits,
AI might be able to walk you through those lab results in greater detail. And it might be able
to help you prepare questions for your next visit. And so in all those ways, I think it can be a really
helpful adjunct to the way that people are currently receiving care. Okay, so you're saying it may not be
a 100% bad idea to use ChatGPT to interpret what your doctor's telling you? No, I don't think so at all.
And, you know, part of the challenge here is that health care is so resource constrained in a lot of
ways. There are such enormous time pressures; doctors and nurses don't always have the time and
attention to explain every diagnosis and treatment in the level of detail that we might want.
And AI does have unlimited time and attention in a way.
It can explain things at whatever level of sophistication you need.
It can help people navigate through the medical system.
It can help patients with limited access to care.
Some people are already starting to use it in an interesting way.
You know, I've spoken to patients who are now trying to record or asking if they can record
their conversations with physicians and then uploading those transcripts into ChatGPT
to try to have it explain what happened in that visit in greater detail and then kind of
continually ask questions, probe for more information. And that has been really helpful for a number
of patients that I've spoken with. But there are also challenges. I mean, these AIs still hallucinate.
They still make things up. They may mix up one patient for another. I spoke to a woman who, you know,
her own medical conditions were being confused with those of her mother. And it became a kind of really
confusing situation when she was speaking with a chatbot. And so because these chatbots are so
fluent and they're so persuasive in a way, it can make it challenging to figure out when they're
actually being inaccurate. And so that's kind of the note of caution that I want to sound as well.
We got a call about that, not from a patient, but from a doctor. They're worried about how
everyday people are using chatbots to diagnose themselves. I work as an ER doctor, and I have noticed
a lot more patients coming in after having talked to ChatGPT, trying to figure out what's going on
with them. And on the one hand, I find people are asking really great questions. They're often
self-educating tremendously. On the other hand, I sometimes feel like the things they're bringing
up are kind of random and hard to encourage people that they're going to be okay or to
convince them that I think that's really unlikely. There's a lot of anxiety in the air, and I think
ChatGPT sometimes makes that worse.
So is this a problem you've dealt with?
You know, it's a real challenge.
In a way, it's a more sophisticated version of what Dr. Google has put out there for the past
few decades.
And, you know, when you look up your symptoms online, there's often a range of potential
diagnoses that are listed.
And it's only natural for the human mind to gravitate towards the most concerning or
the most dangerous ones.
Those are the ones that represent the greatest threat.
And the challenge with using these things is that they don't come with a lot of context.
They don't have the context that you might have if you came to those medical diagnoses
in a clinical setting with a physician or another clinician.
And so there's this challenge of helping people actually understand the context around the words
and the diagnoses that they're learning about online.
But there's also this challenge of, you know, is AI going to steer people away from medical
attention?
So in the piece, I note this poison control center in Arizona that reported a drop
in the overall call volume that they were getting,
but a rise in severely poisoned patients.
And the suggestion here was that the AI tools
could have steered people away from needed medical attention.
And so this is another part of the challenge
that people are starting to encounter.
What can't Dr. Chatbot tell us?
You know, like, what is it that doctors can do
for patients that chatbots can't?
Right now, there's a lot that doctors do that chatbots can't. I mean, they're not reasoning clinically in the way
that a doctor is reasoning. They're not able to come to the same judgments and integrate
patients' values and preferences and circumstances in the way that a physician is, you know,
managing pain or talking to families, helping people understand their options, guiding them
through the tradeoffs that occur in any medical setting. So as helpful as these AI technologies can
be, they're only going to be part of the solution, at least for the foreseeable future.
When we get back, Dhruv is going to stick around and we'll ask him about how AI is changing
the way doctors are practicing medicine.
Wondery and their new podcast, Lawless Planet.
It unfolds almost like a true crime podcast, I've been asked to tell you,
but it is about the global climate crisis: complex stories,
wide-ranging, happening in every corner of the planet.
On Lawless Planet, the new podcast from Wondery,
you will hear stories from the depths of the Amazon to small town America.
Host Zach Goldbaum takes you around the world as he investigates stories of conflict,
corruption, resistance, and highlights activists risking their lives for their beliefs, corporations shaping the planet's future, and the everyday people affected along the way.
Each episode takes you inside the global struggle for our planet's future: mysterious crimes,
high-stakes operations, billion-dollar controversies.
To reveal what's truly at stake, you can follow Lawless Planet on the Wondery app or wherever you get your podcasts.
You can listen to new episodes early and ad-free right now by joining Wondery Plus in the Wondery app,
Apple Podcasts, or Spotify.
We're back. This is Explain It to Me. I'm Jonquilyn Hill. Before the break, we heard from Dr. Dhruv Khullar about how folks are using AI to help them understand their symptoms, come up with treatments, and even talk to their doctors. And those doctors are consulting AI too. They've got their own chatbots, which are trained on medical research and patient data, and even suggest
their own diagnoses. And some physicians are listening. I work in a hospital in an emergency
department. And one of the cool things about being an ED doc is that you never know what you're going
to see. And patients come in with a question for you. And you've got to kind of be the person that
gives them an answer and gives them next steps in terms of a solution. And so there's been a couple
really helpful times where I've typed in a patient's symptoms, for example, patient coming in
with abnormal lab values and a little bit of their history, and then it helping me be more
confident in what I think the diagnosis is. Okay, Dhruv, how common is that? Does that sound like
something you hear a lot? You know, I think this is one of the fastest uptakes of any technology
that I've seen in medicine, certainly since I've started practicing. So many of my colleagues now
turn to generative AI models, other forms of predictive analytics, to make decisions about
the patients that they're caring for. And I think these things are going to be incredibly powerful,
I think best used as a really good second opinion to try to get a consultant's advice,
basically in any specialty at any time. You know, you can put in a patient's symptoms. It might
remind you of certain diagnoses, raise rare diagnoses that you haven't seen in months or
years and give you expertise and support that wouldn't otherwise be possible.
And I think this really needs to be balanced with something else that we're starting to see,
which is this idea of cognitive deskilling.
Not only does AI make it so that you're not learning those skills, new research suggests
that it's also making you unlearn those skills that you previously knew.
You know, if you're not doing the critical thinking of going through a patient's case,
understanding their problems, using kind of your own judgment to arrive at a diagnosis,
what happens to the skills that doctors have? You know, there's evidence already that doctors can get
deskilled pretty quickly. The doctor's baseline performance got worse after they got used to using
AI, which creates a risk if the AI fails, if it's unavailable or it just misses something.
And so the question then becomes, you know, in a future in which AI basically
pervades medicine and is extremely effective and useful, is it a big deal that we've lost some of the
skills that we used to have? In the past, doctors were probably better at listening to heart murmurs
and doing certain physical exams. And now we have technology like echocardiograms or CT scans that can
replace that. I don't think people feel like we've had a huge loss there. But I do think there's
something distinct about the critical thinking that goes into diagnostic work. So I want to be
very careful that, you know, we really use this more as the second opinion,
rather than generating the initial kind of set of thinking using AI.
Yeah, I wonder how common this use is because, you know, we have shows like House.
Differential diagnosis, people.
Or ER.
It could be hyperaldosteronism or Bartter syndrome.
God, I put that damn book down.
Or The Pitt.
So he's not coming back.
No.
What happens now?
He's hooked up to all those machines.
Take some time,
try to process this news.
Personally, I'm a Pitt head.
I love that show.
And, you know, a quote-unquote good doctor flexes their brain and, you know, maybe uses some books.
But I don't know.
Is using AI, I guess, for lack of a better word, is it cheating?
No, I don't think it's cheating.
I think the challenge, again, is how do you maintain critical thinking skills while offloading cognitive work that
can be done by machines.
And one way, you know, I've started to think about this,
and this is a concept that came from a physician
that I interviewed for the piece, Dr. Gurpreet Dhaliwal at UCSF.
And he told me, you know,
we shouldn't be thinking necessarily about AI
as solving the medical diagnosis.
It's better to think of AI as a partner
in what he called wayfinding, you know,
assisting doctors and patients along the diagnostic journey.
And that might involve alerting doctors
to a recent study, proposing a helpful
blood test that, you know, could be used to aid in diagnosis, looking up a lab result that
happened to be in the medical record from decades ago. You know, there's a real difference
between, you know, getting the right answer and actually competently caring for people
along their medical journey. Okay, we've talked a lot about the doctor-patient relationship,
but health care is a lot more than that. Where else is AI showing up? There's a lot of ways that
people are trying to use AI in medicine. I think the first area that's going to have a big impact
is on the administration of healthcare that has to do with, you know, entering things in the medical
record, capturing diagnoses of patients who are coming in, writing orders, helping people navigate the
medical system. So all these administrative tasks are in some ways the low-hanging fruit of medicine
that rack up a lot of costs. Another area is kind of prediction or personalization. What does this
new guideline mean? How likely is this medication to be effective? Should you use this treatment
or not, you know, this procedure, does that make sense for you?
So I think AI can do a lot in terms of personalization and prediction of both risk and benefit
from particular medications.
And then there's this whole area that we haven't talked about yet, which is around drug
discovery and development.
I think there's a tremendous amount of potential for AI to supercharge drug discovery
so that in a handful of years, we have a lot more options and potentially the options
for conditions that thus far are incurable or very difficult to treat.
And so, you know, at least right now, I think the way in which AI can be most
helpful is in helping people prepare for their interactions with the medical system
and hopefully making those more seamless.
So the machines are here and there's an argument that they could actually make our
relationships with our doctors more human.
We'll hear that next.
You know what's better than the one big thing?
Two big things.
Exactly.
The new iPhone 17 Pro on TELUS' five-year rate plan price lock.
Yep, it's the most powerful iPhone ever, plus more peace of mind with your bill over five years.
This is big.
Get the new iPhone 17 Pro at telus.com slash iPhone 17 Pro on select plans.
Conditions and exclusions apply.
Fly Transat, by the seven-time world's best leisure airline champions, Air Transat.
You're listening to Explain It to Me.
Well, I love seeing patients.
I really like to listen
and help them as much as I can.
And that's what medicine's all about.
That's what drew me in 40 years ago.
Dr. Eric Topol is a physician-scientist at Scripps Research.
He also founded the Scripps Research Translational Institute,
which means he thinks a lot about the ways technology can advance medicine.
And he's worried that the personal aspect of medicine is slipping away.
I think most people are familiar with the tremendous erosion of this patient-doctor relationship,
because we're talking about seven minutes for a routine follow-up visit or 12 minutes for a new patient.
Very limited time.
That time is often lost, as far as face-to-face contact goes, to typing into a keyboard and looking at screens
rather than being face-to-face, eye-to-eye with patients.
And then, of course, there's a data clerk function of doing all the records and ordering of tests
and prescriptions and pre-authorizations
that each doctor's saddled with after the visit.
So it's a horrible situation
because the reason we went into medicine
was to care for patients.
And you can't care for patients
if you can't even have enough time with them,
listen to them, you know, really be present,
have trust,
and basically have what used to be,
back in the 70s and 80s,
a precious, intimate relationship.
So we don't have that now by and large,
and we've got to get that back.
Yeah, what caused that change?
Why did that shift happen in that relationship
between patient and doctor?
If I were to simplify it into three words,
it would be the business of medicine.
And basically, the squeeze was on
to see more patients in less time
to make the medical practice money.
You've literally written
a book about how AI can transform health care and make it human again. Can you explain
that idea? Because my first thought when I hear AI in medicine is not, oh, this will fix it and make
it more intimate and personable. Who would have the audacity to say technology could make us more
human? Well, that was me, and I think we are seeing it now. So the gift of time will be given to us
through technology. Now, I'll walk through a few examples. One is that we can capture the conversation
with AI ambient natural language processing, and we can make a better note than has ever
been made by doctors from that whole conversation. And now we're seeing some really good products
that do that. And they don't just capture the note, with audio links for the patient in case
there was any confusion or something was forgotten during the discussion. They also do all these
things to get rid of data clerk work so that when the two get together, they really are getting
together. And I think we can, even with the physician shortage that we have today, we can leverage
this technology to make it much more efficient, but also much more human-to-human bonding.
Do you worry at all that, you know, if that time gets freed up, if it's like, okay, we have less administrative tasks and more time to spend on patients, like what's going to keep administrators from saying, all right, well, then you've got to see more patients. It's the same amount of time or you've got to go even faster, you know?
Well, yeah, I have been worried about that. That's exactly what could happen.
AI could make us more efficient and productive. So, oh yeah, see more patients, read more scans and slides and whatnot. So, no, we
have to stand up for patients and for this relationship. And this is our best shot to get us back
to where we were or even exceed that. Yeah. I also wonder, you know, because there are so many
issues that come up in medicine, and I think about bias in health care, I wonder how you think of
that factoring into AI. Because on one hand, I can see, like, okay, it's taking that out. But
AI learns from human models, and humans have bias. How do you see that?
Yeah, so step number one is to acknowledge there's deep-seated bias. It's a mirror of our
culture and society. However, we've seen so many great examples around the world where
AI is being used in the hinterlands, in, you know, low socioeconomic, low access populations, to give
access and help promote better health outcomes, whether it be in Kenya,
for panda health or for diabetic retinopathy
in people that never had that ability to be screened,
or mental health in the UK for underrepresented minorities.
So you can use AI if you deliberately want to help reduce inequities
and try to do everything possible to interrogate a model about potential bias.
You talked about the disparities that exist. And in our country,
if you have a high income,
you can get some of the best medical care
in the world here.
And if you do not have that high income,
there's a good chance that you're not getting
very good health care.
Are you worried at all that AI could deepen that divide?
You know, the people with money
will have access to almost this kind of super doctor,
and those without
will have to rely on chatbots instead,
or, you know, something like that.
I am worried about that.
And we have a long history of not using technology to help people who need it the most.
So many things we could have done with technology we haven't done.
It's just going to be the time when we finally wake up and say,
it's much better to give everyone these capabilities to reduce the burden that we have on the medical system,
if you call it a system, to help care for patients.
So other countries will get ahead of us on that, JQ.
I mean, I think that's the issue: that's where we should be, making it level for all people.
To me, that's the only way that we should be using AI, making sure that the people who would benefit the most are getting it the most, right?
But we're not in a very good structural framework for that.
I hope we'll, you know, finally see the light.
What makes you so hopeful?
I mean, I consider myself an optimistic person, but sometimes it's very hard to be optimistic about health care in America.
It is.
I would be the first to acknowledge that.
But remember, we have 12 million errors a year, diagnostic errors that are serious with 800,000 people dying or getting disabled.
That's a real problem.
We need to fix that.
And we have lots of ways to get to much higher levels of accuracy.
So for those who are concerned about AI making mistakes, well, guess what?
We've got a lot of mistakes right now that can be improved.
I have tremendous optimism.
I recognize there are challenges.
But, you know, if there's a better way to fix medicine, I don't know of it.
So it's going to take time.
We're still in the early stages of all this.
But I am confident we'll get there.
We won't even talk about AI and medicine.
It'll all be embedded.
It'll just be part of the practice of medicine
and someday we'll all be appreciative of it.
That was Dr. Eric Topol of Scripps Research.
Speaking of health care, open enrollment is coming up.
Insurance can be really confusing, especially right now.
Call in with your questions about insurance and FSAs and HRAs and PPOs and vision
and why it all works the way it does.
We'll decode it for you.
Or if you feel like you can't afford insurance
with all these upcoming increases,
we want to hear about that too.
1-800-618-8545.
You can also email us at AskVox at Vox.com.
If you like this and other Vox podcasts,
you can help make this work happen
by becoming a Vox member.
When you become a member,
you get to listen to this show,
ad-free, and you also get a ton of other perks.
Right now, we're having a sale on membership, which means you can get 30% off.
Just go to Vox.com slash members, and the deal is all yours.
This episode was produced by Hady Mawajdeh.
It was edited by Jenny Lawton, and our executive producer is Miranda Kennedy.
Fact-checking was by Melissa Hirsch, with engineering by Adriene Lily and Brandon McFarland.
Special thanks to Lauren Mapp.
I'm your host, Jonquilyn Hill.
Thanks so much for listening.
Talk to you soon.
Bye.
Rinse takes your laundry and hand delivers it to your door,
expertly cleaned and folded.
So you could take the time once spent folding and sorting and waiting
to finally pursue a whole new version of you.
Like tea time you.
Or this tea time you.
Or even this tea time you.
Say, did you hear about Dave?
Or even tea time, tea time, tea time you.
Mmm.
So update on Dave.
It's up to you.
We'll take the laundry.
Rinse.
It's time to be great.
