3 Takeaways - I’m a Doctor. ChatGPT’s Bedside Manner Is Better Than Mine. (#223)

Episode Date: November 12, 2024

For better and for worse, artificially intelligent communication is inexorably making its way into medical care. How will this affect the doctor-patient relationship? Can AI convey human empathy and emotion? What will the impact be on your health? According to Dr. Jon Reisman, there's no turning back. Listen, and learn what the future will hold.

Transcript
Starting point is 00:00:00 I'm going to start the podcast today with my guest reading the beginning of his recent New York Times op-ed article. John, please go ahead. As a young idealistic medical student in the 2000s, I thought my future job as a doctor would always be safe from artificial intelligence. At the time, it was already clear that machines would eventually outperform humans at the technical side of medicine. Whenever I searched Google with a list of symptoms from a rare disease, for example, the same abstruse answers that I was struggling to memorize for
Starting point is 00:00:35 exams reliably appeared within the first few results. But I was certain that the other side of practicing medicine, the human side, would keep my job safe. This side requires compassion, empathy, and clear communication between doctor and patient. As long as patients were still composed of flesh and blood, I figured their doctors would need to be too. The one thing I would always have over AI was my bedside manner. So the one thing that he thought he would have over AI was his bedside manner. But is that true?
Starting point is 00:01:10 Does it matter who or what we interact with in medicine, or elsewhere in our lives, if it provides us with compassion, empathy, and clear communication? Hi everyone, I'm Lynn Toman, and this is Three Takeaways. On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists. Each episode ends with three key takeaways to help us understand the world,
Starting point is 00:01:43 and maybe even ourselves a little better. My guest today is John Reisman. He's an American doctor who has practiced medicine and worked as an emergency room doctor in hospitals throughout the U.S. and the world. He's worked in places as diverse as Alaska, Antarctica, and Nepal. He is also the author of the book, The Unseen Body, and he has written for the New York Times, the Washington Post, and other newspapers. Welcome John, and thanks so much for joining Three Takeaways today. Thank you for having me, Lynn. It is my pleasure. When ChatGPT and other large language models appeared, you saw your job security go out
Starting point is 00:02:31 the window. Let's start with the technical side. What did you expect from ChatGPT on the technical side? Well, I have to say I was very surprised by ChatGPT's abilities, particularly the verbal side of imitating human language to such an incredible degree, including very technical language that you expect only from professionals who have studied some area for years and perhaps gotten several degrees. Computers seemed good at deciphering the technical side of medicine. So I was not surprised by its abilities there.
Starting point is 00:03:06 When I was a medical student, we used to Google things like blood in the urine, blood in the sputum, and it would come up with the rare rheumatologic diseases that we were going after and things like that. And it always got it right. So that side, I was not surprised at. But I think, like many other people, I was very surprised by how good ChatGPT was at mimicking humans, basically, and making you think that there was a human behind the words. And that goes for everything from technical explanations of medical concepts to even human
Starting point is 00:03:38 conversation, which we often have in medicine. So I would say it was that sort of mimicking of humanity side that really caught me off guard as it did many other people. So you expected the technical side to be excellent, diagnosing complex diseases and offering evidence-based treatment plans, but you were surprised by the communication side. Correct. In one study, chat GPT's answers to patient questions were rated as both more empathetic and of higher quality than those written by actual doctors.
Starting point is 00:04:15 How is that possible? AI is not caring or empathetic. Right, and I'm sorry to say, perhaps many doctors are not either. I think a lot goes into what people perceive as an empathetic answer from a doctor. For instance, ChatGPT can generate language at a much quicker rate than a human. If a human doctor is slowly typing into their computer an answer or just speaking the answer,
Starting point is 00:04:41 it takes some time to come up with that answer, whereas ChatGPT can generate a large chunk of text in an instant, seemingly. I thought about this a lot, and I think part of what goes into feeling that a doctor's answer is empathetic might just be the length of the answer alone. Obviously that's not the only thing, but if a doctor says something short and blunt, like, oh, you're fine, don't worry about it, maybe from a doctor's perspective, we think that sounds authoritative and it sounds reassuring to a patient. But in reality,
Starting point is 00:05:12 it sounds like you're treating the patient like they can't handle more detail, like they can't handle a more in-depth dive into the technicalities of your decision. And so perhaps we think that's reassuring, but I think a patient wants more information and wants to be a part of the decision too, and not just take our word for it, as they might have in decades past when medicine was more paternalistic. So I think just the length alone
Starting point is 00:05:35 and the instant it takes for ChatGPT to generate a more in-depth, more thorough explanation of what we think is going on and how the advice we're giving stems from that, I think that's a big part of it. So I don't think that's the whole story, but perhaps that's a big part of it. And doctors, being very busy and rushed all the time, perhaps don't have the time to give those more in-depth answers that patients want and deserve. Students all learn in medical school how to break bad news to patients. What are the do's and the don'ts? As a medical student I learned that too. It's actually the only
Starting point is 00:06:10 training I really got in bedside manner besides watching more senior doctors and more senior residents enact the do's and the don'ts, learning from their positive examples and negative examples. For instance, when you come into the room you don't want to clobber the patient over the head with the news that they have cancer, but at the same time, you don't want to beat around the bush. They're there to get the results of their biopsy, let's say, so don't talk about the weather, get to the point. There's this tendency to soften the blow of the news by using overly technical language, words like adenocarcinoma, which is a technical
Starting point is 00:06:45 description of some kinds of cancer. You sort of can hide behind those technical words that the patient may not understand. And instead of coming out and saying words like cancer that feel hard to say when you're faced with that patient, it is actually difficult to come out with those words. So we tend to hide behind technical words. That's obviously a don't. Another important do is to always have a tissue box nearby in case the patient starts crying, of course, which sometimes happens. And then of course, a big do is to ask the patient what they know
Starting point is 00:07:15 about cancer, what they know about perhaps a specific kind of cancer that you're diagnosing them with, to educate them. Because many people know the word cancer is bad but really don't know much more than that, or what to expect in the coming months and years, so explaining all that is very important. One of the do's that really resonated with me, that made sense to me, was to think about using the "I wish" line, as in, I wish I had better news. That somehow makes it seem more personal. One of the lines that I learned, one of the scripts, was the "I wish" line: I wish I had better news. That kind of does make it more personal. And having those lines, I almost think about it like you have a tool belt with different tools you can pull out, different lines you can pull out in
Starting point is 00:08:01 different instances. And you know, it sounds robotic, it sounds technical, when you should be utterly human in that situation, yet you're pulling out these pre-scripted lines. But they really do help in those situations. As you've so eloquently said, John, you initially recoiled in medical school at the idea that compassion and empathy could be choreographed like a toolbox or like a set of dance steps. But what happened when you were actually practicing medicine as an emergency room physician and you had to deliver really bad news? I did find that having that script, having those tools, those lines
Starting point is 00:08:42 really helps. It's such a surreal situation. You know, I would have thought, I did think as a medical student, that in these situations it's just one human to another, you're just having a heart-to-heart conversation while at the same time conveying some technical information about the diagnosis and prognosis. But it is a very unnatural setting. So as an ER doctor, I often find cancer, let's say, on a CAT scan when I'm working up a patient's symptoms. This is a person I've never met before.
Starting point is 00:09:10 They've never met me. I'm playing a role that I play every day to make a living. For me, this happens semi-often. And for them, it could be the worst day of their life. So there's this huge chasm between us, this stranger I've never met before and likely will never meet again. It's not surprising that a human might act unnaturally in such a situation. We're all in our jobs acting out this unnatural role, playing a role really, no matter what
Starting point is 00:09:37 our job is, and the same goes for doctors, even in those most human moments when you are telling a patient some life-changing information. So in retrospect, it's not surprising that these sorts of lines, these pre-written scripts, help in that situation to bridge that emotional, professional chasm from which both the patient and I are coming at this very difficult conversation. You've thought a lot about pre-written scripts. Where do you see them in society, and what do they accomplish?
Starting point is 00:10:11 Scripts are everywhere. When you think about it for a second, you think, oh, we're just humans, and when we talk, it's human-to-human interactions, but our society and our lives are pervaded with scripts. When we greet people, we're following a script, and when we say goodbye, there's scripts between husbands and wives, there's scripts between friends, there's scripts between professional colleagues. There's things you don't say in certain contexts and that
Starting point is 00:10:34 you do say in others. Whatever your job is, if you're in politics, if you're in the medical setting, there's things you say and things you don't say in those contexts. So we're kind of all following all these scripts. And, you know, it seemed repulsive to me at first to think, oh, there's this pre-written script and I'm just an actor on a stage following stage directions, when I should be a human in the moment connecting with this other human. But pre-written scripts, pre-written actions, and choreographed motions and gestures pervade every aspect of society. And when I thought about it, I realized a big part of being human is following a script; you're not just improvising and freewheeling it all day long every day.
Starting point is 00:11:14 We're all kind of following roles to some extent, though we may improvise on the script. Obviously, I'm not reciting the same exact words to every patient. It is a conversation. There is a back and forth. So it's sort of like you have the script, but then you sort of improvise on it to fit it to the specific context or the specific conversation that you're having. And that's kind of like how human life works
Starting point is 00:11:36 in society, I think. John, you believe in the power of scripts. Do you think we will be interacting increasingly with AI, AI that is seemingly empathetic or informative with scripts, as opposed to interacting with other humans? I think we will. I think there's no other way. I think so many areas of life have reduced human-to-human interaction. You know, I sometimes use chatbots online to get certain banking tasks accomplished. I think most of healthcare can go that same way. Doctors are expensive. Maintaining facilities is very expensive. Healthcare is a huge proportion
Starting point is 00:12:17 of our national costs. Reducing those costs would be great. Hopefully, in some ways, we'll increase access and decrease the cost. But as a side effect, there'll be less human interaction; there'll be more interaction with machines, with AIs. So it's kind of a brave new world we're entering. And hopefully we can find the right balance without losing our humanity, even though we're interacting less and less with other humans. That is a scary new world. Does it matter that AI has no idea what we or it are even talking about if there are linguistic formulas for human empathy and compassion? Should we hesitate to use good linguistic formulas, no matter who or what is the author?
Starting point is 00:13:06 Certainly AI can be very helpful even without feeling any compassion itself. I don't think any of us strive for a world where all human compassion and emotion is driven out and only technical verbal scripts of compassion remain. Surely humans caring for each other, a doctor caring for their patient, a doctor feeling terrible about what they've just discovered on a CAT scan inside a patient's abdomen or skull, surely that compassion must stay in the world, and we must maintain it. And AI, you know, if you're just writing a form letter to a patient about some ho-hum test result
Starting point is 00:13:42 that's not that serious, I don't think tremendous compassion is needed. But certainly some is needed in these more human moments. I think it will take some adaptation and I wonder how far humans can take it. Traditionally, we talk to each other face to face. We hear each other's voice, which turned into the written word where you can send a letter across the country and you're not looking at the patient, which turned into sort of like telecommunications where we see each other, but we're across some distant geographic chasm. So the way we communicate with each other has changed so much. So I wonder how much AI communication we can tolerate. Maybe patients
Starting point is 00:14:19 won't actually miss their human doctors all that much. Most diagnoses I deliver are not life-changing. They're pretty ho-hum. They are, oh, you sprained your ankle, you didn't break it; or you broke it and didn't sprain it, and you're gonna follow up with an orthopedist; or you have strep throat; or you don't have strep throat, you have a viral cause of your sore throat. You know, these are not life-changing conversations. They don't require tremendous compassion or brilliance in bedside manner at all. Relative to other diagnoses, it's actually rare that I have to deliver these life-changing ones. So I think a lot of medicine can
Starting point is 00:14:54 change, and people are not going to miss the more awkward conversations with their doctor about these sorts of everyday, not-so-dangerous diagnoses. So I think medicine is in for a lot of change. It does raise these more fundamental societal challenges. Taking a step back on a more general level, should we worry about relationships between humans? Humans aren't always as empathetic as we could be.
Starting point is 00:15:23 For example, there's the classic story of the husband who comes home from work and he says to his wife, I had such a hard day at work, to which his wife, rather than being empathetic about his tough day, responds with, well, you wouldn't believe the day I had. Do you think that we as humans will become lonelier as relationships with other humans aren't perfect, they take effort and relationships with humans may not be as easy or as empathetic as interactions with an AI assistant or an AI companion?
Starting point is 00:16:01 I do think it will probably get harder to maintain human relationships, though I do think that is very important. I think already, with the technology we have, even without AIs that imitate humans nearly perfectly, we're more isolated as time goes on, since we can do almost everything in our daily lives without ever leaving the house, or often without even speaking to a human. We accomplish so many things through websites, let's say personal finances and banking and all these other things.
Starting point is 00:16:31 We don't interact with humans as much as we used to. Is it making us more lonely? Probably. As we interact less and less with humans, will we get lonelier? Probably. Hopefully, we'll find ways to compensate. We probably have to ramp up even more the human sides of our lives as we interact more and more with AIs. Hopefully, interactions with doctors are not a big part of people's social lives. I guess if you have a complicated, serious disease, you see quite a number of doctors and perhaps many specialists. And sadly, for some people, that might be the bulk of their social interactions in daily
Starting point is 00:17:10 life. But hopefully humans can compensate for the kind of dehumanizing of more and more aspects of our lives by kind of ramping up the humanity of other parts. I guess we haven't done that super well lately, but hopefully we will. Hopefully we will. John, what are the three takeaways you'd like to leave the audience with today? The first takeaway I'd say is that as much as medicine feels like a very human endeavor, much of it is really just technical and a matter of customer service. And I think AI
Starting point is 00:17:44 is going to do splendidly at that side. The second takeaway I would say is that there's really no going back. There's only going through and going forward and that applies to the way technology will affect health care and many other aspects of life. My third takeaway is that health care really needs to get into the 21st century in the way that it delivers care and interacts with patients. As many people have noticed, interacting with your doctor's office can be rather dreadful. You have to sit in traffic, wait in the waiting room, get herded through your visit like an
Starting point is 00:18:21 animal, and the communication can be terrible. You can wait days or weeks for a callback or for the results of your exams, and this all seems kind of stuck in the 20th or even the 19th century in some ways. So while the technical side of medicine seems to be sprinting into the 21st century, the customer service side of healthcare still seems rather dreadful and in need of quite dramatic updating. Thank you, John. This has been really interesting. And thank you for your work to bring medicine
Starting point is 00:18:55 into the 21st century. Thank you so much, Lynn. It's been a pleasure. If you're enjoying the podcast, and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you get your podcasts. It really helps get the word out.
Starting point is 00:19:11 If you're interested, you can also sign up for the Three Takeaways newsletter at threetakeaways.com, where you can also listen to previous episodes. You can also follow us on LinkedIn, X, Instagram, and Facebook. I'm Lynn Toman, and this is Three Takeaways. Thanks for listening.
