Good Life Project - Future of Medicine: AI vs. Doctors, Who Wins? We ALL Do! [Ep. 2]

Episode Date: November 10, 2025

What if AI could help solve medicine's biggest blind spots? Harvard Medical School researcher Dr. Charlotte Blease reveals why doctors can only keep up with 2% of new medical research, and how artificial intelligence could transform healthcare for both patients and providers. Drawing from her new book Dr Bot: Why Doctors Can Fail and How AI Could Save Lives, she shares fascinating insights about the future of medical care. Part of the Future of Medicine series exploring innovations reshaping healthcare as we know it.

You can find Charlotte at: Dr Bot Substack | Website | Episode Transcript

If you LOVED this episode, don't miss a single conversation in our Future of Medicine series, airing every Monday through December. Follow Good Life Project wherever you listen to podcasts to catch them all.

Check out our offerings & partners:
Join My New Writing Project: Awake at the Wheel
Visit Our Sponsor Page For Great Resources & Discount Codes
Watch Jonathan's new TEDxBoulder Talk on YouTube now: https://www.youtube.com/watch?v=2zUAM-euiVI

Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 Hey, before we dive in, a quick note, the video from my new TEDx Boulder Talk just went live on YouTube. It's this love letter to making things with your hands in a world that's being eaten by screens, machines, and AI. And I share this story that I've never told publicly before. It'd mean the world to me if you'd go and check it out. You can watch it now on YouTube, just open up YouTube and search for Jonathan Fields and TEDx Boulder. Or just click the link in the show notes. Hey there, every Monday in November and December, we'll be sharing our Future of Medicine series, where we'll be spotlighting groundbreaking researchers, cutting-edge treatments, and diagnostic innovations for everything from heart disease, cancer,
Starting point is 00:00:39 brain health, metabolic dysfunction, aging and pain, and also sharing breakthroughs in areas like regenerative medicine, medical technology, AI, and beyond. It's a brave new world in medicine with so many new innovations here now and so much coming in the next five to 10 years. And we're going to introduce you to the people, players, and world-changing discoveries that are changing the face of medicine today and beyond in this powerful two-month future of medicine series. So be sure to tune in every Monday through the end of the year and follow Good Life Project to be sure you don't miss an episode. And today we're bringing you a conversation about AI and medicine that might forever change how you think about your
Starting point is 00:01:19 relationship with doctors, medicine, and the future of staying healthy. I mean, what if the key to healthcare wasn't about creating new drugs or surgical techniques, but about transforming the entire system through artificial intelligence? Picture a world where your doctor never misses a crucial detail in your medical history, where diagnosis becomes dramatically more accurate, and where care becomes more personal, not less, because of AI. These aren't sci-fi dreams. They're happening right now.
Starting point is 00:01:47 My guest today, Dr. Charlotte, please, stands at the intersection of health care, technology, and human behavior. As an associate professor at Uppsala University and researcher at Harvard Medical Schools Digital Psychiatry Department, she spent decades studying how we can make health care better for everyone. Her new book, Dr. Bot, Why Doctors Can Fail and How AI Could Save Lives, offers a powerful window into how artificial intelligence might transform modern medicine. One of the things that really surprised me in our conversation was learning that doctors can only keep up with 2% of new medical research. I mean, think about that. It means 98% of new medical knowledge isn't making
Starting point is 00:02:28 it into your doctor's office. But that's just the beginning. We explore why patients often hide critical information from their doctors and how AI might help create a healthcare system that really speaks to all of these things and serves both patients and providers so much better. So excited to share this conversation with you. I'm Jonathan Fields and this is Good Life Project. We're having this conversation at a time where you're in Ireland, right now, I'm in the U.S., but all over, I think a lot of individuals, a lot of countries are really the re-examining the way that their medical system is functioning, what's working, what's not working, who is it serving, who is it not serving? You have a very strong point of view
Starting point is 00:03:15 about the state of medicine these days and how, in many ways, it's actually failing patients. Take me into this. I do want to sort of begin by saying, medicine is a great human success story. To some extent, one of the things I say in the book is it's a victim of its own success because we're living longer, we're living with more chronic ailments, and that all puts pressure on health care systems. So, as you've said, even in wealthy countries throughout the world, we're seeing that our systems are creaking. There's delays in getting medical attention.
Starting point is 00:03:54 And our clinicians are burnt out. We also don't have enough clinicians as well. So against that backdrop, doctors are overworked, burnt out. We know that that's sort of a petri dish for creating errors as well. So there are real challenges in giving patients the kind of accessible care that they need. One thing I do in the book because I don't want the book to be, and I don't want to advocate for less funding in healthcare. I think that's been a problem in many health systems.
Starting point is 00:04:27 But what I do say at the outset is imagine a health system that is the most lavishly funded. Patients could still get access whenever they wanted it, but they still had to access it in traditional ways. And I would still say there's going to be many problems. There's still going to be a ceiling effect on care as we know it. And that's because doctors, they're not gods. We want them to be gods, but they're not. And what I try to do is look at the very human limitations with health care, those psychological limitations with health care. But I also say, look, when you physically have to go to a bricks and mortar hospital as well, that's already going to create challenges and marginalise many patients, some of whom are in the greatest need.
Starting point is 00:05:17 Let's tease out a handful of the things that you just shared and go a little bit deeper into each one of them. one of them is this notion of Dr. Burnout and also mental health issues, depression, anxiety associated with that. Take me a little bit deeper into this. How is this happening? Why is it happening? And how is it showing up in patient care? It shows, so about 50% of doctors in the UK and the US, just under it in the US, recent studies found are, say that they're burned out. The figures were much higher during COVID. And about 20% of US doctors, are depressed. In the UK, about four in 10, GPs, family doctors say that they can't cope with their workload. So the burdens are absolutely enormous. Paperwork takes over, that's documentation
Starting point is 00:06:08 and administrative tasks take over more than 50% of a doctor's daily tasks. So that also adds to burnout. And a lot of doctors talk about date night with Epic. Epic is the largest. electronic health vendor in the US. So it's basically a case of doing the documentation after the patients have left or on the weekends, which incidentally isn't a great way to keep accuracy in the medical record. So that's also a ramification of burnout. Another aspect of it is doctors have to multitask. And multitasking is sort of a misnomer, psychologically speaking. People don't tend to multitask. For the most part, they're switching between tasks. And medicine is a field where that's ultimately what is expected of you. You've got to be a
Starting point is 00:07:00 sort of diagnostic wizard. You've got to extract the symptoms and do tests with patients. But you also have to give that compassionate bedside manner. And you've got to do documentation. These are huge burdens on any individual. So the doctor is sort of like this one-man band. They're expected to be omni-competent, and I argue that's really just too much to ask without expecting errors to arise. And to go back to the issue of depression, look, in the States, around one medical school graduating class, that's about 300 to 400 doctors, kill themselves every year. So we're talking about really serious, savage pressures on doctors. Those obviously do have implications. We've got the studies for the implications on patient care.
Starting point is 00:07:53 So doctors who are depressed or burnt out do report suboptimal practices. To give another example, doctors who are depressed are six times more likely to make medication errors. So there are real implications here. To be clear, we're not pointing the finger at doctors here. Not at all. We're not saying, oh, this is your fault. What you're saying, it sounds like, and tell me if I'm getting this right, is that we've got systemic issues
Starting point is 00:08:18 here. The way that the training is set up, the way that the practice is set up, the way that often the funding mechanisms work, and the requirements for administration, it all just compounds, and it piles on and piles on and piles on. And this isn't a doctor being incompetent or mal-intended.
Starting point is 00:08:35 It's just the stunning nature of the pressure. There's an onslaught of what one human being is expected to do. That is, it sounds like wholly unrealistic. Well, that's 100% my take. And actually, one of the things I'm most pleased at hearing from doctors who have read the book is they say you get it. And I was really anxious that
Starting point is 00:08:54 I needed to sound sympathetic and I am sympathetic to doctors. That's sort of the point of the book. In some ways you can't expect it to talk about the g-wisory of AI. And you can't have a revolution unless you know what it's for. So the purpose of this book was to say, look, what are the problems to which AI might be a solution? I was very taken and had been for many years thinking about the need for a psychological portrait of the doctor-patient encounter, but also what doctors have to do. I mean, the way I flip, I sort of flip it by saying, I think it's amazing that doctors are able to do what they do as well as they do. But we do need to have a really open conversation about the fact doctors can feel us sometimes. We've got to have that without finger pointing.
Starting point is 00:09:44 It has to be sort of just an honest description of what we're expecting human beings to do. This shows up in two ways. One is, as you described, mistakes in diagnosis and treatment. If you're so overburdened or so overwhelmed, or dealing with your own mental health issues, of course it's going to affect the way you think, the way you feel, the way you see, the way you intake and synthesize, and that's got to at some point affect the way that you're actually diagnosing and treating. But then the second side, which you described, is healthcare providers are human beings also. And we want the experience for them to be not just humane, but okay, like nourishing, not just survivable, but good. And we're not doing
Starting point is 00:10:25 justice to them in a lot of ways, too. Absolutely. And even beyond, I mean, one of the things I do talk about as well, even beyond the crushing and savage pressure that doctors are under within current health systems, is the knowledge demands on doctors. And I made this calculation a couple of years ago. I looked at PubMed, which is this repository for the latest biomedical research. I discovered that, and by the way, the figure's probably higher now, every 39 seconds there's a new biomedical article published.
Starting point is 00:11:00 And I worked out that if doctors even just read 2% of these latest findings, they'd be spending 22.5 hours per day. The task of trying to update your own medical knowledge is colossal. And this is where I think doctors really have the greatest burden on them of any, if you like, white-collar professional; the expertise that's required and the constant updating of information is just absolutely mammoth. Yeah, I mean, 22 hours, that's even to read a fraction of what's coming out that's new. And then you pile on top of this, this notion that in this particular industry, the stakes can be
Starting point is 00:11:40 life and death or like profound, profound limitations in a person's life potentially for long windows of time and not just them, but whoever's affected by that. So it's the stakes, I feel like, that also are part of the pressure here. Absolutely. And I think this is why doctors are very understandably anxious about being accused of blame when error rises, because unless we have the premise that doctors are somehow gods and they feel this, this is completely inevitable. Having said that studies do show that the doctors tend to practice evidence. based medicine only around half the time. The way I flip it, as I say, and we've got a number of reviews that show that. And by the way, in some countries, it's going to be the, if you like,
Starting point is 00:12:28 adherence to optimal evidence-based practices is less. When you've established your medical education, gone through medical school, your medical education tends to be set. So if there's omissions or biases within that education, it's really hard as a human to recalibrate, to keep up to date. You can in some ways, but it's very, very challenging to do it when you're actually on the job. And this is also why sometimes doctors don't practice the latest evidence-based practice. And it's also a reason why older doctors tend to be even less adherent to evidence-based practices, because they're set in ways of prior learning. So if they're not practicing evidence-based practices, what are they practicing?
Starting point is 00:13:17 Let's see. They're practicing how they've been taught, and in some respects it's a kind of an apprenticeship. So what you've learned in medical school, you then slowly put into practice when you first see patients. And it's a practical learning as well. But it tends to be set, and in some respects hardened, by what you've learned. It's not to say it doesn't change. Of course it does, and doctors keep up to date in various ways. Things will change. But it's very hard to keep up with really the latest, most modern evidence-based learning, because as a human, you just can't do all of that while still practicing. It's, like I say, very hard to recalibrate your learning. Yeah, and like you said, it literally would take you 22 hours a day just to keep up with 2% of what's coming out. I mean, it's almost impossible. And yet I would imagine if you asked many, many doctors, they would actually tell you, I'm practicing evidence-based medicine, because it's based on the evidence that they learned on, but it's not current anymore. This is true. Another issue that makes things much more difficult for doctors is such a thing as medical reversals.
Starting point is 00:14:26 So the latest evidence can, you see, change. Certain kinds of practice or evidence or ways of treating patients, certain standards of care, we might sort of consider them to be unimpeachable. They've got evidence to support them. But newer evidence very often reverses or changes that standard. And a couple of different studies, researchers who have worked on this, found that if you look at sort of top-tier medical journals, you know, over 10 years and subsequent studies, the subsequent studies suggest medical reversals or significant changes around 40% of the time. So there you can see that also creates a further headache for doctors. Right. It's like a whiplash effect. It's like, okay, so there's this new research out that says, this is the new standard of care. This is the new approach
Starting point is 00:15:23 based on the evidence, and then a decade later, for 40% of those, it's kind of like, oh, we were just kidding. It completely changes. Yeah, so this is always an issue. And then, of course, the idea is, we can get to that later, but how do you integrate informational support for the doctor to use the latest findings? And then that's the issue of how you integrate those kinds of clinical decision support tools within the workflow of the doctor. And that's really hard to get right as well. Yeah. And we'll be right back after a word from our sponsors. I want to stay on just exploring a little bit of what's going on currently, but I also want to
Starting point is 00:16:02 flip sides a little bit to the patient side. One of the things that you explore, and that I've heard, is this notion of how patients show up. And very often the medical paradigm now only allows a handful of minutes between a patient and a healthcare provider. So a lot of times you feel really rushed, you feel like, what do I say, what do I not say, where do I focus, what do I share, what do I not share? Tell me a little bit about the patient side of it here. The patient side of it is so critical as well. And, you know, speaking to that issue of what do you say in an appointment when you know you've got very little time: in the National Health Service in the UK, many years ago, I saw in a waiting room a poster that said one patient equals one problem equals 10 minutes. Now, as a patient, you're thinking, what was the real leading problem that brought me here, so that I only talk about one problem? It kind of puts the cart before the horse, because you need to have expertise in order to know if different clusters of problems are linked, or what to bring up. So there's real pressures on patients, as you say,
Starting point is 00:17:10 to decide when to visit the doctor. I don't want to overload listeners with statistics, but I found a very revealing American time use survey. So in other words, how people spend their time in America. And this was probably about 2020. And it found that adult patients take two hours out of their day to go for a 20-minute appointment with their doctor. So when you think about that, it is very disruptive.
Starting point is 00:17:37 And we all tend to know this. But from the doctor's side, you know, you either turn up or you don't turn up, but if it's a no-show, you know, that's disappointing and that's a bad thing. In some ways, you're sort of like, that's a bad patient. But from the patient side, it can be a logistical nightmare. I should flag up as well: if you're on a low income, this study found you spend even longer out of your day, something like 28% longer than that two hours. So, you know, if you're a gig economy worker, you're not employed, you might be relying on public transport, or, you know, there are other kinds of constraints, and you may just decide, you know what, to hell with it. I'm not going to go to the appointment. Maybe it's not important. You delay seeking help till it's too late. So the logistics of getting to the sort of bricks-and-mortar appointment, deciding what's important,
Starting point is 00:18:29 and also talking about symptoms. One thing I discussed in the book is talking about symptoms that may be socially sensitive; it could be mental health problems, or it could just be embarrassing symptoms. And some of these, incidentally, could be cancer red flag symptoms that patients just don't want to flag up. And I do say in the book, and it sort of sounds hyperbolic, but patients are
Starting point is 00:18:51 literally dying of embarrassment. And I bring evidence to bear on that. And this is a side, I think, of the medical visit that from the doctor's perspective, and they may pride themselves in being the most friendly, compassionate physician in the world, and they may
Starting point is 00:19:07 well be all of those things. But actually, there's a status difference in the medical appointment that means patients know, by dint of the fact they're consulting with an expert, that tends to incur a kind of face-saving behaviour. And people then tend to be a bit more subordinate. They don't want to reveal too much. They don't want to appear stupid. They don't want to ask too many questions. And they don't want to encroach on their doctor's time because patients know doctors are incredibly busy.
Starting point is 00:19:39 So you've got all of these other sides of the equation that tend to silence patients, and also tend to silence the most marginalised patients. So the effects there are even greater for, again, people with less education, people with low incomes, patients who don't share the same race as their doctor, where there may already be some communication breakdowns, or that don't share the same first language. Elderly patients. This is a problem as well that we need to discuss
Starting point is 00:20:15 more, I think, one that tends to be overshadowed in discussions about healthcare. Yeah, I mean, it's so fascinating. And you would like to think, well, you know, if you're a patient walking into a doctor's office: I'm now under the care of a professional. I can say anything I need to say. I won't be judged. It'd be completely fine. But there are scripts running in all of our heads. We have social conditioning, we have familial conditioning, that say, like, what is or isn't appropriate, how I still want to be seen in a particular way, even by my treating professional. So there's certain things we may or may not say. Or maybe, you know, you live in a smaller area where the healthcare provider is actually just part of your community, somebody who you're going to see on a regular basis. And there's
Starting point is 00:20:51 something sensitive and you feel really uncomfortable saying it to your neighbor three doors down. It's a very human thing. It's not malicious. It's just, it's very understandable. It's completely understandable. And, you know, the other side of it is people get a boost from being with their doctor too. There is a certain psychological boost, because you're in close proximity to someone who is, in that context, higher status than you. And so psychologists who work on status psychology talk about those sort of power differentials, and being close to somebody who's prestigious does give you sort of a warm, fuzzy feeling. Patients give their doctors presents, you know, and there's actually guidance about that as well. There's not many professions or
Starting point is 00:21:36 occupations where there's actual guidance to say, look, you cannot exceed this amount for giving this person a present or a gift. But yeah, there are issues here that with the best will in the world are still going to arise because of just how human psychology has evolved and how we behave in certain contexts. And we can sort of tell ourselves, look, I should behave differently. I shouldn't be subordinate. I should say it as it is. But it's actually really challenging to do that in a pressurized encounter on both sides for both parties. Right. I mean, human nature is human nature. And in a 7 to 10 minute or 15 minute at the outside interaction, especially if it's somebody you don't really know very well, I mean, it's also really hard to have trust in such a
Starting point is 00:22:24 short amount of time. I remember talking to a friend once who's a therapist about this common phenomenon that therapists often call the doorknob moment. You go through a full 55-minute session with a therapy client, and then they leave, and as soon as their hand touches the door and they're about to walk out, they kind of turn to the therapist and say, oh, one more thing. And that's the one thing that they actually came for, and it took them until the final seconds to finally, sort of, feel comfortable enough to surface the real thing that they were there for. And when you only have a handful of minutes, there's not even time for that. This is a real issue. And in fact, this happened to my own father, my late father, when
Starting point is 00:23:03 he had bowel cancer. And he was embarrassed. I think he went to the doctor with an ingrown toenail or something like that, and then, you know, spent the visit discussing this very minor issue. And then at the end, as he left, he turned and said, look, I'm having
Starting point is 00:23:19 changes in my bowel habits. And that led to him getting, through subsequent tests and all the rest, a diagnosis of bowel cancer. But that's pretty typical. That kind of, if you like, syntax of the visit is really, really normal. People just, they're embarrassed to talk about these things in front of other human
Starting point is 00:23:38 beings. And as you say, in particular, people that they may really respect and want that individual to think well of them. So yes, these are psychological obstacles. Let's shift gears a little bit. This is sort of the state of medical care. And as you said, there are a lot of well-intended people and players here. But the nature of the broader system is just making it really hard for providers to show up and give the way they want to give, and patients to show up and get what they need. When we widen the lens out now, we sort of look at current times and we bring in the conversation around AI and how it might integrate into the experience and practice of medicine. And we think about the problems that we just noted. Talk to me about what we're seeing
Starting point is 00:24:24 now in terms of the possibilities on the diagnostic side, on the treatment side, just on the broader systemic side of how AI may step in and start to help us reimagine some of these issues. AI is very good at seeing things that humans can't see. It does pattern recognition at scale, and largely what doctors are doing, when they've undergone their medical education, is they're sort of translating their learning into seeing. It's a form of pattern recognition, and it's very instinctual as well. But AI can do this in ways that encompass vast amounts of pattern recognition, seeing kinds of patterns and
Starting point is 00:25:14 instantly updating patterns in ways humans can't do that easily. So to give an example: we said, you know, keeping up to date is a real headache. Consider rare diseases. There's about 7,000 rare diseases worldwide, and about 250 are identified
Starting point is 00:25:42 every year. So you can imagine, as a doctor, it's one thing knowing the typical conditions that you might see, but then there's rare illnesses. And in my family, there's a rare illness as well, which isn't all that unusual, for families to have some kind of a rare illness or genetic illness. But in my family's case, my eldest brother waited 20 years for a diagnosis. And that's not all that uncommon either, people having these sort of long delays until they see a doctor who might recognize the cluster of symptoms. So when you've got AI with diagnostics like that, it can come to a rare illness diagnosis very, very rapidly. And we've seen that already with this newer generation of tools. They're called generative AI tools. Many of your listeners will know about these; they're like talking to the internet on steroids, these kinds of tools. They learn from vast troves of publicly accessible information. If you take
Starting point is 00:26:28 just even that example, there was a study conducted by Austrian researchers. It fed rare illness symptoms into ChatGPT and within eight responses got to 90% of the diagnoses for the rare illness. So you consider how much shorter that diagnostic odyssey could be. It's a case of minutes rather than decades. So there's a huge opportunity. Now, I want to sort of caveat that by saying, look, how patients enter information is going to be critical here for how effective these tools are. And we need many more real-world studies of how patients interact with these tools in order to see how good they are. But it would at the same time be completely churlish to deny that that's very impressive. And we're seeing many of those kinds of studies emerging.
Starting point is 00:27:21 Just to make sure I'm wrapping my head around this: we can basically take an AI model and train it on all of the available information about all potential rare diseases, everything that we know, all the diagnostics, basically anything that exists in any database anywhere, research, clinical information, and then the AI can tap that. So when somebody types in a series of symptoms in the right way, hopefully prompted in an intelligent way,
Starting point is 00:27:55 it's able to draw on this vast database, which no one individual could possibly have, to much more rapidly identify especially rare diseases or illnesses, in a matter of potentially minutes, that might have taken years or decades, or maybe even never been diagnosed by one individual. That's certainly the potential.
Starting point is 00:28:16 And I mean, if you consider something like you take sickle cell disease, if you live in West Africa, a country like Nigeria, it's a very common condition, very common genetic disease, like 1 in 50 births or something. If you move to the States, it's 1 in 350. But then if you go to Europe, it's less common. So it's like 1 in maybe 4,000 or 3,500, something like that.
Starting point is 00:28:41 So that becomes then classified as a rare disease in Europe, but it wouldn't be in the States, it wouldn't be in Nigeria. So you've also got these contingencies if you're a human doctor. It depends where you're trained, and there's a certain luck that comes with that as a patient when it comes to a rare illness. And this is where, drawing on AI, there are many opportunities. But I also want to say, how we train the AI is going to be really critical as well. So if it's still fed data that has omissions or biases, or is only regionally representative of one area of the world,
Starting point is 00:29:19 you're going to have some of these same issues replicate, and we know that. They're going to persist, in terms of biases and mistakes. But it's always a case, Jonathan, of sort of assessing: is it better than what we've got? That's a key question. So as humans, we tend to hold AI to a much higher standard than we do ourselves, including doctors as well. So it's an issue of saying, look, actually, what is better for patient outcomes, or what's better for offering information? And it's going to be a trade-off. Yeah, I mean, that's so interesting, right, because we look at AI and we're like, oh, but it's not perfect. You know, it's only 95
Starting point is 00:29:54 percent accurate. But then if we compare it to the typical human doing the exact same task, maybe a human is 65 percent accurate. You know, so it's like we have to make a more legitimate comparison here. And that does tend to be what happens in the studies: we're much more forgiving of humans. And of course, trust comes into all of that too. But certainly we do tend to hold it to a much higher bar, which is interesting. But maybe that kind of bias is something we need to grapple with as well. Yeah. I think you're seeing this in a lot of AI, just more general
Starting point is 00:30:28 applications and self-driving cars. Yeah. People are looking at accident rates and things like this. But if you compare that to the data on just human cause accidents, from the limited data I've seen actually AI so far, the latest generation is probably a lot safer.
Starting point is 00:30:44 Yeah. But we hold it to a different standard. We want it to be perfect, whereas we're just like, oh, we're human, good enough is good enough. Yeah. And, you know, in some ways it's sort of quaint that we do that, and we could look at it and say it's kind of funny, or ask why we do that. But in another way, that bias actually does have real consequences, because at a sort of systemic level, if we decide that we prefer, for example, humans doing a particular task, but the error rate is much higher and it does lead to harms or even mortality, you know, this is a very serious ethical dilemma that we've got to inspect our own bias on. And we'll be right back after a word from our sponsors. So you have probably seen a million ads for hair products and wondered if any of them actually work. My wife Stephanie did too, until she found Nutrafol. After entering
Starting point is 00:31:37 menopause, her hair started thinning, and the shedding was really upsetting. So she tried so many different products, read everything she could, and nothing really made much of a difference. And then she started taking Nutrafol Women's Balance. And over the past five years or so, it's made a real impact. Her hair feels healthier, looks fuller, and she's had far less shedding and breakage. Nutrafol is the number one dermatologist-recommended hair growth supplement brand, trusted by over one and a half million people. You can feel great about what you're putting into your body, since Nutrafol hair growth supplements are backed by peer-reviewed studies and NSF contents certified, the gold standard in third-party certification for supplements.
Starting point is 00:32:15 See thicker, stronger, faster-growing hair with less shedding in just three to six months with Nutrafol. For a limited time, Nutrafol is offering our listeners $10 off your first month's subscription and free shipping when you go to Nutrafol.com and enter the promo code GoodLife. Find out why Nutrafol is the best-selling hair growth supplement brand at Nutrafol.com. That's spelled N-U-T-R-A-F-O-L.com, promo code GoodLife. That's Nutrafol.com, promo code GoodLife. You mentioned also just the way that it's trained, and we started talking about rare diseases and how it can be incredibly helpful there. But I would imagine even just for generalized conditions, you know, if an AI model is able to be trained on all of the data
Starting point is 00:33:01 for all available conditions, and then integrate every single new publication, every new study, into it in real time, which, as you said, for one human being to digest even 2% of what comes out that's relevant to them would take 22 hours a day. So it's an impossible task, but an AI can do all of that, for everything, in real time. It's able to draw on a database, not just for rare diseases or conditions, but for literally anything, that is informed at a profoundly different level than a human being.
Starting point is 00:33:36 Yeah, I think that that's right. But again, I would come back to the issue that publications, if we take the vast swathes of publications, they're not all going to be of the same standard. Right, right. There's always going to be those omissions. There's always going to be training biases. People tend to recruit certain demographics into clinical trials.
Starting point is 00:33:57 So there can be this need to redress major biases and omissions within that diet that's fed to the machine. So I would say, in that sense, what's good about this question is we've got to remember that we talk about sort of machine learning and AI, but actually humans can play a huge role in deciding how we train and how we model these tools, and that's going to be critical as well. And that's why humans will be in the loop in some sense, because they're going to make critical decisions. And we have to be thinking about that. And as you said earlier, if in a 10-year window, 40% of research that's published ends up being reversed a decade later, but we're relying on this data being put into AI to train it in real time, that really complicates things, because maybe then it is giving us
Starting point is 00:34:47 information that a couple of years down the road is going to be shown to have been wrong. Yeah, but again, it's no less a problem for the human. Right, right. And there's a really nice article that was published whose title was, The Answer Is 17 Years, What Is the Question? And the question is, how long does it take to move from the bench, so from clinical research, into medical practice? 17 years. Wow.
Starting point is 00:35:15 It's almost a generation. So this is where AI can start to speed things up, and we may be able to adapt quicker. So interesting. It feels like we're in such early days right now, but we've also had a lot of development in the years before. When it comes to AI, one of the problems that we've talked about also is the notion of the patient experience, and a patient sharing what they need to share, feeling like they have the time to share what they need
Starting point is 00:35:42 feeling like they have the time to share what they need to. to share and actually giving all the relevant information and symptoms and experiences so that a doctor actually can take it all in and make the best possible diagnosis and treatment recommendation. And I would imagine also patients want to be seen and heard and felt like they're treated with dignity and given the time to do this. How does AI impact this side of things? I have so much to say about this hard to know where to begin. So basically I'll start by saying patients pour their hearts out in machines. And we have known this since the 1960s.
Starting point is 00:36:19 So as soon as there was the introduction of a computer within a medical context, actually in the same year, in 1966, a doctor called Warner Slack did it. He was a medical doctor, and I write about this in the book. He put patients in front of this sort of monster-sized computer, the idea being to take a kind of medical history. And what he noticed was that the
Starting point is 00:36:40 patient in front of the machine was just slagging it off, saying, look, you've asked me this already. He was laughing, making fun of it. But he was also disclosing more. In the very same year, there was another computer scientist called Joseph Weizenbaum, based at MIT, who devised this sort of therapy computer program. It was called ELIZA, and you may have heard of it, but more people talk about ELIZA than they do this other medical history-taking program. But basically, Weizenbaum's secretary was interacting with this program, and it was sort of mimicking what a psychotherapist might say.
Starting point is 00:37:24 And she got so engaged with this early chatbot, if we can call it that, that she actually asked Weizenbaum, look, can you please leave the room? Because she was having such an intimate conversation with the computer. So we've known, and subsequent studies have shown, that patients just really feel more comfortable and tend to say more to technology. They divulge more, among the reasons being the ones we said at the top of the program: they don't have those cues of status that tend to inhibit or interfere with what patients might want to say. It's almost like people feel much more comfortable just being open and honest.
Starting point is 00:38:06 And I would imagine also because there's no clock ticking, the machine isn't saying, we're five minutes into our 10-minute visit, can we move this along? And you're not concerned about wasting the machine's time, and you're also not concerned about being judged, because there's not a human being there. All of those things. And also, doctors tend to interrupt patients, and it's very hard for them not to do that when they're under pressure, but it's also because, again, they're the more dominant party in the visit, and when somebody is sort of higher up, they're more likely to dominate, to take the floor, to interrupt. Technology doesn't do that. There are many positives, but
Starting point is 00:38:42 nothing's ever straightforward in life. People tend to say more, they may be more at ease, but then on the other hand, they are saying more. So therefore, AI can be a very potent extractor of our most sensitive medical information. And this is where, you know, if you've got young people turning to chatbots, they can be giving away very sensitive information, personal information, to big tech. So there's a whole other cluster of
Starting point is 00:39:06 personal information to big tech. So there's a whole other cluster of, challenges clearly that emerge with AI when it comes to the fact that people just are very much at ease with it. I want to circle back in just a bit to sort of like the ethical issues that arise with this especially around data. The idea of somebody having this confessional effect almost with AI, I think is really fascinating. And another maybe more nuanced element, I wonder if there's any data on this is because so many of the symptoms that we go to our health care providers with can be traced back to stress to mental health issues,
Starting point is 00:39:48 to relationships, to lifestyle issues. And oftentimes we feel like we have nobody to unburden those with. We don't want to feel like we're complaining. Maybe we don't feel comfortable, or don't have easy access to mental health professionals. But I wonder, given that we have this non-judgmental thing in front of us where we can just fully unburden and be honest and open and share whatever we need to share, if that alone has an effect of relieving stress and anxiety, which then has a trickle-down effect on physiological symptomology. That's a great point. And it could be that, I mean, one of the things I've researched before is the placebo effect. And the placebo effect
Starting point is 00:40:38 is quite a potent effect where if you expect to feel better, it actually can influence, it sort of dialed down experiences of pain, it can mitigate depression to some extent and anxiety. It's not a cure-all, but it certainly can have a significant effect on certain symptoms. And what's interesting about AI is it tends to be, this is one of the challenges with it,
Starting point is 00:41:04 They tend to be people-pleasing in their biases. So it may be, and anyone who's played around with these tools can see just how utterly polite and unfazed they are compared to humans. And sometimes, if I'm quite abrupt with it, I feel slightly guilty afterwards. It's like you type in, I'm so sorry. Yeah. Yeah, so you can't help but treat it a bit like a human, with human attributes. But certainly in response, it's very steady.
Starting point is 00:41:36 it doesn't get, it looks like it's not being, you know, rattled by anything, and of course it can't be, it's just AI, but again, we've tended to personalise it, but that on top of the fact, so the range of compassionate responses, and on top of the fact, it's always there, and it does tend to give people pleasing responses, I suspect what you've said is right, that it may well have a positive physiological effect on some patients, we don't tend to talk about that that much, right? now because there's obviously other concerns with harms with these tools, but I have written about this as well, but the fact that it actually might be a vehicle for elevating placebo
Starting point is 00:42:18 effects in some cases. But I don't know of any studies on that so far. Yeah, you just mentioned the concern with harms. Talk to me about that side a bit too. Yeah, recently there's been a huge spate of articles and attention given to this issue of young people turning to these chatbots for counselling and advice and even in some cases asking for and that in itself is worrying but in terms of asking for advice about how to commit suicide so in which case you can get past guardrails with these toes and it may actually offer instructions to self-harm. So it could be a case of, you know, you don't directly ask it, but you say I'm writing a story in school and I want to have this fictional character is looking for what would be the right.
Starting point is 00:43:13 So there's ways people can put the right context into their prompts to get around these kinds of guardrails, and then they can get advice, and we know of cases where this has happened. So there's the issue of harm here, especially with younger people. In America, by the way, younger people are glued to their devices. I saw Pew research from a couple of years ago: about 50% of young people in America are almost permanently on a device. I mean, it's frightening. So the dependency there, too, with some of these chatbots for advice,
Starting point is 00:43:49 there's certainly openings for concern, not just on the quality of what is given, but seeking out advice and these tools giving it to you whenever that's not what you need. professional help or in this case still professional help until we can find ways to get around or guardrails for these kinds of tools. So we need much more conversation about this kind of uptake. Younger people tend to be faster adopters of technology and we've got to find ways to offer parents advice, young people advice about when it's appropriate to use these tools. And I think that brings us also around nicely to this conversation that you referenced earlier
Starting point is 00:44:30 which is the ultimate solution is very likely not one or the other, it's the integration of humans with AI. And there's been interesting research on this also. And I think the study they got the buzz recently was this study that showed that, you know, they looked at physicians working alone, AI working alone, and then AI plus physicians. And raise a lot of eyebrows because the outcome was that actually, you know, people thought, well, AI plus physician would be the ultimate. You get the benefit of both. And actually, the outcomes performed worse than AI alone. But it's more nuanced than this. So take me into this. It's much more nuance. And actually, this is where I think we, there's been a kind of a conversation within medical
Starting point is 00:45:18 schools where discussion about AI has cropped up. And I've been kind of hanging around medical schools for about 15 years now. But basically, the idea is, look, folks, don't worry, technology is coming, but we're all going to have the best of all worlds here, because the technology will be great, it'll take some of the burdens off doctors, and doctors and AI will kind of work well together. That has always been a mistake, because there's been a sort of siloing, and this is where medicine sometimes sticks in its own track. I mean, typically with academic fields, people stick with what they know. But if you take a step back and you look at research in other fields, like psychology, for many years we've known that's just not how humans respond to technology,
Starting point is 00:46:05 and particularly if you're an expert. So again, for a number of years, actually going back to the 1950s, it's been known that whenever you take a sort of domain expert and you ask them to reflect on what an algorithm is saying, people, domain experts tend to hold their noses to what the AI says. It's what called algorithmic aversion. On the other hand, lay people who don't have any expertise aren't really squaring up to the algorithmic output, and they tend to defer to it more. So you've got this sort of algorithmic aversion, which helps to explain. It's sort of an overconfidence, too, on the part of the expert, because they're like, well, I've been trained in this, you know, I went to medical school for 10 years into hell with this technology. So it's
Starting point is 00:46:56 an implicit kind of instinct that you're an expert, you know something, therefore you're not necessarily going to listen. That tends to hurt the accuracy of the AI, and that's a bad thing where AI could be beneficial. So what we've seen is a replication of that within these recent studies, where if you just simply leave the AI alone, it's now gotten so good that you're better leaving it alone in some cases. And that actually might be what the future of medicine looks like. And that's what I basically predict and anticipate in the book. The technology is the worst it's ever going to be. And then we have to ask the question, what roles do humans have here? What is the identity of a medical doctor now? What is the point of the medical doctor when it comes to diagnostics?
Starting point is 00:47:44 Or what's the right relationship? If we've got a human in the loop, how do they work? What kind of expertise do they need in order to be humble enough to defer to the AI when that's necessary, but also to ensure that there's the right kinds of accountability as well. So it's really much more complicated. I'm glad I definitely welcome that question. Yeah, and it feels like we don't really have an answer to that right now. I think what the answer is right now is we've got a profession that's in flux and is going to, many of these white-collar professions are going to be in flux for a sustained period because as I argue and the trajectory of these technologies is improving and it's an affront to our sensibilities to see them it sort of start to encroach an expert and
Starting point is 00:48:33 knowledge based domains but we've also got to remember look when AI or robotics came for factory assembly line workers for many blue color professions sort of a case of well those are those were efficiencies and we can now have greater production or whatever. And I think what's tended to happen with white-collar professions is including medicine, there is this, a sort of a sense of a right to practice, there's a sort of prestige that comes with it, and there's a status, and that's going to be very difficult to give up. And so I think that there's going to be, for some years to come, a kind of a reckoning with, and it will ultimately be a case of, are there going to be alternative ways to get healthcare? If there's not some sort of a major shift in how current
Starting point is 00:49:25 systems are working, I think we may see that there's going to be sort of big tech will emerge with different kinds of models, and then we're going to have a case of patients voting with their feet because people will see what works for them. And I think that also brings up the issue of access, you know, so right now, if, you are in a position to have ready access to a wide variety of highly skilled providers and you have the ability to have insurance and that there's coverage, you know, like that's one category of people. But then if you're in, you know, like a healthcare desert or if you're in a country or a place where it's really hard to get access to super high quality health care
Starting point is 00:50:03 that has the latest information and again, and there isn't, you know, like very good coverage, then maybe this also has the effect of levelling the playing field to a certain extent, in long term at least. It could well be the case. It's the readiness. I mean, of course, there's going to be an issue there with digital literacy.
Starting point is 00:50:22 So not everybody who has access to these tools, not everybody has, even if they know how to use the tools or have a digital device, they may suffer from data poverty. So there's issues like a broadband access. So there's going to be issues with digital divides as well.
Starting point is 00:50:40 Having said that more and more people are getting online globally, so nearly six and ten of the world's population has an internet-enabled mobile device. So it is going to be a case of people will increasingly have more ready access to versions of expertise that they just didn't have before. And the key question that I sort of rehearsed repeatedly in the book is the idea that when we're having, discussions about how good AI is. As you said, it's not against concierge care or people
Starting point is 00:51:15 have the very best of the very best. Very often times across the world, even in rich countries, not having access to any healthcare at all, and just wanting to sort of triash, even if you can access the doctors, like, well, is it worth going? What could this possibly be? Getting on to some of these tools could assist with that, but also it may increasingly assist with sort of a second opinion, not just a first opinion, but a second opinion service with doctors as well. Again, that's an issue where patients are very reticent to be seen to question their doctor and where these tools can sort of offer supplement and offering guidance and advice. I think a lot of us probably still feel, you know, we may actually go to our favorite chatbot
Starting point is 00:51:58 I think a lot of us probably still feel, you know, we may actually go to our favorite chatbot right now, if we're feeling something going on, and type it in so we can get a quick answer, but we still want to know we have a person we can go to available to us. Nobody wants doctors or healthcare providers to go away. I think that's probably pretty commonly agreed. But there is this yes-and thing, where it's like, what is the role of the doctor, and what will the relationship be between the doctor, the AI, and the patient moving forward? Yes, and it's really understandable, because nobody now wants that to go away; it's how we're used to receiving care. And surveys show people still do want that: they're happy for AI. Most people are now increasingly
Starting point is 00:52:30 And I see, so surveys show people, they still do want, they're happy for AI. Most people are now increasing. happy for AI to be used by doctors, but the doctor's the overseer of it. Now, again, that I think will change in the years to come when the technology gets better because it's what we're used to and future generations may feel a bit queasy about it. They may say, my God, how do people go to a human doctor? How did they manage just do so well? Again, I'm not criticise it, but I think future generations will see the world, they'll experience it very differently than we do. may look back as this is an artifact of medical history
Starting point is 00:53:10 that we consulted with doctors not mean that there may not, that there won't be humans necessarily involved, but the idea of the doctor is this kind of one-man band, this sort of godlike figure, I think that that will change and we're going to need new medical idols. Yeah, it's such a great point. Like, this is how we feel now, but a decade from now,
Starting point is 00:53:29 a generation from now, they may look back at like today and say, oh, how uninformed or how silly that time was. Possibly. And I'd see if you look at the history of medicine, you see that a lot of innovations were resisted. So antiseptics, hand washing, anesthesia is a big one. Anesthesia is a really interesting one because doctors actually resisted it because they had honed their skills to work very quickly. So even though they could literally hear the cries of pain of patients, it was a case of, but we have expertise that we've learned. We've learned to do this very fast. So it was seen as undercutting that. expertise. But yes, there's been resistance to a lot of innovation, penicill and vaccinations, clinical trials, where there was resistance and where there was a sort of delayed progress. And it may well be that AI could be part of that, where patients again may be the ones and sort of more external figures to the culture of medicine may take up these tools in ways that doctors may resist for a variety of reasons. some of them very good reasons I will add too, because again, change is hard when you're under pressure. But that's sort of the point. Humans just find it hard to adapt. We do. I'm raising my hand here also. You know, I think one of the things that I've heard also as a rebuttal as well, but AI hallucinates all the time. And I think, well, yeah, but if you look at the amount that
Starting point is 00:54:57 it hallucinated two years ago versus today, it's dramatically different. And this is just a matter of time before we kind of train the models that they're better and better. and better. And again, we're comparing that against perfection rather than against the typical well-trained individual. And we have to make a real comparison here, not against like absolute perfection. I completely agree. I think the real worry there is that these kinds of arguments tend to be in, as you say, a kind of a vacuum. And it's important to critique the AI. Absolutely. We've got to pay attention to that. And we've got to be completely vigilant to that. In a way, we've got to be completely hypercritical across the board, which is something I
Starting point is 00:55:35 say that we're almost romanticizing how good humans are and we're not contextualizing when we discuss the AI. So we've got to be really careful about saying, first of all, what's the AI for? And to have that discussion, you've got to say, well, it might be for improving these particular areas where we're just not as good. Yeah. One last question here before we wrap up. And this is the data. There's a data, there's the training set, but increasingly as patients' own data gets fed into these models and becomes a part of the training set. One of the concerns here is, well, if our data, literally every time we show up at a health care provider, everything we say and do and everything that the provider thinks and all of our testing is fed in to train a bigger model,
Starting point is 00:56:23 which is beneficial because then everybody gets the benefit of everybody else's input, should we be freaked out about that? I think we should have more conversation about it. I think these are societal level conversations about what are those sort of trade-offs. What are we giving away? And as you say, for many it may be a case of we're happy to give away private or sensitive medical information if it makes care better. So it benefits us and it benefits other people. But then the wider issue may be one of confidentiality. So if your privacy may have gone in some sense, but it's a case of keeping your information.
Starting point is 00:57:03 more confidential from other parties who could be exploiting it through a big tech pipeline. And that's where people get nervous. I think particularly nervous because it's a case of future ramifications of giving it away. Am I going to be exploited when it comes to some future version of healthcare coverage, employment, policing, all kinds of areas of our lives that big tech already is exploiting us on?
Starting point is 00:57:33 Yeah, and I think they're very real concerns, and we're early in the conversation. You know, it's just, and I'm so interested and curious to see how it all evolves, and it's happening so quickly. But as much as the concerns are there, at least for me, the level of excitement and possibility is really, is so much higher. Same for you, or? Yeah, definitely. I tend to be an optimist, but with a spike of ice in my heart about it all, because I would say it's easy to be a cynic, but actually it's hard. harder to remain, it's more constructive to be an optimist, but that doesn't mean you're not paying attention to all the challenges. We've got to pay attention to the challenges, but
Starting point is 00:58:12 we've also got to work hard to overcome them. And I think there are many benefits for humankind through the use of AI, but we also have to confront very big issues as well, environmental costs, privacy, what kind of society we want to live in. And we've got to be constructive about working around all of those problems if we want to avail of these tools. Completely agree. It feels like a good place for us to come full circle. So I always wrap every conversation here on Good Life Project with the same question, and that is, if I offer up the phrase to live a good life, what comes up? Balance, I would say, a nice balance between doing meaningful work, and I think the world of work and the nature of work is going
Starting point is 00:58:54 to change a lot with AI, but it's a balance between friendship and living well and not missing the point of life. Thank you. Hey, before you leave a quick reminder, this conversation is part of our future of medicine series. Every Monday through December, we're exploring breakthrough treatments, diagnostics, and technologies transforming health care from cancer and heart disease to aging, pain management, and beyond. If you found today's conversation valuable, you won't want to miss a single episode in this series.
Starting point is 00:59:28 And next week's conversation is with Dr. Adele Khan, where we'll explore how groundbreaking treatments, like a special kind of stem cell called muse cells and peptides are revolutionizing medicine offering hope for conditions ranging from chronic joint pain to neurodegenerative disease. And we'll dive deep into why these therapies work, who they're right for, what to look out for, and what the future holds as these treatments become more accessible. Be sure to follow Good Life Project wherever you listen to podcast to catch every conversation. Thanks for listening. See you next time. This episode of Good Life Project was produced by executive producers, Lindsay Fox and me, Jonathan Fields.
Starting point is 01:00:08 Editing help by Alejandro Ramirez and Troy Young. Christopher Carter crafted our theme music. And of course, if you haven't already done so, please go ahead and follow Good Life Project in your favorite listening app or on YouTube too. If you found this conversation interesting or valuable and inspiring, and chances are you did because you're still listening here, do me a personal favor, a seven-second favor: share it with just one person. I mean, if you want to share it with more, that's awesome too, but just one person even. Then invite them to talk with you about what you've both discovered, to reconnect and explore ideas that really matter, because that's how we all come alive together. Until next time, I'm Jonathan Fields, signing off for Good Life Project.
