Freakonomics Radio - 661. Can A.I. Save Your Life?

Episode Date: January 30, 2026

For 50 years, the healthcare industry has been trying (and failing) to harness the power of artificial intelligence. It may finally be ready for prime time. What will this mean for human doctors — a...nd the rest of us? (Part four of “The Freakonomics Radio Guide to Getting Better.”) SOURCES:Bob Wachter, professor, chair of the department of medicine at the University of California, San Francisco.Pierre Elias, cardiologist, assistant professor of biomedical informatics at Columbia University, medical director for artificial intelligence at NewYork-Presbyterian Hospital. RESOURCES:A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, by Bob Wachter (2026)."Epic Systems (MyChart)," by Acquired (2025)."Detecting structural heart disease from electrocardiograms using AI," by Pierre Elias and Timothy Poterucha (Nature, 2025)."What Are the Risks of Sharing Medical Records With ChatGPT?" by Maggie Astor (New York Times, 2025)."Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?" by Bob Wachter and Erik Brynjolfsson (JAMA, 2023).The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, by Bob Wachter (2015). EXTRAS:"The Doctor Won’t See You Now," by Freakonomics Radio (2025)."How to Stop Worrying and Love the Robot Apocalypse (Update)," by Freakonomics Radio (2024). Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Discussion (0)
Starting point is 00:00:04 Over the past few episodes in this Freakonomics Radio Guide to Getting Better, we've looked at a variety of things that may produce a longer and healthier life. Nutritional supplements, faster drug approvals, figuring out the secrets of the gut microbiome. And today, in the final episode of this series, we'll look at something that intersects with all of those things and maybe a trillion more. Today's topic? How artificial intelligence will change health care.
Starting point is 00:00:32 And why is the health care system in need of? change? If you look closely, you'll see a bizarre split. The advances in medicine and medical technology over the past century have been mind-blowing, but the way these advances are delivered to actual patients can be also mind-blowing, but in a bad way. I have the ability to put a patient on heart-lung bypass, where their organs are literally failing and we're able to keep them alive. It's truly some of the most ambitious technology humanity has ever created. And yet, the way that I find out that someone had a heart attack is still through a pager, and then I have to go and say, hey, who here is having the heart attack? The healthcare system has so much technology slop that it can
Starting point is 00:01:20 be hard to see just how good the actual medical technology is. But that may be about to end. If you think about it, this is the biggest experiment in the history of medicine. and the experiment is already underway. The moments where I feel like I'm really doing science is when I genuinely do not know the answer to the question, but I know it's important to answer. Today on Freakonomics Radio, AI and a giant leap into the future of health care. This is Freakonomics Radio,
Starting point is 00:02:00 the podcast that explores the hidden side of everything, with your host, Stephen Dubner. We could probably make 10 episodes looking at AI in health care, but if we want to do it in a single episode, which we do, it's helpful to speak to someone who is able to frame the biggest questions well, someone like this. I'm Dr. Robert Wachter, although he says we should call him Bob. And I'm professor and chair of the Department of Medicine at the University of California, San Francisco. And what does that job entail?
Starting point is 00:02:40 What that is is running a large department of about 1,000 doctors, everything from geriatricians and primary care doctors to cardiologists and oncologists. and we do research, education, and take care of lots and lots of patients. You're still a practicing clinician as well? Is that true? Correct. About one month a year, I do this thing, a field that I actually started, called hospitalist. So about one month a year, I take care of very sick people in the hospital. What's your medical specialty by training? I trained in internal medicine, then did fellowship training and epidemiology and policy and ethics.
Starting point is 00:03:12 But I'm an internal medicine doctor, which in the old days meant you took care of patients in clinic and in the hospital, and then in part because of the specialty that I kind of cooked up about 30 years ago, those things have gotten divided, and we have separate doctors, for the most part, who take care of hospitalized patients. That's what I do. And how did you become a hospitalist and then kick off this field of hospitalists? Was it just because you were doing internal medicine in a hospital, and you kind of expanded that practice? Yeah, I had a boss who's a very smart strategic guy who said the way we organize hospital care is the way we've done it for 100, years and that can't be right. Let's think of a new way of organizing hospital care. Because at the time,
Starting point is 00:03:52 the typical model was your doctor who took care of you in clinic, also took care of you in the hospital, which makes some sense from a continuity standpoint, but just can't work. It's got a physics problem. You can't be in two places at the same time. And if you think about it, the fields of emergency medicine and critical care medicine didn't exist 50 years ago. Then people decided there needed to be a separate specialty with a specialist being a generalist who's a specialist in this place. So we developed this idea of a separate doctor to be the hospital doctor, and lo and behold, it became fastest-growing specialty in history. Within a few minutes of speaking with Wachter, you get a sense of how his brain works.
Starting point is 00:04:32 He is drawn to categorical sorting and operational competence, all of which has been particularly useful in his latest extracurricular endeavor. It's a book, his sixth, called A Giant Leap, how AI is transforming healthcare and what that means for our future. When you look at health care, people sometimes say to me, why are you people such Luddites? Are you kidding me? Come to a modern hospital. We have technology everywhere. Go to the radiology department, cardiology, surgery.
Starting point is 00:05:00 But we have not used general purpose technologies to transform the way we do our work. We use it to transform the way we do a procedure or the way we treat a disease. And thank goodness for that, because we're much better at. that than we used to be. So why is health care delivery still so sloppy? There are a lot of reasons that we are pretty static. The fixed costs are very high to get into the business. It's almost impossible for a startup to build and launch a new hospital. The incumbents are quite powerful, although you could argue that's true for a lot of other industries, but doctors, nurses, etc., are powerful. The economics are really funky. If Amazon or Netflix or you name your favorite disruptor
Starting point is 00:05:41 comes up with a better mouse trap, the relationship is largely between a customer and the vendor, and the customer says this is better or cheaper or whatever, I'm going to buy it. In health care, you have this assorted mishmash of insurance companies, businesses, government, and also because health care is so important, and we have the capacity to kill people, if we don't do it right, it is highly regulated, which is yet another barrier for innovators to come in and disrupt us. We like technology, but we like it in very, very specific ways. we have not embraced it as a mechanism to make care better and safer and less expensive. You call your book a giant leap. I want to understand this concept of the giant leap.
Starting point is 00:06:22 My sense is you're arguing that healthcare has failed to take advantage of technological progress to the degree that most industries have and that you're hoping that AI in all its many forms will help us kind of leap over that sluggish period into a next better phase. Is that about right? Yeah. The quote I like is Hemingway's quote from the sun also rises now a hundred years ago. One of the characters goes bankrupt and another character says, how does a man go bankrupt? And famously, he says two ways gradually than suddenly. So that's us. I mean, I think we have the gradually part down pat. We now have computers, which is great, but we are the largest users of fax machines in the country. We finally have ditched the pagers after the drug dealers did.
Starting point is 00:07:08 They were way ahead of us. So, yeah, we are very sluggish in adopting new tools, but we have gone digital. I wrote a book 10 years ago called The Digital Doctor, which was really about our transition from paper to digital. That book is a very grumpy book. It's like, how the hell did we go from paper to digital? and in some ways make things worse. In some ways, make the lives of both patients and doctors harder,
Starting point is 00:07:35 just digitizing the record. Helped in certain ways, got rid of doctors' handwriting, the kind of perennial joke. You know, when I do an electronic prescription, it can land at Walgreens or CVS. That is massively better and safer. Two people can look at the chart at the same time. There are lots of good things about it,
Starting point is 00:07:52 but it was not enough to transform medicine, and in some ways, as I said, it made it worse. The giant leap really is the kind of, combination of the magic of the new AI, meeting a health care system that's in desperate need of change and everybody knows it. We really are about to have our suddenly moment when health care is actually transformed after tiptoeing our way toward this over the last 10 or 15 years to make it better and safer, more accessible, more satisfying for everybody, both patients and clinicians. And I think eventually less expensive, although that's harderesque. My sense is that in writing
Starting point is 00:08:29 this book, you, a busy and accomplished person, decided to become even busier and accomplish something else. And it seems as though you sort of got yourself a graduate degree in healthcare AI by speaking with all these healthcare administrators, tech firms, investors, et cetera, et cetera, et cetera. Can you just talk about what this journey slash process was like for you, why you decided to undertake it, and then who you actually did spend time speaking with? The things I was reading were written by technologists, and I don't think they understood the big picture, the policy, the politics, the economics. And so my wife, who's a journalist and writes New York Times, said, the only way you're going to get this right is to do it journalistically.
Starting point is 00:09:13 And I said, what does that mean? She said, you're going to have to go and talk to a lot of people. Who did I talk to? I tried to find interesting companies and interesting people doing cool stuff. And when I spoke to them, I asked them, who else should I speak to? And they told me about other interesting people. I know the world of clinical medicine well. I know the world of academic medicine well and medical education. I live in San Francisco, so I'm surrounded by technologists. I advise a bunch of tech companies. So in each of those areas, I knew a fair amount to get started and knew some of the players, but I had to go deeper. The first chapter in the book is called An Overnight Revolution, 50 Years in the Making. Can you just talk for a moment about what happened?
Starting point is 00:09:53 during those 50 years, the successes, the failures, and why it's been such a slow boil? Yeah, I can do it quickly if we talk successes, and it'll take longer to do failures. Slow boil is a couple of things. One is people are treating AI and healthcare like it's new. It is not. In the 70s and 80s, AI became a thing, and there was a lot of interest in medicine and artificial intelligence. if you think about it, what does a doctor do? What did I spend eight or ten years going to school and residency and fellowship learning to do? It's be intelligent to take a whole body of information, symptoms and lab tests and all that match it against a body of information, the medical literature and textbooks, and come up with a diagnosis and a treatment. So AI was very exciting, but the AI of the day was not ready for prime time for a few reasons.
Starting point is 00:10:44 First of all, it was the old if then AI, if a patient has a sore throat and swollen lymph nodes and a fever, they probably have strep throat or mononucleosis. That works fine for very simple problems, falls apart very quickly faced with the complexity of real medicine. The second was all of our data was on paper. Therefore, if you wanted to use these fancy new AI machines, you had to go to a separate computer and type everything in. So both of those caused the field to flame out, and AI went away for medicine for about
Starting point is 00:11:13 40 years. Was this imaging as well or no? It was early for imaging. Imaging started in the 80s and 90s. It was largely around the cognitive work of doctors. Part of the problem was they started on the hardest problem, and the hardest problem is diagnosis. I remember speaking to one of the early leaders at the time who was a professor at Stanford. These are not dummies.
Starting point is 00:11:35 These are MDs and PhDs and computer science. He said, why did you focus on diagnosis is the first thing to tackle? He said, we weren't naive about the complexity. It was just the most interesting problem. You could understand that. were innovators, they were at the cutting edge, they really weren't thinking about practicality. That was an important lesson for today. You don't start on the hardest problem and one with the highest stakes and one if you get it wrong, you can kill somebody. You start on low-hanging
Starting point is 00:12:02 fruit. You need to get buy-in and get trust from everybody, patients and doctors and nurses. I think we're not making that mistake this time, but that flamed out. Then IBM Watson beat the Jeopardy champions in 2011. You may remember that moment when Watson, a supercomputer trained to play Jeopardy, competed against a pair of human Jeopardy champions, including Ken Jennings. I've never said this on TV. Chick's dig me for 200, please, Jimmy. Kathleen Kenyon's excavation of this city mentioned in Joshua showed the walls had been repaired 17 times.
Starting point is 00:12:39 Watson. What is Jericho? Correct. 400, same category. This mystery author and her own. archaeologist hubby, dug in hopes of finding the lost Syrian city of Erkesh. Watson? Who is Agatha Christie? Correct. Watson won $77,000 in that competition. That was a nice payday, but of course Watson cost billions to develop and IBM had much higher ambitions for it than
Starting point is 00:13:05 winning at Jeopardy. I remember watching that and thinking, well, we're all toast. And when Watson then tried its hand in health care, which was the first industry that it tried to work on, it completely flamed out. IBM did enter Watson into some high-end partnerships with M.D. Anderson Cancer Research Center, for instance, but Watson just didn't turn out to be very useful. Some of its answers were obvious, others dubious, and it was very expensive. In the end, IBM dismantled Watson, keeping some parts and selling off the rest. Again, not ready for prime time. And then the sort of big deal in medicine was about 15 years ago, we all went from paper records to these huge software systems called electronic health records. In 2008, fewer than one in ten American hospitals or doctors' offices had an
Starting point is 00:13:58 electronic health record. By 2016, fewer than one in ten did not. In the space of a very short amount of time, we went from basically a paper-based industry where the idea of using advanced data analytics and machine learning and all that was impossible because all the data was on pieces of paper to an industry that had its information in digital form. What was disappointing about that was many of us, including me, naively thought that's the ballgame. If we get our data in digital form, we'll be ready to innovate and do all this stuff like Amazon and Netflix and Apple and medicine will be better and safer and cheaper. It didn't work. The lesson that I took from that era was a term coined by Eric Bernhoffson at Stanford
Starting point is 00:14:43 called the productivity paradox of IT. The idea that you take some fancy new information technology, you bring it into an industry and snap your fingers, and you will quickly transform the industry to make it better and more productive. But that almost never happens. Never happens. Does not work. And the paradox is it looks so good on the PowerPoint slides, the ads that were used to sell it to us.
Starting point is 00:15:09 It doesn't work, and it doesn't work, partly because the technology needs to get better and all the iterative versions 12.7 need to happen. But much more importantly, the industry needs to transform the way it thinks about its work, organizes itself, the culture, the governance, and we didn't do that. In 2012 JAMA, the Journal of the American Medical Association published a crayon drawing, probably the first time I'd ever did that, from a seven-year-old girl who wanted to see her pediatrician. What it shows is the girl sitting on the exam table, moms next to her, sisters in the corner. and in the other corner of the room is the doctor with his back to the patient typing away.
Starting point is 00:15:44 It's a beautiful drawing. There's one thing the girl got wrong, which is she portrayed the doctor's having a smile on his face. I can tell you that no doctor was happy about being transformed into a data entry clerk. And patients noticed it. They went to see their doctors and their doctor had the head down typing away. And why did that happen? Because the computer became this enabler of all of these outside entities who used to have no ability to influence what the doctor did because I was scribbling on a piece of paper, now had a way of making me check 12 boxes about, did I examine nine body parts, and did I ask you if you wear seatbelts, do you exercise and all that? All noble questions, but now there was a forcing function that you could make the doctor record all this stuff, and so people did. And
Starting point is 00:16:31 importantly, when we send a bill off to the insurance company, the amount of money we get paid is partly related to the nuances of how I record the note. Which creates some perverse incentives right there. Totally, totally ridiculous incentives to say the right words in order to get the best bill. And then a few years after that federal legislation mandated that patients could not only see their basic information and maybe their medications, but actually could read my note and see their X-ray results and see their lab results. there was absolutely no information to help the patient figure out what any of that meant or even to make an appointment. Other than to maybe forward the results back to the doctor and say, I'd like an explanation, please, which just sludges up your inbox even more.
Starting point is 00:17:20 The companies did what seemed logical. They put a little button at the bottom of the screen that said, send a message to your doctor. Lo and behold, patients being normal human beings, click that button all the time. electronic health records have led to a huge increase in what is called pajama time for physicians. We talked about that in an episode called The Doctor Won't See You Now, number 650. The American Medical Association, in a recent survey, found that roughly 20% of physicians spend eight or more hours a week outside the office wrestling with electronic health records. But it seems that a new day may finally be dawning. The first really widespread use of AI in healthcare now, and really the one that took over very quickly in a year or two, is what's called an AI scribe or ambient intelligence. Every doctor at UCSF now has access to a tool where if you're coming in to see me, I put my phone down on the desk, say, is it okay if I use this to create my note?
Starting point is 00:18:23 Press a little button. And it records our conversation. At the end of a conversation, I press a button, and there is your note. And this is not just a transcript. This is an assimilated transcript, you might say, yeah? A transcript would be worthless because you said, well, doctor, I'm having chest pain and maybe 10 minutes later you would tell me you're having shortness of breath and your right leg hurts. Those things go together in the note. They don't go 10 minutes apart in the note. Maybe between them you told me about your faccaccia recipe or how much you love your grandchildren and how Tommy's soccer game went last week. That generally does not go in the note. So the note, has to weave all of that together into a template that we are comfortable with. And these tools now do it extraordinarily well. It saves me maybe a minute of time, not that much, but more importantly,
Starting point is 00:19:14 I'm no longer that doctor's looking down at my keyboard during our time together. I'm looking at you and really engaged in the conversation. This has really been the first AI tool that took medicine by storm. And I think quite smartly on the part of the healthcare organizations, doctors and nurses, but also the companies, because it's an easy win. It's something that satisfies everybody. The risk is relatively low. And for doctors, it's like, oh, my goodness, I don't need to retire next year. This time, in part because we've screwed up digital transformation in health care so many times, I think everybody's coming in with their eyes open. And a little bit more strategic, you don't threaten the doctor saying we're going to take over your job. You are the doctor's
Starting point is 00:19:55 friend. You're going to make their lives easier and better. And you're not going to do anything that if stuff goes wrong, you're going to kill somebody. That is a digital scribe. That is reviewing the chart for me. That is helping me create my bill. That is helping a patient schedule an appointment. It's all that kind of low-hanging fruit. So does that mean that you, when you're seeing patients in the hospital now, will take a chart and feed it through your favorite AI agent and ask for a summary and walk into a room much more prepared? Yes, but not my favorite AI agent because one thing we can't do and shouldn't do is take your medical record and stick it into a public version of chat GPT. Fair enough. So what version do you use? We use, we're now about to you at UCSF. We have a partnership with chat GPT, so we have a version that's inside our firewall. It's actually within my electronic health record. I take a look at your record and I see it's longer than I have time for. And I click a little button and it will summarize a 600 page.
Starting point is 00:20:54 document in 30 seconds, the way it will summarize a 600-page book in 30 seconds. Give me an example of how that has worked out for you so far? It just makes my life easier. And if I'm seeing you and you have a past history of having had a blood clot 20 years ago, and that's on page 397 of your 600-page record, and I miss that, I may not make the right decision about whether you need a medicine to try to prevent a blood clot if you're going to be in the hospital. One of the points I make over and over in the book, I use Biden's old line,
Starting point is 00:21:29 don't compare me to the almighty, compare me to the alternative. So even if this chart summarization is imperfect, and the data says right now, it's very good, but not 100% perfect. I recalled in the book a patient I saw a long time ago, the patient had a history of a pulmonary embolism, a blood clot to the lung,
Starting point is 00:21:47 which is really a bad thing to have had and probably means you're going to be on a blood thinner, which can be dangerous for the rest of your. your life. I happen to have a few minutes before I saw the patient. I'm doing a little head scratching like, oh, that's funny. The patient had a history of a pulmonary embolism. The patient had no risk factors, no family history. That's kind of unusual. And so I'm flipping through the chart. And finally, I found where this history of a pulmonary embolism, which we often shorten as PE came from. The other thing we shorten as PE sometimes is physical exam. And the patient,
Starting point is 00:22:19 20 years ago had a physical exam, which the doctor labeled as PE and wrote the patient's physical exam under that. The next doctor, probably in a rush, looked, saw the initials PE, and on the patient's problem list, now the patient had a pulmonary embolism. And that stuck to the patient like gum on a shoe for the rest of their life if I hadn't caught it. So for all our concern about hallucinating or bullshitting by AI, human intelligence is quite fallible, we should say. Intelligence and time. This was not a matter of someone being. not intelligent, it's just there's no way to get the work done that needs to be done. Is it cutting down on pajama time? Absolutely, absolutely.
Starting point is 00:22:58 And in a way that is very meaningful for physicians to the point that it has led them to be open to, okay, that was great. What's the next thing? Okay. So what is the next thing for AI in healthcare? We were just told in medical school, we can't detect these forms of cardiovascular disease using this test. But we ask ourselves, could AI do exactly that? That's coming up after the break. I'm Stephen Dubner. This is the Freakonomics Radio Guide
Starting point is 00:23:26 to Getting Better. If you haven't heard the earlier episodes in our series, they are sitting right behind this one in your podcast queue, and we will be right back. Bob Wachter, who is chair of the Department of Medicine at the University of California, San Francisco, has been telling us that AI has recently been proven super helpful to health care providers by acting as a digital scribe and cutting down on other paperwork. So that's great. But how about some more ambitious uses of AI in healthcare? For that, we will go to Pierre Elias.
Starting point is 00:24:10 He did research with Wachter at UCSF, took leave from medical school to work for an AI healthcare startup, then went back and got his medical degree in 2016. And what is Elias up to now? I'm a cardiologist at Columbia University. I'm an assistant professor in biomedical informatics. And I'm also the medical director for artificial intelligence for New York Presbyterian Hospital. Okay, that's the part I want to hear more about.
Starting point is 00:24:33 What does that mean exactly to be medical director for artificial intelligence at a big urban hospital chain like that? So my center develops, validates, and deploys AI technologies to help us find patients with diseases so that we can take better care of them. We run the largest cardiovascular AI screening program in the country. How big is it? And is it just your organization in all its branches? or does it go beyond your organization? The majority of our work happens within our organization. This is eight hospital centers, 180 clinics in the greater New York area. But we really do try to make a lot of this work exist outside of those walls.
Starting point is 00:25:10 I co-founded a consortia called Train Cardio, the Task Force for Research Advancement and AI and Cardiology. This is 20-plus institutions around the country where we regularly validate each other's work, collaborate on large projects, and when possible, freely share. that information or that data with the world so that other people can build upon it. When you talk about building out this network of people like yourself, give an example of a particular type of project. A number of years ago, I got a call in the middle of the night from an outside hospital, and they said, we have a patient that we think we need to send to you urgently. This was a gentleman who had shown up three months before in their emergency department,
Starting point is 00:25:50 and he had shown up with some chest tightness and shortness of breath. They checked him out. They They ran a battery of tests and they said, listen, you're not having a heart attack. We hear a murmur in your heart. You should go see a doctor about that, but you're fine. You should go home. He feels better. He goes home. He's waiting to see his primary care doctor.
Starting point is 00:26:08 Two months go by, and he has an episode of the same sensation, this chest tightness and shortness of breath. And he goes back to that same emergency department. But this time, he's in respiratory distress. They end up having to send him to their intensive care unit. They have to intubate him and put him on a ventilator. and at that point they do an ultrasound of his heart, and they see that he has severe valvular heart disease.
Starting point is 00:26:31 We have four valves in our heart, and like the plumbing at home, it can get rusty or leaky, and if it gets really severe, it can become life-threatening. Columbia is a world-leading center in valvular heart disease, and so they were calling me in the middle of the night, saying, we really need to send this guy to you. I said, absolutely. He came, and I spent the rest of the night working on him,
Starting point is 00:26:49 but unfortunately by the morning, he was in multi-organ failure. He arrested and he passed away. This was a gentleman who otherwise had no past medical history was otherwise healthy, and I will never forget having to sit down with his partner and say, I'm so sorry, there's nothing I can do for him. The thing I became convinced of was if we had just known about this disease, we would have been able to do something about it. He could have gotten a same-day outpatient procedure, and I think he'd be alive today. And it's hard to imagine that you couldn't have known about the disease when he had been,
Starting point is 00:27:20 A, in an emergency department at a different hospital, and then B, sent to you. So how rare or difficult to detect is this disease? This is the fundamental challenge in all of medicine is you can't treat the patient you don't know about. Oftentimes, we're waiting until patients develop symptoms, but for many diseases, symptoms are a late presenting case. I became obsessed with this question, which is we don't have a screening test for the most common cause of death in the world, which is most forms of cardiovascular disease. And the reason for that's relatively straightforward. The way that we diagnose most forms of cardiovascular disease today
Starting point is 00:27:57 is either too invasive or too expensive to do at a population level. Too invasive would be what? Cardiac catheterization where we poke you in your arm or your groin and then we shoot some dye into the vessels of the heart to take a look at them. How often is that performed on a healthy patient? Never. You would never do a cardiac catheterization on a healthy patient. patients would be presenting with symptoms like chest pain or inability to exercise before you would consider doing a cardiac catheterization. Okay. And the too expensive option then would be what?
Starting point is 00:28:28 It would be an echocardiogram, which is an ultrasound of the heart. An echocardiogram costs a few thousand dollars. It's an hour-long procedure where they look at the heart in a bunch of different angles. And so we end up in a situation where the way we diagnose cardiovascular disease is either too expensive or too invasive to do at a population level. So we wait until patients oftentimes present with symptoms, which is late in their disease course, and patients have worse outcomes because of that. Okay, so all you need to do is what then? All you need to do is magically find a way to create a cheap, ubiquitous test that can screen for the most common cause of death in the world. And I became obsessed with this. I asked myself, well, is there anything that we're doing today that could fill
Starting point is 00:29:10 that role? And what came up with was the humble electrocardiogram. So if you've watched, watched any medical drama and you see the squiggly lines in the background that go beep, beep, beep, that's an electrocardiogram where we measure the surface electrical activity of the heart. If you have an Apple Watch, you can do a one-lead electrocardiogram right now. We are taught from medical school onwards that you cannot diagnose those diseases with an electrocardiogram. It's simply taught as not possible. But we asked ourselves, could AI do exactly that? We built one of our first AI models, and we were shocked to find that it worked really well.
Starting point is 00:29:43 We tested it on nearly 20,000 patients in a retrospective data set, and we found this AI model could outpredict me in trying to find valvular heart disease from the electrocardiogram. And then we asked ourselves, well, could we find all forms of structural heart disease from an electrocardiogram? And that set us off on this journey of the last five years, where we built this technology called Echo Next, which looks at an electrocardiogram and tells us, does this patient have structural heart disease or not? Did you identify that relationship, or is it just that the machine learning helped you find the cases in which one was related to the other? I didn't. Only AI did. And for a long time, we thought one of the Holy Grails would be, hey, you know, the AI would see something. It would tell us what it saw. We could teach it back to doctors, and then they would go and see that. But it doesn't work that way. Modern AI techniques still don't do a very good job of explaining what is it that the AI sees. And the AI does. not think the way a doctor does. A doctor has a very specific way of interpreting this sort of medical data, and we've shown in a series of studies and experiments, the AI doesn't care about that. The AI is not thinking the way we think. It's doing its own thing, and we can't fully explain
Starting point is 00:30:55 what it's doing, but what we do know is it can see things that we can't, and it can accurately predict patients who have structural heart disease much better than I can. We took 13 cardiologists from around the country, and we had them look at 3,000 electrocardiograms, and then we asked AI for these electrocardiograms, does the patient have structural heart disease or not? So we asked a yes, no question. Random chance would be 50%. The cardiologists were at 64%. Oh, boy, that's not very good. It's not very good. But, you know, the cardiologists weren't surprised. We didn't think that the cardiologist would do very good. They didn't think they would do very good. Then we asked
Starting point is 00:31:34 AI, and it was 78%. The cardiologists were as far away from random chance as they were from the AI model in being able to predict which patients had structural heart disease or not from their electrocardiogram. Did you then use the two in tandem, the AI and the humans? We did, and the cardiologists got a little better, but they weren't as good as AI alone. The cardiologist with AI were 68%. What's funny is myself and my co-creator Tim Pororuka, we did the survey as well, but we recused ourselves, from the results, and we were no better than the other cardiologists, even though we built the model. Wow. The really shocking finding, though, was half of the patients that the AI model thought were high
Starting point is 00:32:15 risk for undiagnosed structural heart disease don't go on to get an echocardiogram in the next year. So what that told us is there was smoking gun evidence that there was a lot of people out there with undiagnosed structural heart disease. And this is like clinically significant disease. This is the sort of stuff where if I told any doctor, I believe that a patient had heart failure or severe valvular heart disease, we would all agree that we need to clinically act upon this now. And that led to us running something called cactus.
Starting point is 00:32:42 This is the largest cardiovascular AI screening trial in the country. It's happening in eight emergency departments in the greater New York area. Any patient who shows up in any one of these emergency departments, if they get an electrocardiogram, they're automatically being screened by the AI model for undiagnosed structural heart disease. One of the most striking was a young man who had been, been exposed to smoke during the L.A. fires and then had recently moved to New York. He had shown up
Starting point is 00:33:10 with some shortness of breath in the emergency department, and they had presumed that this was asthma and bronchitis, and they had sent him home with an albederol inhaler, saying, follow up if you need to, but nothing else is necessary. It turns out that Echonex told us this patient was very high risk for undiagnosed structural heart disease, such a high risk that we actually got the procedure done urgently, he had severe heart failure and he was ultimately found to have a rare genetic mutation that put him at a one in four chance of dying before it would have been diagnosed. He ultimately underwent a heart transplant and is home with his family today. When you look at your colleagues, who's embracing AI or deep learning as you sometimes call it, who's ignoring it,
Starting point is 00:33:54 who's maybe actively opposing it and so on? Many people in health care, they don't have an AI reluctance. They have a technology reluctance that is a learned response from previous data, meaning any time someone comes with some new technology, it often makes their life harder. This new technology is supposed to help us optimize billing, but requires you to spend less time directly taking care of the patient. This is why I think it's really important for healthcare practitioners to be part of the process of creating this technology. Do you feel like you're sort of at the front edge of a movement? 1,000%.
Starting point is 00:34:32 No one was talking about this, you know, seven or eight years ago when I started doing it. It's been 10 years of people saying, no, that's not possible, slamming the door on your face, and very much having to take a step out on the bridge while you're building it.
Starting point is 00:34:47 The moments where I feel like I'm really doing science is when I genuinely do not know the answer to the question, but I know it's important to answer. And if it works, it would be fundamentally groundbreaking breaking to the way we think about practicing medicine.
Starting point is 00:35:03 Coming up after the break, as any new technology spreads, there are the inevitable winners and losers. The tech companies are playing this as we have no interest in replacing the doctor. We really want to be a co-pilot. We want to be your wingman. But they obviously do want to replace the doctor. I'm Stephen Dubner. This is the Freakonomics Radio Guide to Getting Better.
Starting point is 00:35:25 We will be right back. Many companies are investing many billions of dollars to build out new AI infrastructure in healthcare for all sorts of applications. Clinical treatment and risk prediction, drug discovery, revenue and staffing operations, on and on. There's also one massive incumbent to consider the Electronic Health Record Company, Epic, which claims to maintain at least one record for 325 million people. Here again is Bob Wachter, chair of the Department of Medicine at UCSF, and the author of A Giant Leap, how AI is transforming healthcare and what that means for our future. Epic won this market. This was a little tiny company started by Judy Faulkner in the basement of an apartment off the University of Wisconsin.
Starting point is 00:36:24 She became the most successful female entrepreneur, probably in American history. It's really remarkable story. I think they won because they were the best. The best was partly integration. Judy's theory of the case was we're not going to bolt on 37 different tools by a bunch of different companies. We're going to own the entire thing, and that is going to allow us to provide an integrated solution. And medicine is so complex and there's so many moving parts that if you don't have an integrated solution, the thing's not going to work very well. I do wonder how much of this kind of sclerotic nature
Starting point is 00:37:00 of the electronic health record market and the attendant difficulty in using those data to move forward, as people have been trying to do in the past but not succeeding very much. How much of that is due to the fact that Epic already has a lot of success by playing the status quo
Starting point is 00:37:16 and that they don't have much incentive maybe to innovate or to let others play with their data in a productive way? I think all of that is true and generally monopolies are bad. I think it's going to become more true in the coming years because of AI, than it has been true over the last 10 years.
Starting point is 00:37:35 I don't think Epic is the main part of the problem up until now. And the reason I distinguish the past and now is once AI became a thing, so much of our data is in the form of narratives in a medical record, which until generative AI, you know, was really unstructured data that was not useful, you could analyze your hemoglobin and your creatin. and your creatinine and an EKG finding, but not my note, which might be a page long of narrative, and try to sick the old kind of AI on this is a 62-year-old man
Starting point is 00:38:13 with a history of congestive heart failure who comes in with shortens of breath and chest pain. Impossible. The AI couldn't deal with that. So it was really not computable until recently. Now that it is, and now that generative AI creates the capacity for all sorts of magic. the idea that Epic is going to own the entire enterprise and you're going to need to use Epic built tools for all of the different use cases of AI and there are going to be hundreds.
Starting point is 00:38:40 I think that's going to really slow things down. And the government is forcing Epic to become more open and more amenable to bolting on third-party tools. So who do you think will ultimately win the AI healthcare platform wars? Do you think it'll be the big incumbents like Google and Microsoft? Do you think it'll be the well-funded AI startup? So it maybe be someone or something else? I was on Google's Healthcare Advisory Board when Eric Schmidt came in and said, this is one of the biggest things that people search on.
Starting point is 00:39:10 We're going to figure this out. And they started building a version of their own electronic health record. About a year and a half later, Eric came in and disbanded us. He said, this is too hard for us. And I said, wow, it's too hard for Google. That must be pretty hard. Google, Amazon, Apple, Nvidia, Facebook. Microsoft, all have designs on healthcare.
Starting point is 00:39:30 They know how important it is. They know how big an industry it is. They all seem to screw it up time after time. It's not that they don't have enough smart people or enough resources. It is that so much of health care is local baseball. How payment works, how doctors think, how things are organized. They're just so far away from the day-to-day workflow. The real winner here probably is epic, meaning that this advantage of incumbency of having all the data
Starting point is 00:39:58 in it already. And so far, they are building AI tools for basically everything. They're probably not as good as a tool built by a third-party company that's focusing on this one use case, but the advantage of having an integrated tool and building tools that are good enough. And if I'm a hospital, I'm trying to decide, do I wait for EPEX tool? All I have to do is turn a switch and it's on, and I'm sure EPEX can be in business in five years. Or do I buy a tool from this really cool, new startup down south of market in San Francisco, but I'm not sure they're going to be in business. A healthcare organization like mine, their default setting is to buy it from Epic rather than to buy it from the third party. I assume that everyone in the world has tried or is trying to buy Epic?
Starting point is 00:40:41 Everyone in the world has tried to buy Epic. And the answer from Judy Faulkner is now 82 is that it is not for sale. She does not accept any investment. Is there a next generation of Faulkner leadership? It's probably an internal person because the culture of the company is pretty insular. But they seem like they want to remain private for the foreseeable future. Not just seem in her will. It says that it must remain private. It goes to a consortium of current employees and her family and cannot be sold. And how do you feel about that? Judy had a partner in the early days and the argument they had was over this. He said we need to accept VC funding. We're not going to grow fast enough. and she argued that we need to own the entire process here
Starting point is 00:41:26 if we're going to create this integrated system where all the pieces fit together. And he interviewed a couple of years ago, he said, obviously she was right and I was wrong. They have been a massively successful company, and the product they produce is quite good. We use it, and I'm reasonably satisfied with it. I think over time, there's no way that a single company can produce the best AI tools for use cases
Starting point is 00:41:49 that span the range from an AI scribe to an AI diagnostic support system, to an AI tool that deals with the insurance company, to an AI tool that facilitates clinical research. It's just no way that one company sitting on farmland 10 miles from Madison can possibly do that. Their ambition is to do that. I think the world is a better place if this is more open to third-party innovators bolting in, and that's going to require more federal push to do that because that is not in the company's So what do you see as the proper role of government in regulating AI and healthcare generally? Well, I think that's one. One is to create a level playing field and to be sure that innovators have a fair chance to succeed in an environment where there is a tendency toward monopoly, in part because of this idea that one company owns all your data.
Starting point is 00:42:41 The issue of regulating AI tools, I have a chapter on it, and I came up with the very unsatisfying answer of this is, really hard. And our existing structures, meaning the FDA or the Joint Commission, which currently accredits American hospitals, are not fit for purpose for this tool. The FDA can regulate a new radiology tool because they've regulated devices forever. They've regulated pacemakers and defibrillators, and they've regulated drugs. They can regulate a tool that gives a static answer, the same answer three years from now that it gives today. And who, who, you know, implementation, the way we put it in at UCSF versus the way it's put in in a rural hospital 100 miles from here, probably isn't going to change all that much about the way a torvastatin
Starting point is 00:43:31 works on your blood pressure or a CT scan reader works in giving you an accurate reading. But most of AI is not like that. Most of AI can shape-shift tomorrow compared to what it did today. Most of what it's doing is giving answers or insights based on the literature. How is that different than regulating a textbook or a medical journal? So I don't think we have even begun to think about what the regulatory infrastructure looks like to get this right. I think the Trump administration is going too far in an anti-regulatory way, but I do think the risk of over-regulating this in the short term is higher than the risk of under-regulating it. There are sufficient guardrails in the health care system, maybe with the exception of direct-to-consumer use of these tools.
Starting point is 00:44:17 But in terms of my institution, a place like UCSF, is a $15 billion business that has a terrific reputation. It's surrounded by a whole lot of malpractice attorneys. We're going to be really careful about bringing an AI into our system until we're pretty darn sure that it's going to work and be effective and not hurt anybody. There are a lot of existing guardrails, along with just professional conservatism, that I think if you try to have government regulate every single decision-support. tool. That's impossible. There's no way they can keep up with the speed of innovation here. So I think we've got a lot of work to do to figure out the regulatory environment, but in the short term, I think a relatively light regulatory touch is the right way to go. So that's what Bob Walker has been thinking about, how to lay out a plan for a full-on merger between AI and healthcare. And let's say that
Starting point is 00:45:08 merger happens. What should we expect? What are some of the most wonderful benefits to us, the users, of the healthcare system. I went back to Pierre Elias to ask about that. I see the ability for every patient to receive exceptional quality of care, and in a cost-effective and affordable way, we make sure that the right steps are being taken for each patient so that they're being effectively triaged, being screened for diseases the way they need to,
Starting point is 00:45:39 and wherever there is uncertainty about what should happen next, we actually are able to quantify what that degree of uncertainty is and help patients navigate through that care journey without second-guessing what is actually going on. How do you see the role of the physician changing and I would assume this means that medical education should be changing quite a bit too, yes? I think the role of the physician continues to be
Starting point is 00:46:06 that of the person on the journey with you. I think about all of the scariest, and most important things I've done in my life, and I think about the people who were there to guide me through it. Those people leave indelible marks on you and help guide you through this challenging process. It may not be that mechanically they're having to do all of the work. It may be that the important thing is helping you reflect
Starting point is 00:46:36 on what this journey means, helping find the places where there are lingering questions and uncertainty, and helping prepare you for what the next steps are going to be. I practice cardiology in a different way than I was trained to because I believe we have to think about the way we practice medicine in new and novel ways, and AI is going to allow us to do that. I think for now it would be a mistake to take too much of clinical reasoning and the facts of medicine off the plates of trainees.
Starting point is 00:47:08 That's Bob Wachter again. Because I think you could easily enter a death spot, where the AI is better than the doctor because the doctors are getting worse. You're talking about deskilling now, yes? I'm talking about deskilling. Is the deskilling argument, which is that, you know, if physicians continue to rely on technologies like AI, they will lose the ability or the skill to actually do what they used to do. Is that an argument being made by the physician incumbency to scare off AI?
Starting point is 00:47:37 Partly, but not completely. You know, there's good deskilling and bad deskilling. I will admit to you, I have deskilled on map reading. I can no longer read a map. The question about deskilling in medicine is complicated. There are parts of deskilling, for example, the physical exam were definitely not as good at as we used to be. There are elders who lament that partly because the physical exam was about its clinical value and partly about the laying on of hands and sort of the connection between the doctor and the patient. But I think it gets romanticized. I would rather have a cat scan than my lung exam to try to figure out what's going on in your lungs or your abdomen.
Starting point is 00:48:18 So there are certain parts of deskilling that just happened because the new technology is better than what we used to do, and you no longer need the technology. Last year, a study published in the Lancet Gastroenterology and Hepatology Journal, one of my favorite journals for light reading, looked at this question of potential deskilling because of AI. Very experienced gastroenterologists who did this procedure called colonoscopy looking up into people's colones were given an AI colonoscopy tool, which puts a little box around lesions inside the colon that it deems suspicious and find some things that the doctors will
Starting point is 00:48:57 miss. They had access to the tool for three months. The doctors liked it and they benefit from it. Then the tool was turned off. Their performance on doing their colonoscopy. fell significantly after the tool was turned off. These were doctors who had an average of 10 years of experience doing this procedure. So in just three months of exposure to this AI crutch, they got less good at this thing that they had been doing for 10 years. These questions of how the AI interacts with
Starting point is 00:49:26 human actors in really complex systems where the stakes are high and the AI is getting better every minute and the humans are not, I think are really fascinating. Let's say that AI in healthcare delivery continues to improve and augment the lives of patients and physicians the way that you're describing in this book, the way that you hope it does. I'm curious what you see that looking like, and especially how the role of the physician changes. We did an episode a few years ago about what are called co-bots, robots that are collaborative. This was in nursing homes in Japan. They were these big physical robots that could help lift a patient clean and so on. And the finding from research around them was that the healthcare workers actually loved it, A, and B, were able to lean into
Starting point is 00:50:16 what they as humans are good at, which is dealing with patients on a human level rather than just moving them around and getting them to the bathroom and stuff like that. So if we use that as the sort of model here, let's say for a physician like yourself, or maybe someone a generation or two younger, if AI unfolds a way that you're hoping it does in 10 years, let's say, what, What does the physician get to do that maybe they're really great at now that some of the burden has been lifted by AI? I think the fundamental question of AI in healthcare is not creating my note or reviewing my chart. It is computerized decision support. It is the AI helping me make the best decision for you as a patient based on things about you, but also about the medical literature, which evolves, changes very, very quickly.
Starting point is 00:51:05 and the best decision is not only the one that provides the best outcome, but the one that's the most cost effective. It's been said that the most expensive piece of technology in a healthcare system is the doctor's pen, which of course is no longer the pen. It's the keyboard. Where this really will have an impact is I'm seeing you in my office or in the hospital. And it's not like I now pull out my phone and say this is a 52-year-old man who comes in with this, this and this. the AI is already reading your chart. It already knows all those things about you. And it's suggesting, in real time, suggesting diagnoses and suggesting what the right tests would be and what the right treatments would be. Now, do you need me in that setting? I think so, but we'll have to see. I think you need me to interpret all of this, to be a tiebreaker when there's a tough call, to deal with some complex, sometimes ethical issues, to weigh your own preferences. as a patient or family. There's a lot of complexity in this that I think goes beyond the kind of decision-making support
Starting point is 00:52:10 that Waze gave me this morning when I drove into the studio. The tech companies are playing this as we have no interest in replacing the doctor. We really want to be a co-pilot. We want to be your wingman. But they obviously do want to replace the doctor. I think for the foreseeable future, the complexity of medicine, the stakes of medicine, the regulatory environment, and periodically, I do have to say to a patient, you have Alzheimer's disease or you have cancer.
Starting point is 00:52:38 I don't think patients are going to accept the idea that a bot's going to tell them that. Although you do write about the fact that AIs can do better in the empathy realm than humans. That was one of the shockers in those early years, which is really only two years ago. When we saw AI passing the medical licensing boards or doing well on really tough clinical cases, it was like, okay, it's pretty smart. And then studies began to come out saying if you did a blinded trial of a patient actor being given answers either by doctors or by AI, they often preferred the answers from AI. And the AI appeared to be more empathic. Of course, the AI has no empathy, but it can fake it really, really well. Some of my doctor friends have been telling me that their patients will come to them, having used ChatGPT or some other AI bot to go through some scenarios or symptoms or whatever,
Starting point is 00:53:31 and the docs seem generally pretty happy about it. And this is, to me, in stark contrast to a trend from 10 or 20 years ago, when direct-to-consumer advertising for pharmaceuticals began on television and patients would come and say, okay, I just heard about this thing. I just need you to sign me up and give it to me. And they feel now, at least this is from my friends, that the AIs are becoming a pretty decent research tool for the patient to then bring to them, the expert. I'm curious if you're seeing that, and how you feel about that trend generally. Seeing a ton of it. I mean, they were doing a version of this with Google, but these tools are better. And so the answers they're getting are better. I think it's net positive. I think anything that democratizes health care is going to be good, assuming the answers are reasonable and correct, and assuming that patients, when they need to see a doctor, still see a doctor. And I think that's the open question here. Okay, Bob, here's what you've said that you hope AI can do: produce better outcomes for patients,
Starting point is 00:54:39 lower costs, and add some relief for beleaguered doctors and nurses. Then you say, however, that the success of all this will depend on history, politics, economics, pride, regulations, leadership, lawsuits, guilds, culture, workflows, inertia, greed, hubris, vibes, and zeitgeist, as much as graphical processing units, diffusion models, and neural networks. In other words, the tech can work. But then we get those layers of people who may feel that their realms are being infringed upon. So the way you describe it there, it sounds like what some people like to call a wicked problem, which is basically unsolvable because there are so many constituencies, and so many of those constituencies have incentives that are at cross-purposes with the other constituencies. So when you take a look at the big picture, how much optimism do you have?
Starting point is 00:55:25 Do you think that the upsides of this technology will be able to be successfully integrated into health care delivery itself? Or do you think that AI becomes yet another piece of the mess that is the U.S. healthcare system? I would interpret that very, very, very long sentence as saying it's not just about the power of the incumbents, although that's a very real part of it. It's about the complexity of medicine. It's about the regulatory environment, which, for important reasons, says there are certain things that we're going to restrict people's ability to do or require that a human does it rather than a bot. It just says that this technology can be really, really spiffy and still not deliver, because so much of it depends on humans and their systems and their governance and their culture and their own self-interest. I harken back to the old Yogi Berra-ism: in theory, there's no difference between theory and practice; in practice, there is.
Starting point is 00:56:24 I think that's what we're going to see. In practice, that's where the rubber is going to hit the road here. I think it's going to be net very positive. Patients think that too. There was a Gallup survey last year. When people were asked their attitudes about AI, they were really negative about its impact on jobs, on the political system, and I sort of feel the same way. The one area they felt positively about it was in medicine.
Starting point is 00:56:46 I think that's partly because the AI is really good and partly because the system is so screwed up. Everybody recognizes that we are in desperate need of reform in health care, and our typical go-to response in medicine when we can't do what we need to do is we just hire more humans. A, we can't afford it. We're already 20% of the GDP and bankrupting businesses and people and governments. But B, we can't even find the humans, even if we could afford them, at least for the foreseeable future. This is going to help me do my job. Help me be more of the doctor or the nurse I want to be, help me focus on the patient. So that leaves me optimistic over the next 10 years. Will there be jobs for doctors 20, 30 years from now? Well, unless I live to 120,
Starting point is 00:57:30 that's of no relevance to me. But, you know, I think there will be. That again was Robert Wachter, whose new book is called A Giant Leap. We also heard from Wachter's one-time mentee Pierre Elias at Columbia University. My thanks to both of them. I learned many things in this episode. Hope you did, too. I especially liked learning a little bit about Judy Faulkner and Epic. Now I'm hoping we can bring her on the show sometime. This is the final episode in our Guide to Getting Better series. Let us know what you thought. Our email is radio@freakonomics.com, or you can leave a review on your podcast app.
Starting point is 00:58:11 Also, if you want to keep up with everything we do around here, you can sign up for our newsletter at Freakonomics.com or at stephendubner.substack.com. Coming up next time on the show, for the Super Bowl, we will tell you why NFL running backs don't get paid the way they used to. And then in a new two-parter, we will look at what it really means
Starting point is 00:58:34 to cheat. People like to call it cheating. You can call it that. I'm not sure who was cheated, but that's just what it was. If you won the Tour de France while doping, but everybody else was also doping, were you the cheater? And what would happen
Starting point is 00:58:50 if there were no rules against doping? My goal is to bring about the 10th age of mankind, the enhanced age, where everyone has the opportunity to become enhanced. That's coming up soon. Until then, take care of yourself. And if you can, someone else, too. Freakonomics Radio is produced by Stitcher and Renbud Radio. You can find our entire archive on any podcast app, also at Freakonomics.com, where we
Starting point is 00:59:15 publish transcripts and show notes. This episode was produced by Delvin Abouaji, and Ed. edited by Ellen Frankman. It was mixed by Jasmine Klinger with help from Jeremy Johnston. Special thanks to Rochelle Walensky for background research help. The Freakonomics Radio network staff also includes Augusta Chapman, Eleanor Osborne, Elsa Hernandez, Gabriel Roth, Elaria Montenacourt, Teo Jacobs, and Zach Lipinski. Our theme song is Mr. Fortune by the hitchhikers, and our composer is Luis Guerra. What kind of cheese do you have on a bagel, though? Munster.
Starting point is 00:59:51 Oh, no kidding. I'm a big Munster fan, too. It's the best. The Freakonomics Radio Network, the hidden side of everything. Stitcher.
