Making Sense with Sam Harris - #162 — Medical Intelligence

Episode Date: July 3, 2019

Sam Harris speaks with Eric Topol about the way artificial intelligence can improve medicine. They talk about soaring medical costs and declining health outcomes in the U.S., the problems of too little and too much medicine, the culture of medicine, the travesty of electronic health records, the current status of AI in medicine, the promise of further breakthroughs, possible downsides of relying on AI in medicine, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Welcome to the Making Sense Podcast. This is Sam Harris. Okay, I'm in lovely London, getting ready to record some podcasts. It really is lovely. The weather is perfect. That makes London especially nice. So, I'm going to record a housekeeping here and then get out of my hotel room. A few things to say that have no relationship to today's podcast.
Starting point is 00:01:13 I am recording this right after the Andy Ngo assault in Portland, a few days after. That has played out on Twitter. This strikes me as entirely the product of Twitter or of social media in general. This is like a physical manifestation of all that is crazy online. I think these protests probably wouldn't occur. Andy Ngo, the journalist who was attacked, probably wouldn't have been there. All of the acrimony and insanity that one witnesses in the aftermath would have no forum. It's a very strange phenomenon. I'll catch you up for those of you who don't know what I'm talking about. Andy Ngo is a journalist and editor at Quillette, which is an online magazine that's often unfairly described as being conservative. It's conservative in the way that the IDW,
Starting point is 00:02:14 the Intellectual Dark Web, is conservative. It's really just a centrist magazine that has spent a lot of time criticizing the insanity on the left. So it is branded by the left, certainly the far left, as conservative, if not enabling of fascism and racism and xenophobia and Islamophobia. All of those things have been alleged. Now, I don't know Andy. I think I met him once very briefly. He covered the release of the documentary Islam and the Future of Tolerance, which depicted my collaboration with Maajid Nawaz. I don't know his personal politics, and his politics are absolutely irrelevant to what happened in Portland.
Starting point is 00:03:03 I didn't contribute much to the resulting cacophony on Twitter. I posted one thing, but I'll just say a few things here. So what has been happening in Portland, apparently, is that Antifa, the so-called anti-fascist cult has been demonstrating periodically and allowed to do so with real impunity by the mayor, Ted Wheeler. And my one tweet on this topic tagged him. It seems to me he's been totally irresponsible in the scope he has given to these protests. You may have seen video with Antifa stopping traffic and pulling people out of cars. It's madness. It's a complete breakdown of social order. And in the video where you see Andy Ngo attacked, that's what you witness, a complete breakdown in social order. And apparently the
Starting point is 00:04:06 police in Portland have been told not to intervene by the mayor. Anyway, this is the kind of story that will be picked up by the right wing. You know, Andy Ngo will be on Fox News talking about his attack. One can only hope that mainstream sources like the Washington Post and the New York Times will talk about Antifa honestly here. Antifa is often described as a group of people who are protesting the extreme right. Well, they may be doing that, but they're also attacking innocent bystanders and journalists. So what we have here is a group that imagines it opposes fascism, but they behave just like fascists. And perhaps this is no surprise, if you travel far enough to the right or to the left on the political spectrum, you find yourself surrounded by sociopaths. And Antifa, while there may be some blameless
Starting point is 00:05:13 members of this movement, seems to be chock full of sociopaths, at least judging from their handiwork that you can see attested to in these videos. But anyway, the response to this phenomenon, which again is a total breakdown of civil society, right? You've got people who are attacking non-violent bystanders in a context which, again, appears to be a pure confection of social media, because most of the people in these protests, most of the members of Antifa you see, are also filming. I mean, everyone has their phones out or their cameras out filming themselves to broadcast this online. It is a bizarre moment. Anyway, the video that shows Andy getting attacked starts after the attack has occurred. I mean, there's a few other videos, so you can sort of triangulate on this, but
Starting point is 00:06:11 the video that's widely being shown is one which starts after he's already been hit at least once, and then you see someone run up and hit him twice in the face as hard as he can. And then I think the same attacker then returns a moment later to kick him in the groin twice as hard as he can. There's a few things to point out about this. When you punch someone in the face as hard as you can, especially when they're not prepared for it, I mean, you just blindside them. There is absolutely no guarantee that you're not going to kill them, right? I mean, people get hit in the face, knocked out, they fall down, they hit their head on the pavement, and they die,
Starting point is 00:07:02 right? This happens. It's not a high probability way to murder somebody, but it's not an especially low probability way of doing it either, right? Especially if you know how to throw a punch. I mean, if you knock someone out cold and there's only concrete to catch their fall, you can certainly kill someone this way. So you should be morally prepared to deal with that aftermath, right? To know that that's what you're doing, and to know that you may very well spend a long time in prison as a result of what you've done. And I might add, in prison, you might meet some real neo-Nazis and aspiring fascists to keep you company. And that's actually what one hopes for these people in the video.
Starting point is 00:07:54 If you think this is effective political work, a way to get people to worry more about authoritarianism and about the heavy-handedness of the state and about the rise of the far right, it has absolutely the opposite effect. You know, you see a few videos of Antifa, you want the far right to show up, and you certainly want the state to clamp down on this kind of behavior. This has absolutely the opposite political effect. It will guarantee four more years of Trump, at a minimum, if this kind of thing becomes more commonplace. And what's especially damaging is for the left to get this so wrong ethically online. I mean, here you have leftist journalists from,
Starting point is 00:08:49 you know, Slate and Vice and other organizations supporting this attack on Andy, at the very least blaming him for having brought it on himself, right, for being there. Why were you there in the first place? You knew that all your prior coverage of Antifa caused them to hate you, right? This is just so wrong-headed. If the left can't get this right, if liberals can't get this right, we have some very dark days ahead. Anyway, back to the attack. So he gets punched in the face twice. He gets kicked twice. Then he gets milkshakes and eggs thrown at him and dumped over him. These are not people who have hit him in the face themselves. These are people who, upon witnessing a totally non-violent person get punched in the face hard twice and kicked in the groin, decide that their contribution to this moment is to then hurl a milkshake or an egg at him
Starting point is 00:09:54 or some other object. He gets hit with other things as well. It's not clear from the video. I'll also point out that the person who punched him in the face was wearing black gloves. A lot of these guys wear these tactical gloves that have reinforced knuckles. You know, some people ride motorcycles with these gloves, but these are also gloves that members of the military wear. It's not like getting punched in the face with a naked fist. Imagine kind of hard plastic knuckles being built into vinyl gloves. So that only makes things worse. So watch the video and rewind it and just follow each beat in it. You'll see a few people trying to protect Andy, but this whole thing is so ugly, and it could get so much worse so quickly.
Starting point is 00:10:50 There's been some discussion about whether or not the milkshakes that were being thrown at Andy actually had quick-drying cement in them. Cement apparently is quite caustic and therefore can burn you. This stuff is being thrown in his eyes. Right? So I don't know if that was the case, but the whole thing was ghastly and made especially so because in the aftermath, you saw people, people who have reputations they should worry about, defending this violence and ridiculing anyone who complained about it. Or they'll immediately pivot to, well, what about, where were you during Charlottesville, right? Or putting kids in cages at the border is worse, right? That whataboutery completely misses the point. Yes, there are many things to complain
Starting point is 00:11:47 about and worry about. And I spend a fair amount of time talking about what's wrong with Trump and what could become far worse with him, given another four years. And I'm also concerned about the far right. But I'm concerned about the complete breakdown of moral intelligence in the mainstream left at moments like this. This is a crystal clear and very dangerous violation of the most basic norms of civil society. Attacking a journalist, beating him, and publicly humiliating him for merely covering a public protest, it should be impossible for liberal people to get their analysis of this wrong. And yet, they reliably do. Anyway, that was the big thing that happened in the last few days.
Starting point is 00:12:47 It bears absolutely no relationship to the topic of today's podcast, and now I will move on. Today I'm speaking with Eric Topol. Eric is a world-renowned cardiologist and the executive vice president of the Scripps Research Institute. He's actually one of the top 10 most cited medical researchers and the author of several books, The Patient Will See You Now,
Starting point is 00:13:16 The Creative Destruction of Medicine, and the book under discussion, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. And we do a deep dive into the current state of medicine. We talk about why we have soaring medical costs and declining health outcomes in the U.S. We talk about the problems of both too little and too much medicine. We talk about how slowly the field has adopted useful technology,
Starting point is 00:13:50 and then we get into the current status of AI in medicine and how it could completely transform the field, mostly for the better, but possibly in some ways for the worse. Anyway, I found it a fascinating conversation. I felt it brought me up to speed with these rapid changes. And now, without further delay, I bring you Eric Topol. I am here with Eric Topol. Eric, thanks for coming on the podcast. Oh, great to be with you, Sam. So if I recall correctly, we met at a whole genome sequencing conference, and I was impressed both with the promise of sequencing the genome at that point and also impressed in the aftermath that there seemed to be almost nothing to do with the information.
Starting point is 00:14:41 It felt like it was a few years too early. I mean, are we at a point now where if we had met at that conference, there'd be more that would be actionable? Are we still in kind of a place where there's not a lot to do with one's whole genome being sequenced? Well, it's definitely improving. So whereas when we first met, it might have been less than 1% chance it would be actionable, now it's getting up to 5%. So it's definitely getting better, but we still have a ways to go. And it'll take having like a billion people with whole genome sequencing and all their data to finally make it very informative. Well, it is cool, but we're sort of, I mean, we're going to talk about this in some depth
Starting point is 00:15:20 in response to your new book, Deep Medicine, where you're talking about how we can use AI, not just with respect to genetics, but really all of medicine. But before we dive in, what's your background as a physician? I'm a cardiologist. I started in practice in cardiology in 1985. So I've been kind of an old dog 30-some years now. Yeah, and then you started the Scripps Institute for Translational Medicine? Yes, that was back in the beginning of 07. It was basically a new, broadened mission of Scripps Research, which had been since 1923 a basic
Starting point is 00:16:07 science institute. And this is really the applied limb, which is giving it a lot of translational medical research capabilities. Right. So I guess start with a big picture before we get into the high-tech discussion here. It does seem that medicine is broken in many ways, and our discussion will mostly be focused on the U.S. In the U.S., we spend, you know, you have this from your book, $11,000 per person per year on medicine, and, you know, that's still climbing. In 1975, I think it was something like $550. And yet our outcomes don't compare very well with the rest of the developed world. How do you account for that? And how do you view the rising expenditure and seeming plateauing, or in some cases, declining outcome measures? Well, you're absolutely right about the numbers, Sam.
Starting point is 00:17:05 And the problem is not just lowered life expectancy in the U.S., now three years in a row, which is unprecedented; it extends to all the important metrics like infant mortality, childhood mortality, maternal mortality, and on and on. So when you look at
Starting point is 00:17:26 why the model in the U.S. has gone south, you start to see, well, there are two likely explanations. A big one is that we have major inequities in our care. We don't provide care for all citizens, unlike all the other countries we're being compared with. The other extreme is that we overcook, that we do too much. So the people who have coverage, they get over-tested, over-treated, and that leads to all sorts of problems, including bad outcomes. So we've got lots of serious problems. Yeah, well, I must say I feel like I have a fair amount of experience with the latter problem of too much medicine
Starting point is 00:18:15 or at least too much medicine being offered. And it's often said that we have the best medicine in the world if you're well-off or well-connected. And yet, I always find it incredibly humbling and fairly depressing how hit or miss my encounters with medicine are. I'm not a doctor, but my background in neuroscience gives me a better-than-average position as a consumer of medicine. But I also find whenever I get put into the machinery of the medical system, whether it's because I'm sick or because someone close to me is sick, one of my kids is sick, rather often I experience a fairly tortuous adventure where,
Starting point is 00:19:02 as you said, either too much medicine is offered or it could be drugs with serious side effects that are kind of dispensed with a totally cavalier attitude. Risky procedures are recommended almost reflexively. And, you know, there's a whole process of declining to go down this path rather often. And then, as you know, most conditions are self-limiting, and then you feel totally justified for having declined. And then, you know, there's experiences where, you know, scary diagnoses are given only to be overturned by a second opinion, and diagnostic tests are ordered where it's revealed that there really was no thought behind them, because basically the doctor was going to recommend
Starting point is 00:19:46 the same treatment or the same lifestyle change regardless of what showed up on that particular test. I mean, I find my encounters with medicine weird, you know, more often than not. And I consider myself to be probably in the most fortunate possible position with respect to being a consumer of medicine, with the possible exception of your own, where you're a celebrated physician, right? You're a physician with... You're not just an average physician, you're a very connected one, and you've made significant contributions to your field, and yet you open your book with a totally harrowing encounter with your own medical history.
Starting point is 00:20:33 I'm sure you've talked about this a lot, because you open your book with it: an experience of medical malpractice which you as a physician still, it seems, couldn't protect yourself from. Right. Well, Sam, it was harrowing. That was a good word to assign to it. I was having a knee replacement. It was almost three years ago now. And I had thought it would be pretty straightforward because I was pretty physically fit and thin and relatively young compared to a lot of people who have knee replacements. And I had referred many patients to the same orthopedist, so I had some confidence. But what happened was I had a disastrous post-operative complication; I'd never even heard of the word arthrofibrosis.
Starting point is 00:21:28 And part of that really was I had a high risk that I didn't know about, because I had a congenital condition called osteochondritis dissecans, which set me up for that. So this really was horrendous. You know, I couldn't sleep. I was in pain. I was taking opiates. And I showed up, in this really bad state, with my wife to the orthopedist about a month after the surgery. And he said to me, you need to get some anti-depression medications. And I said, what? So this is like the shallow medicine, robotic. I mean, here's a human expert who did the surgery.
Starting point is 00:22:14 That wasn't the issue. It was the post-operative care. And I think that's telling. I think that almost everyone I talk to now has had, either on their own or through family members and loved ones, a roughed-up experience. And that's what it was for me. Yeah. So maybe this doesn't account for your experience. I mean, on some level, there's a fair amount of bad luck there.
Starting point is 00:22:42 I mean, and also just, I mean, obviously the diagnosis was missed or your risk potential for that complication was missed. And we can talk about the way in which AI might make that less likely to happen. But I don't know, it feels like there's just a problem in the culture of medicine. I mean, medicine is kind of a priesthood. I mean, it's like the way people relate to doctors, it's a far less straightforward transaction with respect to the use of another person's expertise. And it's difficult to navigate for almost anyone because in part it's the subject matter. I mean, you're dealing in many cases either with life and death questions or a legitimate concern about significant disability or suffering or risk.
Starting point is 00:23:34 And I don't know, we know so much about how impossible it is for people to navigate their own cognitive biases. I mean, we know that physicians are making diagnoses based on their clinical experience in ways that really distort, you know, their sense of probability, and the accuracy of diagnosis is way off. I mean, this is something you touch on in your book
Starting point is 00:23:59 by reference to Danny Kahneman and Amos Tversky's work. There's something about the culture that, again, we haven't yet introduced robots into the equation here, but I mean, can you say anything about that? I mean, my impression here is fairly inchoate, but I just realized that there's, I mean, just the process of, you know, getting second opinions is often weird, and what you do with opinions that can't be reconciled. I mean, how do you see the effect of putting on a white lab coat on the conversation and the relevant cognition?
Starting point is 00:24:36 Right. Well, you're touching on this medical paternalism, which is the sense that, you know, the doctor is a know-all entity. And that wasn't as big a problem decades ago, when there was a lot of trust, there was presence, there was a deep relationship, and really an intimacy, an inner human bond. But what's happened over time is that the paternalism has persisted. And at the same time, there's very little time with patients. It's very much a lack of presence because, you know, doctors are looking at keyboards and they really don't have the time to cultivate a relationship. So it's gotten much worse. It's the same problem, the basic problem of authority, control, don't question my opinion. What do you mean you need a second
Starting point is 00:25:33 opinion, when everyone should be entitled and feel very comfortable to have that second opinion? But this doesn't fit in any longer, because there's not a relationship. It's eroded so seriously over the last three or four decades. It's interesting. Despite how much we're spending on medicine each year, and again, the costs are just going up and up, the field is actually very slow to adopt new technology. And this is something that we've all noticed with the transition to electronic health records, which has seemed somewhat dysfunctional and somewhat haphazard. As far as this adoption of tech goes, medicine, apart from the introduction of some new scanner from time to time, seems more like the FAA dealing with old equipment than like Silicon Valley dealing with the latest breakthrough in consumer tech.
Starting point is 00:26:39 How do you view medicine and tech in general? Yeah, it's a pretty sad story. A lot of people think digital medicine arrived with the electronic health record, and that was an abject failure, a disaster, because when those were introduced, they were set up for billing purposes without any consideration of how that would affect either patients or doctors or other clinicians. So really, that was actually the motive. It wasn't to be able to aggregate information better? No, no.
Starting point is 00:27:14 It was just to have really good billing, to not miss things. It's amazing. And it's never really improved. It's the most clunky, pathetic software across all the different companies that are in this business. And that has led to doctors becoming data clerks and has been one of the most important aspects of why there's such profound burnout in the medical field, with more than half having expressed that they are burned out, but also over 20% with clinical depression, and the highest numbers of suicides ever in the medical profession.
Starting point is 00:27:53 And is there anyone tracking just the actual use of doctors' time with respect to this new technology? Has the experience of being a doctor become more one of dealing with records and insurance and all the rest, you know, year by year? Exactly. So what's happened, I mean, the most recent study found that medical residents were spending 80% of their time without any contact with patients, because they were working on electronic health records and administrative tasks. And all the recent time studies
Starting point is 00:28:31 that have really delved into this show a two-to-one or greater ratio of time away from patients. So this electronic health record, which is unfortunately the precursor of bringing the digital world into the medical profession, has backfired. It's really been a serious hit to the care of patients. Mm-hmm. And what about other technology like diagnostic imaging? And I remember,
Starting point is 00:29:02 you know, I've had a few adventures in cardiology, which is your wheelhouse, you know, like a CT scan, a calcium score scan. And, again, the way in which this imaging has been dispensed to me... I mean, you know, I've done it, and, you know, happily, I guess, I would probably be telling a different story if something scary and actionable were found and I had felt my life was saved by it. But the way this was dispensed to me was just kind of cavalier. It was just like, we now have this new tool, let's use it. And I got to the end of the process, and it was pretty clear that it just didn't make sense, in my case, to have done this. And so how do you view just these intrusions of new machines, which could be very useful, but are used in cases where there's just no reason to use them?
Starting point is 00:30:06 And I guess we should also talk about the prospect of type 1 errors here, where people get false positives, which they then go chasing with yet more intrusive procedures and incur other risks. Exactly, for that too. The problem here is we've got a lot of good technologies, but they're misused. They're overused. So the example you gave of a calcium score with a CT scan to see whether or not you may have coronary disease, that test is terribly overused. I have never ordered that test. And mine was worse. I had an angiogram. I didn't just have the ordinary CT. So many people have been alarmed because of their calcium score, even though they have no symptoms. Or others have been told their lives have been saved because they were whisked away from the CAT scan to then have an angiogram and stents or even a bypass operation. So, you know, cardiac cripples have been a result of some of these scans in patients without any symptoms. And it's really unsettling. So this is an exemplar
Starting point is 00:31:23 of so many tests that we have today that can be helpful in certain individuals, but can be very harmful as well. And these particular harms, so I guess there are two problems here. We have the underuse or lack of availability of medicine to people who really need it and who have substandard care in a first-world society, our own, that doesn't compare favorably to the rest of the developed world. But then here we're talking about the high-class problem of having a more consumer relationship to advanced medicine, where you have access to what are ostensibly the best doctors, the best hospitals, the best information, the new scanners. And although even there,
Starting point is 00:32:12 I mean, just to give you a reference point for this, this angiogram. So I went to, you know, a highly regarded cardiologist on the assumption that, you know, whatever scanner he would be putting me in would be the latest, lowest-radiation-dose scanner. And then I get the scan and I see the amount of radiation delivered. And I just checked this with a friend who's a physician who has access to similar doctors. And he said, yeah, if I had ordered the scan, you know, you would have gotten, you know, one third the amount of dosage there. So it's like, I'm not quite sure why you got put in that scanner. And just the fact that there's that kind of variance, I mean, you know, I'm not especially paranoid about this. I understand that this doesn't raise my cancer risk all that much. But the fact that in the most prestigious, networked circles there could be that kind of variance is just bizarre to me.
Starting point is 00:33:18 Well, you've just touched on something that is a pet peeve of mine, which is, why don't we tell patients, when we order a test, or say they should have a test that uses ionizing radiation, how much radiation they'll be exposed to? That is, we don't have to use the millisievert units. We could say it's equivalent to how many chest x-rays. All right. So with this physician, who I will not name but whose name would be known to you, as part of this pattern, I asked the perfunctory skeptical questions about whether this scan was necessary and what my dosage would be. And he said, well, yeah, it's analogous to you taking 10 flights to Hong Kong this year. Has someone told you that you shouldn't go to Hong Kong 10 times this year? And I said, no, no, that sounds fine. I mean, it's a lot of Hong Kong, but I can do that.
Starting point is 00:34:02 But then when I actually saw my dosage and did a little arithmetic, it was more like, you know, 150 to 200 flights to Hong Kong this year. Right. Right. So, I mean, it's just, you know, again, I guess I could be an airline pilot this year and it's okay. But still, to have that wrong by orders of magnitude is just bizarre. Well, and also, if you take it by number of chest x-rays, when you tell a patient that's like 2,000 chest x-rays, they say, no, no, I'm not doing that. So if we were just real about it.
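[Editor's note: the conversion Sam and Eric are doing in their heads here can be sketched in a few lines of Python. This is only a rough illustration; the dose figures below are assumed ballpark values (roughly 0.1 mSv per chest x-ray, roughly 0.07 mSv of cosmic radiation per long-haul flight, and a hypothetical 12 mSv angiogram), not numbers from this episode or from any particular scanner.]

```python
# Back-of-envelope conversion of an imaging dose into everyday equivalents.
# All values below are assumed ballpark figures, for illustration only.

CHEST_XRAY_MSV = 0.1         # assumed typical effective dose of one chest x-ray
LONG_HAUL_FLIGHT_MSV = 0.07  # assumed cosmic-ray dose from one long-haul flight

def dose_equivalents(scan_msv: float) -> tuple[float, float]:
    """Convert a scan dose in millisieverts into chest x-ray and flight equivalents."""
    return scan_msv / CHEST_XRAY_MSV, scan_msv / LONG_HAUL_FLIGHT_MSV

if __name__ == "__main__":
    scan_dose = 12.0  # hypothetical CT angiogram dose in mSv, not taken from the episode
    xrays, flights = dose_equivalents(scan_dose)
    print(f"{scan_dose} mSv is roughly {xrays:.0f} chest x-rays or {flights:.0f} long-haul flights")
```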
Starting point is 00:34:45 And the other thing you mentioned, I think, has to be underscored as well, which is that there's so much variability in the exposure to the radiation. So we have, again, this is out of paternalism. You're rare because you actually asked your doctor, but most patients just go and have the scan. Right. And so this is something that's just not right, because this is information that everybody should be entitled to, and they should be part of the decision of whether they want to accept that type of exposure to radiation. Okay, so let's bring in the robots. How did you get interested in AI? When do you date your awareness of it as a possibly relevant technology for you? Well, you know, I had been working in the prior times on digital medicine.
Starting point is 00:35:32 That was The Creative Destruction of Medicine. If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free
Starting point is 00:35:55 and relies entirely on listener support. And you can subscribe now at SamHarris.org.
