Making Sense with Sam Harris - #467 — EA, AI, and the End of Work

Episode Date: March 30, 2026

Sam Harris speaks with William MacAskill about effective altruism, AI, and the future of humanity. They discuss the post-FTX recovery of the EA movement, global health and pandemic preparedness, the limits of quantifiable ethics, the intelligence explosion, risks of concentrated AI power, what a post-scarcity world might look like, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:06 Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast,
Starting point is 00:00:25 and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Will MacAskill, thanks for joining me again on the podcast. It's great to be back on. Yeah, I don't know how many times this is, but it's many. Yeah, I think this is maybe number four on the main podcast.
Starting point is 00:00:45 Yeah, yeah, awesome. Well, you are my go-to guy on so many ethical questions, with effective altruism being the frame under which we think about these things. The 10th anniversary of your book, Doing Good Better, has just come upon us. There's a new edition. That's right, yeah. And what's changed about the actual text? So in the text, the statistics are updated.
Starting point is 00:01:09 And there's a new foreword, which is responding to some objections and reflecting a little bit on the last 10 years. Right. Growth in some of these ideas. Well, the last 10 years have been eventful for EA. I think the last time we spoke, we dealt with much of the controversy around Sam Bankman-Fried and FTX and all of that brain damage. Is there more to say about that? I mean, how is the EA movement slash community doing now, and what has been the net effect of all of that? Yeah.
Starting point is 00:01:37 I think the main thing to say is that, obviously, that was a huge hit. It was like a huge knockback. But now, if you're looking at the influence of the ideas, which is really what matters, then there's been an enormous restoration of growth. So if you look at, for example, how much money is being moved to effective non-profits, that figure actually grew just kind of steadily, even through these periods of, you know, drama and cryptocurrency implosions and so on. But over the last year, best guess is it grew about 50% and is closing in on $2 billion a year now. And that's not just from, you know, a small number of
Starting point is 00:02:20 large donors. Actually, it's across large donors and small donors and so on. Similarly, if you look at Giving What We Can members, so people who have pledged 10% of their income, that had year-on-year growth of about 20 or 30 percent. Similarly, if you look at people engaging with effective altruism as a movement via conferences and so on, that is also growing really quite healthily. So I think the overall story is that, yeah, that was a huge hit, but the underlying ideas are very good. Yeah. That means that maybe things are a little bit less in the public eye, but people are still being convinced of the importance of giving more and giving more effectively, or using their career to do good. And I think that's got momentum all of its own. Right. Well, let's talk about those
Starting point is 00:03:05 pieces. I mean, for me, the biggest change in my life that I ascribe to effective altruism in general, and I mean, this is, and your influence in particular, has been the pledge. I mean, just deciding in advance to give a certain amount of money or a certain percentage of money, in this case, you know, 10% of pre-tax earnings, just knowing that that, on some of that money isn't even mine, you know, when it comes in the door, because it's been pre-committed to causes that seem important. That's just, I mean, it's an enormous kind of psychological change and just a life benefit, and it's just, you know, I've discussed this with you before, but, I mean, it's just fun and virtuous and just, it just seems good all the way around. The places where I remain
Starting point is 00:03:47 uncertain whether EA has all the wisdom it should have to inform the conversation is around just what constitutes effectiveness. I mean, how we think about that. that way, like the list of causes that are on the menu if you're EA versus, you know, causes that are almost by definition not on the menu. I think you're in your current thinking, you're arguing that we should expand the footprint of philanthropic targets beyond what is traditionally thought of as obvious EA causes. Maybe let's just start there. So when people think about effective altruism and its causes, what is the short list of causes that are obviously on the menu? Yeah. So firstly, thanks for
Starting point is 00:04:25 bringing up your 10% pledge. And one of the amazing things looking back at, you know, the last 10 years, including from our first podcast was 10 years ago, was the impact that you taking the pledge and being public about it has had, where we're now up to 1,200 10% pledges that have come from people who follow this podcast. And over $30 million of donations that have moved. So we're talking about, you know, thousands of lives saved there, which is pretty cool. Yeah. Hopefully we haven't undone those benefits with something else I've done on the podcast.
Starting point is 00:04:59 Fingers crossed. But in terms of, yeah, areas of focus, a huge one is global health and development. And still, that's where most of the philanthropic money that gets directed goes. So just maybe I'll touch each of these as you send it over the net. The obvious cynical retort to the wisdom of that is that people should be more concerned about suffering that's close to home. You know, America is kind of retrenching now under the influence of not just the orange menace in the Oval Office, but lots of people who, if they weren't EA, were EA adjacent in Silicon Valley. I mean, all the tech bros who kind of went MAGA are, to my eye, at least, building a kind of iron wall of cynicism against many of the values that, you know, you've just begun to articulate.
Starting point is 00:05:54 And one brick in this wall is certainly this notion that philanthropy doesn't really work. You know, sending money to Africa is just kind of foolish. It's, you know, I mean, you might be helping people, some identifiable people. But, I mean, we've really doged all this so effectively now under the wisdom of Elon and his in-cell cult that, you know, we just saw that all of this. These are just all criminals who were wasting our money over there with you. USAID. The money should be used at home, and it should be used for the most part. I mean, philanthropy is just a boondoggle. It's just, we should be just building businesses that are effective and solving problems that we want to solve. And this seems to be the genius of Silicon
Starting point is 00:06:33 Valley and it's top people now. So this first claim that global health is such an obvious target and that the differential value of every dollar over there is so much more than it is here that, you know, you can do so much more good with a single dollar in sub-Saharan Africa. then you can do it in Menlo Park. There's the argument, but what do you say in the face of the cynicism? Yeah, I mean, I think this rise in cynicism is a terrible shame. And in fact, I think it will probably result in hundreds of thousands or millions of lives lost. So here are some things that are true.
Starting point is 00:07:12 Building companies. Let me just, sorry, interrupt you again. But let me just add that you might have seen the Lancet study that suggests that Elon's dismantling of USAID will cause 14 million people to die unnecessarily in the next five years from infectious disease, 4.5 million of whom are under the age of five. Now, I mean, those numbers, I would bet my life that the tech bros will be, you know, frankly incredulous when they hear those numbers, but let's discount them by a factor of 10. I mean, let's say it's only 1.4 million people, 450,000 under the age of five, right? It's still an enormous evil.
Starting point is 00:07:48 So mind-boggling numbers. Yeah. Exactly. And so it's true that building companies can be a great way of improving the world. It's also true that much aid can be ineffective, even sometimes harmful. That is just not true for the most effective global health and development interventions, which have saved hundreds of millions of lives over the course of the last 50 years. Even the leading aid skeptics like Bill Easterly will proactively say, of course, I'm not talking about global health. That has had enormous benefits. And when you look at the most effective organizations, you can show with high-quality evidence,
Starting point is 00:08:25 randomized controlled trials, that these save lives. And in fact, the donations that have gone via GiveWell have saved hundreds of thousands, best guess, over 340,000 lives now. This is at a cost of about $5,000 per life, whereas in the United States, a typical, or, you know, good, low cost to give someone one year of life is about $50,000. So you're looking at, kind of, in the United States, giving someone an extra month of life for $5,000
Starting point is 00:08:55 or saving a child's life for $5,000 in a poorer country. Right, right. Okay, so global health, what's the next area? A next big one is animal welfare, in particular farm animal welfare, where every year, about 90 billion animals are raised in factory farms and slaughtered. And the conditions they live in are truly associated.
Starting point is 00:09:18 These are the worst off animals in the world, such that, in fact, I think when those animals die, that's the best thing that happened to them because their lives are full of such suffering. And there are things we can do to have enormous impact. So, organizations within the kind of broader, effective altruism ecosystem, championed and then funded corporate cage-free campaigns, going to big retailers and restaurant chains and advocating for them to cut out the use of eggs from caged hens. and there were many pledges to do so. Ninety-two percent of those pledges have been fulfilled. Now, every year in the United States alone, there are three billion chickens that would have been brought up in caged confinement
Starting point is 00:09:59 that instead have at least somewhat significantly better lives. And that was on the basis of really quite small amounts of money. We're talking about tens of millions of dollars for these campaigns. So if you're concerned about the well-being of non-human animals and what are the just worst off creatures in the planet? Well, the amount of impact you can have per life there is just absolutely enormous. I think that factory farming is one of the worst atrocities that humanity is committing today, and sadly it's getting worse every year,
Starting point is 00:10:31 but we can make this extraordinarily large impact on it in absolute terms. So, yeah, so this is one area where perhaps my own cynicism creeps in. I worry that any focus on suffering beyond human suffering, it risks confusing enough people so as to damage people's commitment to these principles. So, I mean, I'm not the zero defensive factory farming coming from me here, but when I see a philosopher who's clearly, you know, EA or EA adjacent, arguing on behalf of the welfare of shrimp and claiming that maybe, you know, maybe the worst atrocity perpetrated by humans is all of the mistreatment of shrimp because they exist in such numbers and, you know, live such terrible lives. one imagines, though I don't really have strong intuitions about what it's like to be a shrimp. I just feel like those kinds of arguments, and this is where kind of the kind of vegan dogmatism
Starting point is 00:11:24 can come in, like you can occasionally find a vegan who's arguing that we need to actually do something, you know, with the state of nature, so to protect the rabbits from the fox's kind of arguments. This begins to look like a reductio at absurdum of just the whole enterprise. I mean, you're like, okay, okay, you know, I feel like people then declare, on some level ethical bankruptcy. They said, like, okay, I'm just going to worry about me and my family and my friends and figure out what to do on the weekends because these philosophers have gone crazy. They're telling me that I have to worry about shrimp now. And I worry that the same thing is now in the offing. We'll talk about this when we talk about AI when we start talking about the possible suffering of digital
Starting point is 00:11:58 minds. Now, I'm not actually prejudging the intellectual case you can make for the plausible suffering of shrimp or the likely suffering of some digital minds, but, or even if not now in the in the future. But I just think if we're going to push the conversation to a place where we're asking people to care about how NVIDIA's latest chips feel, you know, in some configuration, it's going to be, again, whatever is true, remaining agnostic as to what is true or will be true once we, you know, build more powerful AI. I mean, I just think even the Dalai Lama is not going to be able to shed a tear about digital minds. That's a, an epistemological boundary. But even if it's not epistemological. I think it's an emotional boundary for most people, at least for the longest
Starting point is 00:12:44 time. Okay, great. So lots to unpack there. So I actually, personally, am not convinced by the shrimp argument, but the thing I want to defend is people really taking ethical ideas, including quite weird-seeming ethical ideas, seriously and reasoning them through for themselves. Where perhaps, you know, there are some groups which should be just really thinking about PR and how ideas will be received, and kind of trying to build some kind of broad coalition on that basis, I think some people just need to be trying to figure out just actually what is moral reality at the moment, what might we be missing.
Starting point is 00:13:27 So there's this historical period that I got very obsessed with in writing my last book, which is the early Quakers, which led to the British abolitionist movement. It actually led to the abolition of the slave trade and then of slave owning globally, in fact, of chattel slavery. And boy, those people were weird early on. Like at the time, I mean, the idea that it would be immoral to own slaves was regarded as laughable. Let alone that many of them were vegetarian, and that was just absurd. What next, they'll be saying that women should have the vote?
Starting point is 00:14:01 they should be pacifists, which they also were. And looking back at ideas that we now think of as utterly morally common sense, like equal rights for women or like the idea that's utterly immoral to own slaves, let alone completely absurd things like men having sex with men or something, these are things you would have been mocked for, maybe even regarded as kind of the pulpit, you know, apobius for suggesting. You can also add to that the picture that was given. to us by Descartes and others that, you know, animals as complex as, you know, dogs and apes could experience no pain, right? So they would just vivisect dogs by nailing their feet to boards
Starting point is 00:14:42 and then just performing, you know, surgery on them while alive. Yeah, or even torturing cats for entertainment. Right. Yeah. Yeah. So we have this long track record of humanity getting morality wrong really quite badly. And those people who pushed, early on for those changes being a, yeah, what I call a model weirdo. And I think at least some groups need to be in the business of really trying to figure this out. And maybe that means that lots of people will say, okay, I'm into effective giving, but not effective autism. That comes with all this baggage. And then I'm like, I don't really mind about labels. I don't really mind it. There may be, yeah, there are other people that can just take some parts and
Starting point is 00:15:27 leave others. But I think this kind of coldland of ideas and intellectual and like model exploration and seriousness, including when it comes to esoteric ideas like Schlimb or like digital minds or perhaps something else, I think is something important and something I, you know, I really would like to protect in fact. Right. Okay. So you got global health and animal welfare. What else is canonical EIPA? Yeah. So another is pandemic preparedness, which, you know, again, inviting this book and thinking about the last 10 years when I was first on the podcast, you know, we had these more speculative areas like pandemic preparedness of AI. Who knows if that's going to happen?
Starting point is 00:16:08 Exactly, on either count. And, you know, that's something I'm personally particularly excited about, because the things that we can do are just so slam dunk. And even despite a pandemic that killed tens of millions of people and caused trillions of dollars of damage, you know, what sort of lessons did the world learn? Maybe people became more skeptical of vaccines. Yeah. Yeah. Yeah. Yet there are things that could absorb, you know, a lot of money, not enormous amounts globally, but hundreds of millions to billions. We could have mask stockpiles. We could build and deploy lighting that kind of sterilizes the air. Often these things look good,
Starting point is 00:16:49 even if you're just concerned about the economic impacts of colds. Right, right. We could be monitoring wastewater for any sort of new viruses. These protect against regular, normal pandemics like we've seen throughout history, but they also protect against novel pandemics, where we have the ability now to create and build new viruses, new pathogens. At the moment, that ability is constrained to people with sufficient skills in a handful of labs, but the equipment needed to do so is not that expensive, and it's getting cheaper all the time. The knowledge needed to do so is becoming more and more democratized. And this is something that we really want to get ahead of, because it's really not that unlikely to me, maybe I'd say one in three, that we will just see
Starting point is 00:17:39 waves and waves of new pandemics as a result of people tinkering with viruses, you know, ultimately in their basement, and it leaking out. So you're imagining just, like, endless lab leaks, or you're imagining that plus biological terrorism? I'm thinking the most likely thing is lab leaks, where obviously there's this big debate about COVID, but let's just put that to the side. Leaks of viruses from labs are just extremely common. In fact, I think on average, for every 100 person-years of people working in even the highest-security labs, a virus leaks out.
Starting point is 00:18:16 So in the United Kingdom, the foot and mouth disease, which I remember from the kids seeing just millions of like cow carcasses being burned. Yeah. That was because of a, that was the result of a lab leak. Right. Where the same lab, in fact, leaked the virus two weeks after getting reprimanded for the beginning of before. It's actually just very hard to contain viruses, and so small mistakes can lead to leaks.
Starting point is 00:18:39 But yeah, it could be that. But in even worst case scenarios, yeah, bioterror attacks or just the threats of that. So North Korea could have a lot more bargaining power on the world. stage if it could credibly say and is in fact reckless enough to say, well, I have these bioweapons, we could release them. Yes, we would suffer mass casualties too, but, you know, I'm the dictator. I don't mind so much. Okay. Well, so what else is on the list beyond pandemics now? Yeah. And then the biggest one I would say as a kind of final category, though there are many other categories too, including kind of scientific development, scientific innovation, certain kind of pro-growth,
Starting point is 00:19:20 like sensible pro-growth policymaking as well, but is issues around AI, where again, this has been a worry for many years, was regarded as, you know, when I wrote doing it better, utterly sci-fi, you know, something for the year 2100 perhaps, but not for now. I mean, I still remember that. Like when I gave my AI talk at TED, which was exactly 10 years ago, I remember, I mean, I just as a kind of rhetorical device, just said, for argument sake, let's say we're not going to get there for 50 years, right? But I remember when I said that, I wasn't predicting that time frame, but it seemed totally plausible to think it might take 50 years. There's no one talking in terms of 50 years as far as I can tell now.
Starting point is 00:20:01 Yeah, exactly. And I was the same. I just had these huge other bars on when AI could come. And it's been a lot faster than I expected. I think you sent me a link or in one of your articles there was a link to this stat that as of like 2022, AI research, researchers forecast that it wouldn't be until, it was something like, you know, 5% or 2% of AI researchers thought that AI would win the Math Olympiad in like, by like 2025. I mean, it was just not, I mean, it was a total outlier position, but that's exactly what
Starting point is 00:20:37 happened. Yeah, exactly. So both machine learning experts and forecasters have all been taken by surprise. Yeah. By just how fast progress has been. in particular on domains related to reasoning, so mathematics, coding, and so on. And we're now in this very strange situation where actually the progress in AI capabilities is remarkably stable over time, which is what I would say stable exponential progress,
Starting point is 00:21:08 where there are gains in how much computing power are just being thrown at AI for training, for experimentation, for inference. there are gains in algorithmic efficiency, so how much of a punch can you get from that computing power? And then when you look at how does AI perform, whether that's on benchmarks or in terms of the time horizon of human equivalent tasks, so a task that might take a human three minutes or 30 minutes or three hours, that just occupies this relatively smooth exponential trend where at the moment AI is for software engineering can do tasks that human would typically take a few hours to do. That seems to be doubling something like every four to six months.
Starting point is 00:21:57 So in, you know, think about it a year's time, maybe you've got AI that can do what would take a human a week. Year after that will be a month and so on. And so that really changes the dynamic of how to think about AI, whereas 10 years ago it's much more based on kind of abstract arguments, how do agents behave. and so on in general. Now we can do experiments on AI systems to get a sense of how they act, what the risks are, what the potential benefits are.
Starting point is 00:22:27 And we can have a lot more confidence than we used to be able to have on when certain capabilities are coming. And in particular, the really scary point in time is when the AI loop feeds back on itself and you are able to automate via AI the process of doing AI research itself.
Starting point is 00:22:47 And they're good arguments, and my organization has done some kind of deep dive investigation into this question for thinking that you get this big leap forward and capability at that point in time. All right. We're going to jump into AI in a minute. I think that'll be the entire second half of our conversation. But you used a phrase a moment ago that caught my attention. You said something about positive growth or it just flagged for me that almost invariably our discussion about ethics and our discussion about EA, in particular. is kind of negatively valence. We're just talking about the risks that need to be mitigated,
Starting point is 00:23:22 the suffering that needs to be alleviated. But there's this other side of the question always, when you're talking about human flourishing, we also need to think about the positive goods that remain unactualized. And a failure to actualize them is also another cost, right? And I think I've seen people argue that it's, in many respects, it could be,
Starting point is 00:23:47 a larger cost. I mean, there's a, and I think there's an asymmetry in our thinking and in our experience where, where suffering gets weighted more heavily, I mean, which is to say that the, the worst pains are, are worse than the best pleasures are good, right? However, you want to grammatically finish that sentence. But I do that, I mean, when you, when you think about what's possible for us on the good side of the ledger and how, you know, I mean, just, we, we know nothing about the horizons of the good, really. I mean, how good could human life be and what are the, you know, how can we wait the opportunity costs of the present? I mean, the things we're doing now that prevent us from actually exploring, you know, the deeper reaches of human flourishing
Starting point is 00:24:30 and the ability to make a society that is, that allows for us to spend time there, as opposed to just putting out fires and figuring out how not to kill one another. That's also part of the calculus. Absolutely. So medicine often has this idea that it just wants to the store normal functioning and the point of medicine is to if someone is below normal, we'll get them back to normal. But it doesn't care at all about going from normal to very good. Yeah, so you're not going to be in the Olympics. We just want to get you out of bed. Yeah. Except what counts as normal functioning obviously changes over time. And it is true, I think, that in the world today for present day people, you can often have more of an impact by preventing suffering than by kind of enhancing
Starting point is 00:25:16 people to have even more well-being. But that's a contingent fact. And I do think that future generations will look back at our lives today and think, oh my God, they missed out. They didn't have good. And then insert goods like X and Y and Z in the same way as, you know, take our lives and imagine a different society where no one experienced love. And you'd think, well, that's how impoverished that society would be because of this absence of a good. And so I do think that when we're looking towards the future, we should be trying to think, yeah, not merely just how can we eliminate obvious causes of suffering, but actually how can we perhaps have a life that's radically better today than today,
Starting point is 00:26:01 where the best days in my life are hundreds of times better than a, you know, typical day. I would like more of that. I would like more of that for everyone. Right. Yeah. So I do think in those terms a lot when I look at the kinds of things that capture our attention, certainly in politics these days, I do view almost everything as an opportunity cost. And so this actually brings me back to my initial question and concern around EA in specifying how we think about effectiveness. I mean, so the E and EA is effectiveness, effective altruism. And insofar as there's a bias toward the quantifiable and a bias toward hitting the targets that we just described,
Starting point is 00:26:43 things like global health or pandemic risk, et cetera, or just existential risk more generally. I worry that we're sort of blind to obvious problems that are, you know, the intervention into which would be hard to quantify, certainly in advance, but they're blocking everything. I mean, like if you could imagine a project that would have, you know, and this doesn't even sound like an expensive one, but if we could have done something in advance to have inoculated the tech bro slash Manosphere podcasters against the charms of Trump and Trumpism, right? I mean, it's like Joe Rogan and the All-In podcast and Theo Vaughn and all these guys who put Trump on for hours at a stretch and didn't ask him a single skeptical question and just normalized his idiocy and dishonesty for just a vast audience.
Starting point is 00:27:30 I mean, I think it's not too much to think that that, you know, since he only won by whatever, of 1.5 percent. That was a, among the many things that perhaps overdetermined his victory, that was one of those things. And there you wouldn't have happened. And then you just look at what an opportunity cost, our current politics, and you know, America's current retreat from the world are disavowal of value. I mean, all the values we're talking about in this podcast, America as a country has completely disavowed them. I mean, we don't, we don't care what other nations do. We certainly don't care about climate change. I mean, there might be five people on Earth now who have the bandwidth to think about climate change. We don't
Starting point is 00:28:04 care about nuclear proliferation, and I think we're, you know, our retreat from the world is going to usher in a new era of that, so that if you're talking about existential risk, you know, that seems like a bad thing. The, I mentioned Elon and his dogeon, you know, if the Lancet is even remotely right over how many people will needlessly die as a result of that alone. I mean, that's, again, that was all downstream of a bunch of dummies talking to Trump in ways that could have easily been prevented if they only knew to prevent them. But like, that's not a project if, you know, it's not the most realistic thing that you would target with philanthropy, but it is the kind of thing that, you know, if you could have gotten your hands around that lever,
Starting point is 00:28:43 that's arguably more important than anything that's on Givewell's website right now, right, given the opportunity costs we're looking at in the unraveling of American values and American politics. So I just, I'm wondering how you think about being charitable and allocating resources in the context of problems. that often have that shape, just like the shape of what social media is doing to us and our capacity to cooperate about it, to solve any problem. If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
Starting point is 00:29:25 The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at samharris.org.
