Making Sense with Sam Harris - #361 — Sam Bankman-Fried & Effective Altruism
Episode Date: April 1, 2024
Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF,... his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
Welcome to the Making Sense Podcast.
This is Sam Harris.
Okay, just a little housekeeping here. Over at Waking Up, we just introduced playlists,
which has been our most requested feature. Took a while to do that, but that seems like a very
auspicious change. You can create your own retreats. You can create playlists for any
purpose. Despite the name of the app, there's a lot of content there that is very good for sleep.
So you could create a sleep playlist. Many of us fall asleep to audio these days. So thanks to the
team over at Waking Up for producing that feature, among many others. The app is continually improving.
And what else? If you haven't seen Coleman Hughes on The View promoting his book,
that is worth finding on YouTube. Coleman was recently on the podcast discussing the book,
The End of Race Politics. He went on The View to do that.
As you might expect, he was bombarded with a level of moral and political confusion that
is genuinely hard to deal with in a confined space when one is short on time. And I have to say,
he really did a perfect job. I mean, it was absolutely masterful.
So it's worth watching in case there was any doubt in your mind about Coleman's talents.
If ever there were a commercial for the equanimity that can be achieved through mindfulness,
that was it.
So bravo, Coleman.
In the last housekeeping, I acknowledged the death of Danny Kahneman. I went back and
listened to my podcast with him, recorded at that event at the Beacon Theater in New York
about five years ago. I was pleasantly surprised. It's often the case that live events don't
translate into the best podcasts. I really thought this was a great conversation, and Danny was really worth
listening to there. So that was episode 150. If you want to revisit it, I really enjoyed hearing
it again. Okay, today I'm talking to Will MacAskill. Will is an associate professor in philosophy
and a research fellow at the Global Priorities Institute at Oxford University. He is one of the primary voices in a philanthropic movement known as Effective Altruism, and the co-founder of three
non-profits based on EA principles, Giving What We Can, 80,000 Hours, and the Center for Effective
Altruism. He is also the author of several books, including Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference, and most recently, What We Owe the Future.
However, today we don't talk much philosophy. Rather, we do a post-mortem on the career
of Sam Bankman-Fried and the implosion of FTX, and look at the effect that it's had
on the effective altruism movement. When we recorded
last week, Sam had not yet been sentenced, but he has since, and he was sentenced to 25 years
in prison, which is not as much as he could have gotten, but certainly more than the minimum.
I must say, that strikes me as too long a sentence. You'll hear Will and I struggle to
form a theory of mind of Sam in this podcast. We discuss the possibilities at some length,
but when you look at some of the people who don't get 25 years in prison for the malicious things
they do, I don't know, it does not strike me as a fair sentence. Perhaps
I'll talk about that more some other time. Anyway, Will and I talk about the effect that this
fiasco has had on effective altruism, the character of the EA community, potential problems with
long-termism. We have a brief sidebar discussion on AI risk. We discussed the effects of the FTX collapse on Will personally
and other topics. There's no paywall for this one, as I thought everyone should hear what Will has
to say on this topic. As you'll hear, despite the size of the crater that Sam Bankman-Fried left
on this landscape, I consider the principles of effective altruism untouched. And while I've
always considered myself
a peripheral member of the community, you'll hear me discuss the misgivings I have with it
once again here. I've discussed them on previous podcasts as well. I just think the backlash
against EA is thoroughly wrongheaded. And Will and I talk about that. As always, if you want to
support the podcast, there's one way to do
that. You can subscribe at SamHarris.org. And if you can't afford a subscription, you can request
one for free. Occasionally I hear rumors that someone has requested a free subscription and
didn't get one. That should never happen; something has gone wrong. So check your spam folder, and just request again if that happens to you.
We don't decline any of those requests.
And now I bring you Will MacAskill.
I am back here with Will MacAskill.
Will, thanks for joining me again.
Thanks for having me on.
So we have a lot to talk about. I've been wanting to do a post-mortem with you on the Sam Bankman-Fried FTX catastrophe. I don't think that's putting it too strongly, at least in EA circles.
So we're going to talk about what happened there, your perception of it, what it has done to the optics around effective
altruism and perhaps effective altruism itself. Where should we start here? I mean, perhaps you
could summarize what Sam Bankman-Fried's position was in the EA community before the wheels came so fully off. Where did you meet him? He certainly seemed like
a promising young man who was going to do great things. Perhaps we should take it from the top.
Sure, I'm happy to. And yeah, he did, from my perspective, seem like a promising young man,
even though that's very much not how it turned out. So I first met Sam all the way back in 2012.
I was giving talks for a new organization I'd set up, 80,000 Hours, which were about how you can do
good with your career. And I was going around college campuses speaking about this. And I can't
remember who, but someone put me and Sam in touch. I think he had been quite active on a forum for people who are interested in utilitarian philosophy, and so ideas like earning to give had been discussed on that forum. As a result, we met up for lunch, and he came to my talk. He was interested in a number of different career paths at the time,
so politics; earning to give was one.
Perhaps we should remind people what earning to give means, because it's really the proper framing for everything Sam was up to.
Sure. So earning to
give was the idea that rather than, say, directly working for a charity, instead you could deliberately
take a career that was higher paying, something you were perhaps particularly good at, in order to donate a significant fraction
of your earnings, where depending on how much you made, that might be 50% or more.
And the core idea was that, well, you could say become a doctor in the developing world,
and you would do a huge amount of good by doing that. Or you could earn more and donate enough to pay for many doctors working in the
same cause, and thereby perhaps do even more good again. And this was one of the things that I was
talking about at the time. And he found the ideas compelling. You know, we discussed it back and
forth at the time. I next met him
something like six months later at a vegan conference. We hadn't been much in touch in that period, but he told me that he'd got an internship at Jane Street, which is this quantitative trading firm. And, you know, that was very impressive. I thought
he seemed just this very autonomous, very morally motivated person. Animal welfare was his main focus at the time. He said later he'd also asked some animal welfare organizations: would they rather have his time, have him work for them, or would they rather he go make money in order to donate it to them?
And they said, we'd rather have the money. And so he went and
did that at Jane Street, but then subsequently left and set up a trading firm called Alameda
Research that was a cryptocurrency trading firm. And then a couple of years later, an exchange,
as in a platform where others could trade cryptocurrency, called FTX, in 2019.
Those seemed to be incredibly successful.
So by the end of 2021, he was worth tens of billions of dollars.
The company FTX was worth $40 billion.
And he seemed to be living up to his kind of claims.
He was saying he was going to donate everything, essentially
everything he earned, 99% of his wealth, something like that. And through the course of 2022,
had actually started making those donations too. Had donated well north of $100 million.
But then as it turned out, in November, it seemed like the company was not all that it
seemed. There was what you could call a run on the bank, except it wasn't a bank. So there was
a loss of confidence in FTX. A lot of people started withdrawing their money. But the money
that customers had deposited on the exchange that should have been there was not there.
And that should not have
been possible. It must have been the case, therefore, that Sam and the others leading FTX
had misappropriated that money in some way, all the while saying that the assets were perfectly
safe, that they were not invested. That led to the complete collapse of FTX. Three other people who were high up at FTX or Alameda, Caroline Ellison, Gary Wang, and Nishad Singh, all pleaded guilty to fraud a couple of months after the collapse. Sam did not plead guilty, but there was a trial at the end of last year, and he was found guilty.
What is his current state? He's in jail. Is he awaiting an appeal? Do you have up-to-the-minute information on his progress through the criminal justice system?
Yeah. So he's in jail and he's awaiting sentencing, which will happen next week, I think.
So I guess one thing we should talk about is some theory of mind about Sam, what his intentions actually were insofar as we can guess about them.
There are really two alternate pictures here which give a very different ethical sense of him as a person and just the situation so many people were in in giving him their trust.
Perhaps we can just jump there.
Do you think this was a conscious fraud? I mean, perhaps there are other variants of this,
but I'll give you the two that come to mind for me. Either this was a conscious fraud,
where he was quite cynically using the concepts of effective altruism, but his heart was never really in that place,
and he was just trying to get fantastically wealthy and famous and misappropriating people's
funds to that end, and it all blew up because of just bad luck on some level. So he was kind of a
Bernie Madoff-style character running something like a Ponzi scheme or some unethical variant of misappropriating people's funds.
Or, alternately, and I think quite differently: he was somebody who, based on what he believed about the actual ethics of the situation and probability theory, was taking risks that he shouldn't have taken, obviously, in the end, given the outcome, but they may well have paid off.
And he was taking these risks because he wanted to do the maximum amount of good in the world with
as many of the resources available as he could get his hands on. And he was just placing,
in the end, some silly bets that he was allowed to place in a totally unregulated space.
And it catastrophically failed, but it was by no means guaranteed to fail. And he was, on some level, a good guy who was ruled by some
bad or at least unrealistic expectations of just how many times you can play a game of roulette
and win. Perhaps there's some middle position between those two cases. But what's your sense of his actual intentions throughout this whole time?
Yeah, so this is something that I've now spent many months over the last year and a half
really trying to understand. I didn't know about the fraud or have suspicions about the fraud at
the time, so my understanding of things here is really me trying to piece
together the story on the basis of all that's come out as a result of the trial and media coverage
over the last year and a half. One thing I'll say before we talk about this: it's very easy, once you start trying to inhabit someone's mental state, to start saying things where, you know, it sounds like you're defending the person or something.
And so, yeah, I just want to be clear on just how bad and how harmful what happened was.
So, you know, a million people lost money.
The scale of this is just unbelievable.
And actually recently the prosecution released Twitter messages that Sam had received during
the collapse.
And they're really heartbreaking to read.
Like one is from a Ukrainian man who had fled Ukraine and, in order to get his money out, put the money on FTX. Another is from a person who was going to be made homeless and had four children he needed to feed.
And so...
On that point, we'll just linger for a second.
I had heard that a lot of the money was getting recovered.
Do you know where that process is and how much has been recovered?
Yeah.
So as it turns out, all customers will receive all of the money they put on the exchange, as measured in terms of the value of what they put on the exchange in November 2022.
So the value as of that date, as opposed to the amount they initially put in at whatever date?
Yes, but also as opposed to the amount today. So, say someone put Bitcoin on the exchange; Bitcoin is now worth more than it was then. The standard narrative has been that customers are being made whole because crypto has risen and Anthropic, a particular investment that was made, has done well. My best understanding is that actually that's not accurate. That has helped, but even
putting that to the side, even as of September last year when there had not been a crypto rise,
customers would have been made whole. So the issue was not that money was taken and then just lost
in the sense of spent or just lost on bad trades or something. Instead, the money was
illegally taken and invested into assets that couldn't be liquidated quickly.
So already we're, I think, a far distance from someone like Bernie Madoff, right? Who was,
whatever the actual origins of his behavior, whether he was ever a legitimate
investor, for the longest time, he was making only sham investments and just lying to everyone
in sight and running a proper Ponzi scheme.
Yeah, that's right.
Bernie Madoff was committing the fraud for about eight years.
Any time a client of his wanted to withdraw money, he would raise more money in order to give it back, to maintain the fiction of what should have been there for the customers, which is a Ponzi scheme. Alameda and FTX, and this is one of the things that's so bizarre about the whole story, and even tragic, is that the companies themselves were making money, in fact large amounts of money. So in that sense, they were not Ponzi schemes. But the customer assets held on FTX, which should have been, you know, bank-vaulted, separate from everything, got used by Alameda Research, the trading firm, in a way that should not have even been possible.
And so you were asking how to interpret the story, and you gave two interpretations.
One was that effective altruism was a sham.
His commitment to that was a sham.
He was just in it for his own power, his own greed. The second was that it was some carefully calculated bet that was illegal, had good intentions, or perhaps twisted intentions, but didn't pay off. My personal take is that it was neither of those things. And obviously I'll caveat, you
know, I've followed this a lot because I've really
tried to dissolve the confusion in my mind about what happened. But I'm not an expert in this.
It's extremely complicated. But I think there's a few pieces of evidence for thinking that
this just wasn't a rational or calculated decision. No matter what utility function
Sam and the others were following, it did not make sense
as an action.
And one piece of evidence is actually just learning more about other white-collar crimes.
So Bernie Madoff being one example, but the Enron scandal and many others too.
So there's this Harvard business professor, Eugene Soltes, who's written this really excellent book called Why They Do It, about white-collar crime.
And on the basis of interviews with many of the most famous white-collar criminals, he argues quite strongly against the idea that these crimes are the result of some sort of careful cost-benefit analysis. Mainly, you know,
in part because the cost-benefit analysis just does not make sense. Often these are actually
really quite wealthy, really quite successful people who have not that much to gain, but
everything to lose. But then secondly, looking at how the decisions actually get made, the word he
often uses is mindless. You know, it's like people aren't even paying attention. One example was the former head of McKinsey, who, I think, gets off a call with the board of Goldman Sachs and immediately, 23 seconds later, calls a friend to tell him about what happened in the board meeting, in a way that was illegal insider trading. This was not a carefully calculated decision. It was irrationality. It was a failure of intuition rather than
kind of reasoning. And that's my best guess at what happened here as well, where, yeah, I think
what happened, it seemed, is that, I mean, yeah, there were so many...
Actually, let me layer on a few points that could certainly bias some people in the direction of what's called long-termism, which I think we'll get to. But it comes down to just how to rationally integrate any notion of probability, especially probabilities where one side of the decision tree represents some extraordinarily large possible gains. I believe Sam at one point was accused of believing, or he may have said something along these lines, and I forget if it was put in positive or negative terms, that with the risk of ruin on one side and a sufficiently large expected value of a positive outcome on the other, you should take the bet. It's just like: if you have a chance to win a million dollars on one side and lose $100,000 on the other, you should just keep tossing that coin, because your expected value is 50% of a million on one side and 50% of losing $100,000 on the other. So your expected value is $450,000 every time you toss that coin. But of course, if you only have $100,000 to lose, you can lose everything on your first toss. And so he just seems to be someone who was looking at the expected value proposition somewhat naively, and looking at it with everyone else's money on the line. Or at least certain things I've heard said of him, or by him, suggested that was the case. So perhaps bring in your beliefs about how one should think about probability, and the kind of ends-justify-the-means thinking that many people believe has corrupted EA more generally, with Sam as the ultimate instance of a cautionary tale there.
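Just to make that coin-toss arithmetic concrete, here is a minimal simulation sketch. This is purely illustrative, nothing from the episode; it assumes the hypothetical stakes from the conversation: win $1,000,000 or lose $100,000 on a fair coin, starting with a $100,000 bankroll.

```python
import random

def play_until_ruin(bankroll=100_000, win=1_000_000, loss=100_000,
                    p_win=0.5, max_tosses=50):
    """Repeatedly take the positive-EV coin toss; stop on ruin
    (bankroll can no longer cover the downside) or after max_tosses.
    Returns the final bankroll."""
    tosses = 0
    while tosses < max_tosses and bankroll >= loss:
        tosses += 1
        bankroll += win if random.random() < p_win else -loss
    return bankroll

# Each toss has a large positive expected value:
#   0.5 * 1,000,000 + 0.5 * (-100,000) = +450,000
print("EV per toss:", int(0.5 * 1_000_000 - 0.5 * 100_000))

# Yet with only $100,000 to start, roughly half of all runs end in
# ruin, almost always on the very first toss.
trials = 100_000
ruined = sum(play_until_ruin() < 100_000 for _ in range(trials))
print(f"Share of runs ending in ruin: {ruined / trials:.1%}")
```

That is Harris's point in code: expected value alone ignores the absorbing barrier at zero, so a bet that is hugely positive in expectation can still be a near-coin-flip for total ruin when it is sized against the whole bankroll.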
Sure. So yeah, one thing that's just absolutely true is that Sam seemed, and was, unusually risk tolerant. And at the outset,
like when the collapse happened, I absolutely was worried that perhaps what had gone on was
some sort of carefully calculated fraud, carefully calculated willingness
to break, just to break the law in the service of what he thought was best. You know, I was worried
that maybe there would come out a spreadsheet that, you know, did a little cost-benefit analysis
of fraud and was clear for all to see. I think there are just good reasons for thinking that's not what happened. I'll discuss that first, and then let's come back to the important points about attitudes to risk and ends-justify-the-means reasoning. But just briefly, on why I think that's not what happened: one is just that the overall plan makes so little sense if it was a long con, like a con from the very start.
So if that was the case, then why would they be trying so hard to get regulated?
Why would they be so incredibly public about what they were doing and, in fact, very actively
courting press attention, or in fact, having
Michael Lewis, one of the world's leading financial writers, following them around,
having access. Why would they be associating themselves with EA so much as well, if that was
also what they seemed to care about? And then the second thing is, it's just absolutely agreed by everyone that the companies were a shambles. There was not even the most basic accounting, not the most basic corporate controls. There was a point, in June of 2022, where they were very worried because it looked like there was a $16 billion loan from FTX to Alameda, and they thought that was really bad. It turned out that was a bug in the code, and actually it was only an $8 billion loan. And they were apparently elated at the time. So they didn't know what their assets were to within $10 billion. And in fact, it was at that time that they discovered that they had been double counting $8 billion.
So one way customers could put money on the FTX exchange was by sending money to Alameda, which Alameda then should have given to FTX. And it seems that in June of that year, they realized that had not been happening. Alameda had thought that money was within Alameda, legitimately, I guess, and FTX had thought that money was there. And now, I'm not going to claim I know that wasn't conveniently overlooked or something. But at least Caroline, on the stand, testified that she didn't know that Sam knew, prior to that point in time, that that money was in Alameda when it should have been in FTX. And that's the way almost all the money flowed from FTX to Alameda. There was also a lending
program, which was really focused on by the prosecution. But that actually, yeah, we can
go into that in more detail as well. I think that's actually not where the action was in terms
of how the money moved. And if you really try to get into the heads of the people at this time: okay, let's now suppose they're the most ruthless consequentialists ever, and they want to just make as much money as possible. Let's say that's just dollars raised. They're not risk averse at all, which wasn't even true, but let's even assume that. Why on earth would they take that money and then invest $5.5 billion of it into illiquid venture investments? Companies, basically. It was obviously posing enormous risks on them, and the gains were really quite small
compared to the potential loss of not only everything in the company, but also the huge harm that it would do
to the rest of the world, to the effective altruism movement itself. It really just makes
no sense from a utilitarian perspective at all. And that's why, when I try to inhabit this mode of them making these carefully calculated, rational decisions, there are too many facts that seem in tension with that, or inconsistent with that, for it to be my best guess at what happened.
So what do you think was actually going on? Did you ascribe this in the end to some version of
incompetence combined with a dopaminergic attachment to just winning at
some kind of gambling task? Yeah. I mean, I see the deep kind of vice that ultimately drove this
all as hubris, where they were not very experienced. They were very smart. They grew a company in what
was for a while an impressive way.
And Sam in particular, I just think, thought he was smarter than everyone else. And this was
something that I didn't like about Sam. I noticed, during that time, that he would not be convinced of something just because other people believed it. Even if everyone else believed X and he believed Y, that would give him no pause for doubt.
And so I think he got kind of corrupted by his own success.
I think he felt like he had made these bets that had paid off in spite of people being skeptical time and again.
And so he just thought he was smarter.
And that meant that very basic things, like having good accounting, like having the kind of adults, professionals, come in who could do risk management and actually point out what on earth was going on with where different stashes of money were, just didn't happen.
Because this is another thing that shows just how insane the whole venture was.
At the time of collapse, they just didn't know where most of their assets were.
You know, there would be hundreds of millions of dollars in a bank account somewhere,
and they wouldn't even know it existed.
The bank would have to call them to tell them, by the way, you've got these assets on hand.
Again, if it was a carefully calculated ploy, you would want to know where all of your assets
were in case there was a mass withdrawal of customer deposits.
And so I think that hubris also shaped not a risk calculation, but an attitude to risk, where many people, I think, when they're in the position of quite rapidly running a multi-billion dollar company, would think, holy shit, I should really get some experienced professionals in here, and would be quite worried: you know, have I attended to everything? Have I got this company under control? And I think, at that point, that was not at all how Sam and the others were thinking. And then the final thing to say is that this isn't saying that they didn't commit fraud from June onwards, after this hole had been discovered.
I think then it becomes pretty clear that, you know, there are just brazen lies to try to get
out of the position that they've put themselves in. I think there are also other cases of things
that seem like clearly fraud, though they are not of the kind of $8 billion scale.
And this was fraud, you think, to conceal the hole in the boat that was putting everything at risk?
Or this was fraud even when things appeared to be going well and there was no risk of oblivion evident?
I mean, what was the nature of the
fraud, do you think?
Again, flagging that
there's probably lots that I'm saying that are wrong
because it's complex
and I'm not confident. There's lots of different
stories. My guess is
both. In the trial,
one thing that came up, and I'm surprised
it didn't have more attention,
was that FTX advertised
it had an insurance fund, tied to its liquidation engine, basically a bit of technology for handling cases where a customer was borrowing funds on the exchange in order to make a bet with borrowed money. On other exchanges, you could easily go negative by doing that, and that would mean other users would have to pay to cover that loss. FTX had this automatic liquidation engine that was quite well respected. But they said, even if that fails, there's this insurance fund that will cover any losses. However, the number that was advertised on the website
seems to have been created just by a random number generator. So that seems like really
quite clear fraud. And I don't know, it hasn't been discussed very much, but seems totally
inexcusable, and it was applied even when the going was good. But then the big fraud, the eight billion dollars, it seemed like that really started kicking in from June of 2022 onwards. Though I'll also say, and we can talk about my interactions with the people there, that looking back, it seems to me like they did not know just how bad the situation they were in was. But yeah.
Yeah. Well, let's talk about your
interactions and let's focus on Sam to start. I mean, you bring up the vice of hubris. I mean,
I only spoke to him once, I believe. It's possible I had a call with him before I did a podcast with
him, but he was on the podcast once and this was very much in the moment when he was the
darling of the EA community. I think he was described as the, I guess, he was the youngest
self-made person to reach something like $30 billion. I think he was 29 and had $29 billion
or something at the time I spoke to him.
And again, the purpose of all of this earning was to do the maximum amount of good he could
do in the world. He was just earning to give as far as the eye could see. I didn't really
encounter his now famous arrogance in my discussion with him. Maybe he just seemed smart and well-intentioned.
And I had no reason to, I knew nothing about the details of FTX apart from what he told me. And
I think anyone in our position of talking to him about his business could be forgiven for not
immediately seeing the fraudulence of it or the potential fraudulence of
it, given that he had people invest with him, quite sophisticated venture investors early on,
and they didn't detect the problem, right? And we were not in the position of investing with him.
But in the aftermath, there are details of just how he behaved with people
that struck me as arrogant to the point of insanity, really. I mean, in these investor calls, apparently, while describing his business and soliciting, I think, hundreds of millions of dollars at a minimum from firms like Sequoia, he is simultaneously
playing video games, and this is celebrated as this delightful affectation. But clearly,
he is someone who thinks he need not give people 100% of his attention because he's got so much
bandwidth, he can just play video games while having these important conversations.
And there were some things in Michael Lewis's book that revealed, or at least seemed to reveal, that he was quite a strange person.
And someone who claimed on his own account, at least to Lewis, that he didn't know what people meant when they said they experienced the feeling of love.
So he's neuroatypical at a minimum.
And perhaps I just offer this to you as just a series of impressions, but how peculiar a person
is he? And shouldn't there have been more red flags earlier on, just in terms of his integrity ethically or just his capacity
for ethical integrity given... I mean, if someone tells me that they have no idea what anyone means
when they say they love other people, that is an enormous red flag. I mean, it's something that I would feel compassion for, that the person is obviously missing something. But what is your impression of Sam as a person? And
in retrospect, were there signs of his unreliability ethically far earlier than
when the emergency actually occurred?
Sure. So there's, yeah, a lot to say here. And briefly, on the not feeling love.
So yeah, my descriptions of Sam and feelings about Sam are quite varied and variegated.
On his ability to not feel love: you know, that wasn't something that seemed striking or notable to me.
Like after the Michael Lewis book and lots of things came out,
it seemed like he had just emotional flatness across the board. And whether that's a result of
depression or ADHD or autism is like not really that clear to me. But that wasn't something that
seemed obvious at the time, at least. I guess I interact with people who are relatively emotionally flat quite a lot. I certainly wouldn't have said he's a very emotional person.
He did seem like a very thoughtful, incredibly morally motivated person all the way back to 2012.
I mean, his main concern was for the plight of non-human animals on factory farms for most of
that time. It's kind of an unusual thing to care about if you're some sort
of psychopath or something like that. Yeah. I, you know, when I first reached out to Sam after
FTX had been so successful, I talked to him about, you know, okay, you've started this company. It's
a crypto company. Isn't crypto, you know, pretty sketchy? How much have you thought about risks to the company? And so on. And there was a narrative that came from him, and then was echoed and emphasized by Nishad Singh, who in my experience was really a crucial part of the story, a really crucial part of my interface with that world. And the story I got told was: FTX is trying quite self-consciously to be much more ethical
than the standard crypto exchange or anything going on in the crypto world.
And there are two reasons why we need to do that, even putting aside the, you know, intrinsic
desire to act ethically.
One is because they were trying to get regulated.
So they were very actively courting regulation in the US. They were these center-left people; they were not the libertarians that populate crypto normally. They thought that's how they could get the edge over the competitors: by being much more open, and open to regulation.
And then secondly, because they planned to give the proceeds away, they knew that they
would get, you know, they would face a higher bar for criticism.
And that claim got made to me over and over again, not just by Sam, but by Nishad.
Sam was very busy, so I spoke to him a number of times,
like half a dozen times or something, one-on-one, more times in group settings.
But I talked to Nishad, and Nishad really came across. I mean, this is the thing that maybe
breaks my heart the most about the whole story, where he came across just as this incredibly thoughtful,
you know, morally motivated, careful, just kind person.
And I would ask him kind of,
okay, so why are you in the Bahamas?
And there would be an answer,
which is that that's where they were able to get licensed.
Or I'd ask kind of, why is your apartment so nice?
And they would say, well, you can't really get mid-level property in the Bahamas; we just need somewhere that we can create a campus feel, and so, yeah, it is nicer than we'd like, and hopefully we can move over time to something a bit less nice. So over and over again, on this and other ethical issues in crypto we can go into, he was painting that picture. And something that was just so hurtful and confusing is: was he lying to me that whole time? Was that just all false?
Or was he just like a gullible fool?
I haven't followed the trial in sufficient detail to know what his role was revealed to be in all of this. I mean, where is he, and has he been prosecuted? And what do you think of his actual intentions at this point?
Yeah, I mean, so he pled guilty, I think, for fraud, among other things, and he testified.
In these pleadings, do you think this was, you know, just kind of a classically perverse
prisoner's dilemma situation where you have people, given the shadow of
prosecution and prison hanging over them, they're willing to testify to things that
the government wants to hear, but which are not strictly true? I mean, what's your theory of mind
for the people who pled guilty at this point? Yeah. I mean, again, this is something that comes up in Eugene Salters' book and he talks about where it's a very strange aspect of the US legal system. Like
it's not something that happens in the UK where the government will reward people literally with
their lives for going on the stand, like, you know, because they will, the other people probably
will get no jail time. They will reward people in that way for going on the stand like you know because they will the other people probably will get no jail time they will reward people in that way for going on the stand and testifying and so that just does
mean you know they can't tell lies or not verifiable lies but there are very strong incentives to
present things in a certain way and again i don't want to this is all sounding much more defensive
of sam than i want to be but the salters book talks about some people who were just, you know, they would be
rehearsed with their lawyers for hours and hours and hours in order to seem, you know, display
appropriate contrition and so on. And so the view of this that Michael Lewis took is, you know, people understand they will have said true things throughout, but the tone of it is maybe a little different than it really was, where there was a lot of the co-conspirators talking about how bad they felt, and how they knew what they were doing was wrong at the time, and how they were really torn up. That seems quite inconsistent with my experience of them, but maybe they were just incredible
liars.
One question about that.
So they knew what they were doing was wrong, could mean many things.
It could mean that they knew that they were taking risks with people's funds that were
unconscionable, given the possibility of losing money that customers thought was
safely on the exchange.
But that's not the same thing as stealing money and misappropriating it in a way that
is purely selfish, right?
It's not like we took money that was not ours and we bought luxury condominiums in the Bahamas
with it and hoped no one would notice, right?
That's one style of fraud. You tell me, is it possible that they thought they were going to wager this money on
other real investments, however shady some of these crypto properties were, but they actually
expected enormous returns as a result of that misappropriation and that money would come back
safely into FTX and no one would lose anything in the end if everything worked.
Yeah. I mean, in terms of how things seem to me, I just think they didn't think the company was at
risk, not at serious risk. And there are a few reasons why. I mean, one, this is kind of how I felt, why I was so confused this whole time. You know, I visited the Bahamas a number of times in 2022. I never saw any kind of change in attitude from them over that time. You would really think, if you're engaged in this major fraud, that something would seep out, some sort of flags. Maybe I'm a fool, but I did not see that.
And in fact, even so in September, my last trip to the Bahamas, I heard from Michael Lewis that
Sam had been courting funding for FTX from Saudi Arabia and other places in the Middle East.
And I do not love the idea of taking money from Saudi Arabia. I have issues with that. And it also
just struck me as kind of odd. And I was aware there was a crypto downturn. So I talked to Nishad, and I asked, look, is there anything up with the company? Are you in trouble? And he said no. And we talked about this for some time; it was not a passing comment.
And that by any account is like past the point when he allegedly had learned about
the huge hole that the company faced. Similarly, Michael Lewis, at the same time, asked both Caroline and Nishad a kind of fun question: oh, what could go wrong with the company? If this all goes to zero, what happened? And again, he said there was no indication of stress upon hearing that question. They had fun with it. They were like, oh, maybe crypto is just a lot of air and everyone gets turned off, or maybe Sam gets kidnapped. That was kind of one of the big worries. But nothing leaked out there.
So given his guilty pleading and his
testimony, what's your belief
about your conversation with Nishad at that point? Do you think he was unaware of the risk,
or do you think he was lying to you? So I think another thing that came out during the trial,
though I'm not sure if it was admissible as evidence, was that Nishad commented to the
government kind of immediately upon pleading guilty that, in that period, he still thought that FTX would last for years. And in terms of just giving an indication of what Nishad's personality was like: when the collapse happened, he had to be watched, because he was on the verge of suicide.
He was so distraught about what happened to the customers
and I think was really quite close to taking his own life.
So then what sort of fraud was he pleading guilty to
if he's the kind of person who's suicidal when the wheels come off
as though he had no idea that this was in the cards, right?
I mean, you think he's just responding to all of the opprobrium and disgrace aimed his
way in the aftermath, or do you think he was actually surprised fundamentally by the risk
that was being run?
And if the latter, in what sense does he claim to be culpable for a conscious fraud?
Yeah, I mean, so, yeah.
So I don't know whether Nishad at this time was ignorant, as in really did not know the risks they were running. I don't know if he was just delusional. Again, this is a thing that Soltes talks about: the capacity for humans to create a narrative in which they're still the good guys. I don't know. Perhaps he thought that, yes, this was bad, but they would get out of it, and so it was only a little bit bad but it would be fine. Maybe that was there too. Oh yeah, one thing that he does is he buys this three and a half million dollar property for himself in October. Again, it's just not the action of someone who, again, on the stand, said how distraught he was, and he was talking to Sam about this. So all of these things are possible to me. As for him pleading guilty: well, whichever of these is true, I think it would make sense to plead guilty if you're in that situation, where there are huge costs to going to jail. And plausibly, there are various stories, but plausibly he thought: look, yes, I knew what I was doing was bad; I thought it was only a little bit bad. Actually, I was wrong. It was very bad, extremely bad. I'm willing to just fess up and take the hit I should get. There are various possible explanations there.
So, I don't know if this came up in the trial, but in the court of public opinion, there was a lot made of a text exchange that Sam had with somebody, I think it was a journalist or a quasi-journalist, in the immediate aftermath of the scandal, where he seemed to admit that all the effective altruism lip service was just that. It was just the thing you say to liberals to make them feel good. I forget the actual language, but it seemed like he was copping to the fact that that part was always a ruse. Honestly, when I read those texts, I didn't know how to interpret them, but it was not obvious to me that they were the smoking gun they appeared to be to so many minds ready to bury effective altruism as a scam.
Do you know the thread I'm referring to?
And do you have a view of it?
Yeah, I know the thread.
And yeah, I mean, in a way, from the perspective of the brand of effective altruism, maybe it would have been better if that part had been a big con too. But no, I think he believed in these ideas. I think there he was deferring to what you might call corporate ethics, which is really a kind of PR. So, you know, companies will often make these big charitable donations to their local community and so on. And in this case, exactly, everyone knows this is
marketing. And I guess I don't know the details, but presumably FTX was doing stuff like that in
the same way other companies do. And my interpretation of those texts is that that's
what he was referring to. Actually, it reminds me of the one concern I did have about Sam before the scandal broke. I don't know if this was contemporaneous with my conversation with him on the podcast, but I just remember thinking this.
When I heard how much money he had given away, and you referenced it earlier, it was something
north of $100 million. I'm always quick to do the math on that, and I recognize what a paltry sum that actually is if you have $30 billion.
It's an enormous amount of money out in the real world where people are grateful for whatever you give them,
but it's analogous to somebody who has $30 million giving $100,000 away.
It's not a sacrifice. It's a rounding error on their actual wealth. And it's
certainly not the sort of thing that I would expect of someone for whom the whole point of
becoming fantastically wealthy is to give all of it away. And so I think, and I forget if I asked
him a question about the pace of his giving during that podcast. But I know that some people think, well, the best thing for me to do is to use these assets to make more assets in the meantime, and then I'll give it
all away later on. But given the urgency of so many causes and given the real opportunity to
save lives and mitigate enormous suffering every day of the week, starting now. My spidey sense tingles when I hear a fantastically wealthy
person deferring their giving to the far future. And so I'm wondering what you think of that.
Sure. Yeah, I think that wasn't really an issue. So a couple of reasons. One is just, you know, his net worth is basically
entirely in FTX. There was no way of converting that. So if you're in a startup and all your
wealth is in the equity of that startup, there's not really any way of converting that wealth into
money, the sort of thing that you could donate. You have to basically keep building the company
until you can have an exit. So get
acquired or sell the company, and then you can become more liquid. And then the second thing is
just at least relative to other business people, he was very unusual in wanting to give more and
give quickly. So I mean, I advised on the setup of his foundation and it got a lot of criticism for
scaling up giving too quickly.
So going from zero to, you know, 100 or 200 million dollars in a year is a very big scale-up, and it's actually just quite hard to do.
And so, if you were asking me: yes, it's a tiny fraction. And I agree with the point in general, that when someone who's a multibillionaire gives, you know, $100 million, that is just really not very much at all, especially once they've had that money for decades and could really distribute it. But in that case, the way it seemed to me at the time, and I guess still does, was basically consistent with someone trying to scale up their giving as fast as they can.
And in fact, in a way that, you know, plausibly should have been paying more attention to the business and not getting distracted by other things.
Yeah. So what, if anything, does this say about effective altruism? I mean, I guess there's an additional question here. What has been the effect, as you perceive it, on EA and the public perception of it, the fundraising
toward good causes? Has it forced a rethinking of any principles of effective altruism that,
you know, whether it's earning to give or a focus on long-termism,
which we haven't talked about here yet, but you and I have discussed before. And
how large a crater has this left and what has been touched by it? And is there any good to come
out of this? Just give me the picture as you see it of EA at the moment. Yeah, I mean, huge harm, huge harm to EA, where, you know, at the time of the collapse,
I put it like 20% or something that the EA movement would just die.
This was a killer blow.
And so obviously, in terms of the hit to the brand, you know, so many people think ill of EA now, are critical of EA now, in all sorts of different ways
driven by this, in a way that's not surprising. It was this horrific, horrific thing. I don't
think it happened because of EA. I think it happened in spite of EA. I think EA leaders
and communicators have been very consistent on the idea that the ends do not justify the means,
really since the start. I mean, and really this goes back centuries, go back to John Stuart Mill.
And actually even Sam knew this. So again, as part of just trying to figure out what happened,
I did some Facebook archaeology. So there's an essay by Eliezer Yudkowsky called "Ends Don't Justify Means (Among Humans)," basically making the classic point that you are not a god at calculation. Even if you're a hundred percent consequentialist, which I don't think you should be, but even so: follow the heuristics that are tried and true, including heuristics not to violate side constraints. And this was shared in a kind of discussion group. And this is, you
know, well before FTX, Sam's response was like, why are you even sharing this? This is obvious,
everyone already knows this. So yeah, this was, in my view, in spite of EA, not because of it.
But yes, the damage is huge. Also internal damage as well. Morale was very, very low.
Trust was very low. The thought being, well, if Sam and the others did this, then who knows
what other people are like. And there has just been an enormous amount of self-reflection,
self-scrutiny, whether that's because of this catastrophe itself, or just because, if there's ever a point in time for self-reflection, it was in the aftermath of that. And so there's a whole
bunch of things that have changed over the last year and a half. Not in terms of the principles, because, you know, what is effective
altruism? It's the idea of using evidence and reason to try to make the world better. That
principle is still good. Like, I still would love people to increase the amount by which they are
benevolent towards others and increase the amount by which they think extremely carefully and are really quite intense about trying to figure out how they can
have more positive impact with their money or with their time. That's just still as true as ever,
and the actions of one person in no way undermine that. I mean, take any ideology, take any moral view that you can imagine. You will find
advocates of that ideology that are utterly repugnant.
Yeah, this is the Hitler-was-a-vegetarian principle.
Exactly, and Sam was too, and so vegetarians are having a bad time.
Yeah, exactly.
Exactly. But there have been a lot of
changes to the kind of institutions
within effective
altruism. So it has essentially entirely new leadership now, at least on the organizational
side. So Center for Effective Altruism, Open Philanthropy, 80,000 Hours, and the boards
of at least some of these organizations are really quite refreshed. This is partly just
a lot of people had to work exceptionally hard as a result of the fallout and got really quite burned out.
In my own case, I've stepped back from being on the boards of any of these main organizations, and I won't do that again really for quite a while.
Partly that was because I wasn't able to talk about this stuff in the way I really wanted to for, you know, over a year.
Like I spent, again, like months, literal months kind of writing blog posts and rewriting
them, and having them then knocked back, because there was an investigation being held by Effective Ventures, one of the charities, and the law firm doing that really didn't want me to speak while it was ongoing.
But then also because I think a healthier effective altruism movement is more decentralized
than it was.
And there was an issue when the collapse happened, that I was in the roles of being on the board of the charity, of being, if anyone was, a spokesperson for EA, but also of having advised Sam on the creation of
the foundation. And that meant I wasn't able to kind of offer guidance and reassurance to the
community at that time of crisis in a way that I really wanted to and wish I'd been able to.
And so I do think like a healthier EA movement is, you know, has greater decentralization in
that way. And there's some other things happening in that direction too.
So various organizations are kind of separating,
or projects are separating out legally
and becoming their own entities.
Yeah, in the aftermath,
I was certainly unhappy to see so many people eager
to dance on the grave of effective altruism. And in the worst cases,
these are people who are quite wealthy and cynical and simply looking for an excuse to
judge the actual good intentions and real altruism of others as just, you know, patently false. And
it was, you know, there was never a "there" there; everyone's just in it for themselves; and therefore I, rich Ayn Randian type, should feel a completely clear conscience in being merely selfish. It's all a scam, right?
And that's, I just think that's an odious worldview and a false one, right? It's not that everyone is
just in it for themselves. It's not all just virtue signaling. There are real goods in the
world that can be accomplished. There are real harms that can be averted. And being merely selfish
really is a character flaw, and it is possible to be a much better person than that. And we should
aspire to that. And I say this as someone who's been, to some degree, always somewhat critical
or at least leery of EA as a movement and as a community. I mean, I think I'm one of the
larger contributors to it just personally and just how much money I give to EA-aligned charities
and how much I have spread the word about it and inspired others to take the pledge and to also
give money to GiveWell and similar organizations. But I've always been, and I've spoken to you
about this, and I've said as much on this podcast and elsewhere, you know, I feel
like as a movement, it's always struck me as too online and for some reason, attractive to,
you know, in the most comedic case, you know, neuroatypical people who are committed to
polyamory, right? I mean, there are Silicon Valley cult-like dynamics that I've detected,
if not in the center of the movement, certainly at its fringe, that I think is evinced to some
degree in the life of Sam Bankman-Fried, too. And we haven't talked about just how they were
living in the Bahamas, but there's certainly some colorful anecdotes there. And it just,
it seems to me that there's a culture that, you know, I haven't wanted
to endorse without caveat. And yet the principles that, you know, I've learned from my conversations
with you and, you know, in reading books like your own and, you know, Toby Ord's book, The
Precipice, the ideas about existential risk and actually becoming rational
around the real effects of efforts to do good rather than the imagined effects or the hoped
for effects, divorcing a rational understanding of mitigating human suffering and risk of harm from the good feels we get around specific stories
and specific triggers to empathy. And just performing conceptual surgery on all of that
so that one can actually do what one actually wants to do in a clear-headed way, guided by
compassion and a rational
understanding of the effects one can have on the world. And it's, you know, we've talked about many
of these issues before in previous conversations. I think all of that still stands. I mean, none of
that was wrong, and none of that is shown to be wrong by the example of Sam Bankman-Fried. And so, yeah, I just, you know, I do mourn any loss that those ideas have
suffered, you know, in public perception because of this. So, yeah, I mean, do with that what you
will, but that's where I've netted out at this point. Yeah, I mean, I think it's part of the
tragedy of the whole thing. Giving What We Can has over 9,000 people who are pledging to give at least 10% of their income to highly cost-effective charities, and it's aiming for 10,000 people this year. For those people, generally living pretty normal, middle-class lives, or maybe they're wealthier, in what way does the action of Sam and the others invalidate that? And the answer is: not at all. That is just as
important as ever. And yeah, one of the things that's so sad is like, maybe fewer people will
be inclined to do so, not for any good rational reasons, but just because of the bad aura that surrounds the idea now.
And that's just a little tragedy.
I think that's, I think, donating a fraction of your income to causes that effectively
help other people.
I still think that's a really good way to live.
You talk about, yeah, the kind of online, cult-like, shot-through-with-Asperger's side of the movement. I do want to say that EA is many things, or the EA movement is many things, and of course you can endorse the ideas without endorsing anything to do with the movement. But I definitely worry that there is a segment that is extremely online and perhaps unusually weird in its culture or something. And it's a bit of a shame, I think, if people get the impression that that's what everyone within the EA movement is like, on the basis of whoever is loudest on the internet.
People can be poly if they want, and there's no moral objection to that at all; it's a fine way to live. People can have all sorts of weird beliefs, too, and maybe some of them are correct. I think AI risk was extremely weird for many years, and now people are taking it really seriously. So I think that's important. But I think the vast majority of people within the effective altruism movement are pretty normal people. They're people who care a lot, people who are willing to put their money or their time where their mouth is, and, because they care, they're really willing to think things through and to go where the arguments or the evidence lead them. And, you know, I'm not someone who's naturally on the internet all the time. I find Twitter and internet forums quite off-putting. And when I meet people in person who are engaged in the project of effective altruism, it feels very, very different than it does if you're just hanging out on Twitter or on some of the forums online.
So is there anything that has been rethought at the level of the ideas? I mean, the one other issue here, which I don't think played an enormous role in the coverage of the FTX collapse, but which has come under some scrutiny and become a kind of ideological cause for concern: the emphasis on long-termism, which you brought out at book length in your last book. Was that part of the problem here? And is there any rethink? Because it certainly brings in this issue of probability calculus that turns our decisions into a series of trolley problems, wherein ends-justify-the-means thinking at least becomes tempting. Which is to say, if you thought a decision you made had implications for the survival of humanity, not just in the near term but out into an endless future where trillions upon trillions of lives hang in the balance, well, then there's a lot you might do if you really took the numbers seriously, right? Is there anything that you have been
forced to revise your thinking on as a result of this?
Yeah. So, I mean, I really think long-termism wasn't at play. Again, like I said, what happened at FTX was not a matter of some rational calculation in pursuit of some end. I think it looks dumb from any perspective. I also just think that if your concern is with the hundreds of millions of people in extreme poverty, or the tens of billions of animals suffering in factory farms, the scale of those problems is more than enough for the same kinds of worries to arise. And in fact, we have seen,
like in the animal welfare movement on the fringes, people taking violent actions even
in the pursuit of what they regarded as the kind of greater good. Long-termism, if anything,
kind of actually shifts against it. Because this argument, that you should be willing to take more risk if you're using your money philanthropically than if you're just spending the money on yourself, applies much less strongly in the case of long-termism than it does for global health and development, for example. Because, you know, if I have five dollars to spend, that can buy a bed net. If I have a billion and five dollars to spend, that final five dollars still buys a bed net. Global health and development can just absorb huge amounts of money without the cost-effectiveness going down very much.
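To make that logic concrete, here is a minimal expected-utility sketch; the specific utility functions are illustrative assumptions of this note, not figures from the conversation or from GiveWell. A gamble that turns a budget \(x\) into \(2x\) with probability \(p\), and loses it otherwise, beats keeping \(x\) for sure exactly when
\[ p\,u(2x) > u(x). \]
With roughly linear utility, \(u(x) = x\), as in the global-health case where every marginal five dollars buys another bed net, the gamble is worth taking whenever \(p > 1/2\), so near-risk-neutrality is defensible. With concave utility such as \(u(x) = \sqrt{x}\), which better describes personal spending, or a cause that can only usefully absorb so much money, the condition becomes \(p\sqrt{2} > 1\), i.e. \(p > 1/\sqrt{2} \approx 0.71\), so meaningful risk aversion is appropriate.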
The same is not...
Just to follow a further point along those lines, my concern with long-termism has been the way in which it can seem to devalue the opportunity to alleviate present harms and present suffering. Because you can tell yourself a story in which the interests of the one billion people suffering now are infinitesimal compared to those of the trillions upon trillions who may yet exist if we play our cards right. So it becomes an argument for perhaps overlooking the immediate suffering of the present out of a concern for the unrealized suffering of the future.
Yeah, and I think that's, you know... In What We Owe the Future, I was very careful to defend only what I call the weak form of long-termism: that positively impacting the long-term future is a moral priority of our time, not claiming it's the only one, nor that it's of overwhelming importance. I think we should be uncertain about this. In a new preface, I suggest a goal, a way of operationalizing that, of rich countries putting at least 1% of their resources toward issues that distinctively impact future generations, because at the moment they put close to 0%. And I do think the
mode of operating in which you think, oh, a present catastrophe is nothing compared to the
unparalleled goods that may come in a trillion years' time. I think that's a very
bad way of thinking, even just from a pure long-term perspective. I think it doesn't have a
good track record and it's really not how I would want people to think. There has been a different
line of criticism that I got from within EA, from the publication of What We Owe the Future onwards, that I think has had a lot of merit. And that line of criticism was that I was misunderstanding how near-term the risks we are talking about are. So in particular, the risk from AI: the risks we face from really advanced artificial intelligence, even artificial general intelligence, are coming in the next decade, at most the next couple of decades.
And secondly, the scale of the problems imposed by technological developments like AI is so great that you don't need to think about future generations at all. Even if you just care about the 8 billion people alive today, the size of the risks we are imposing on them via these technologies is more than enough for this to become one of the top problems the world should face today. And over the last few years since the publication of the book, I just think that perspective has been getting more and more vindicated. So I'm now much more worried about very advanced, very fast progress in AI in a very near timeframe, as in literally the next five or six years, or the next decade. Much more worried by risks from that than I was even just a few years ago.
This is a sidebar conversation, but it's interesting. Are your concerns mostly around the prospect of unaligned AGI? Or are they the more piecemeal, nearer-term, and actually already present concerns around the misuse of AI, at whatever capacity it exists, to essentially render societies ungovernable, and the more or less guaranteed malicious use at scale that becomes quite harmful? To what degree are you focused on one versus the other?
I want to say I'm focused on both, but also on other things too. So, misalignment risk: I think it's real, I think it's serious, and I think we should be working on it much more than we currently are. I am an optimist about it, though, as in I think very probably it will either turn out not to be an issue, because it's just really quite easy to make advanced AI systems that do what we want them to; or we'll put in a big effort and be able to solve the problem; or we'll notice that the problem has not been solved and we'll actually hold back, put in regulations and other controls for long enough to give us time to solve the problem. But there should still be more work. However, I think that AI will pose an enormous array of challenges that haven't really
been appreciated. And the reason I think this is that I find it increasingly plausible that AI will lead to much accelerated rates of technological progress. So imagine, as a kind of thought experiment, all the technologies and intellectual developments that you might expect to happen over the coming five centuries, everything that might happen there. And now imagine all of that happening in the course of three years. Would we expect that to go well?
So in that period of time, okay, we're developing new weapons of mass destruction.
We now have an automated army and an automated police force, so that in principle,
all military power could be controlled by a single person. We now have created beings that
plausibly have moral status themselves. What economic rights, welfare rights, political rights
should they have? And, I mean, you talked about misinformation, but potentially we now have superhuman persuasive abilities, far, far better than even teams of the best, most charismatic lawyers or politicians, deployable in the most targeted ways possible. And I think there are more challenges too: over this period, we'll probably also have new conceptual and intellectual insights radically changing the game board for us. And all of that might be happening over the course of a very short period of time. Why? Why might it happen in such a short period of time? Well, that's the classic argument that goes back to I. J. Good in the 1960s, which is that once you've got to the point where AI can build better AI,
you've got this tight feedback loop because once you've built the better AI, that can help you
build better AI, and so on. And that argument has been subject to really quite intense scrutiny over the last few years: people have built it into leading growth models and looked at the input-output curves in existing ML development, to see how much of a gain you get for a given increase in input. And it really looks like the argument is checking out.
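A toy version of that growth argument, as a sketch; the functional form and the parameter \(\phi\) here are illustrative assumptions, not estimates from the models mentioned above. Let \(A(t)\) be AI capability, and suppose research effort itself scales with \(A\), so that
\[ \frac{dA}{dt} = c\,A^{1+\phi}, \qquad c > 0. \]
If \(\phi > 0\), meaning the feedback outweighs diminishing returns to research, separating variables gives \(A(t) = A_0\,(1 - \phi c A_0^{\phi}\,t)^{-1/\phi}\), which grows faster than exponentially and diverges at the finite time \(t^{*} = 1/(\phi c A_0^{\phi})\). If \(\phi \le 0\), growth is merely exponential or slower. That is, in effect, what the empirical input-output curves are probing: the sign of \(\phi\).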
And then that means it's not long from the point in time when you've got your first AI that can significantly help you with AI research to having trillions upon trillions of AI scientists driving progress forward in all sorts of scientific domains. And that's just a really quite dizzying prospect. I think misalignment and misinformation are some of the challenges we'll have to face. But it's really like facing all of technological progress at once, and doing it in an incredibly short period of time, such that I think the default outcome is not that we handle it well.
Handling it well or not, I think we just birthed another topic for
a future podcast, Will.
Sure. There's a lot to talk about there.
Okay, so finally, where has all of this controversy and confusion landed for you? Where does it leave you personally?
It was extremely hard for me, and there's just no doubt at all that this has been the hardest year and a half of my life, for so many reasons: the horror of the harms that were caused, the incredible damage it did to organizations and people that I loved. And so I found that very tough, very tough to deal with. And I was in a really quite dark place, for the first time in my life, actually. I had this chunk of time where I kind of lost the feeling of moral motivation. I didn't really know if I could keep going. So I did actually even think about just stepping back, really just giving up on EA as a project in my life, because it just felt kind of tainted. And that was weird.
I mean, it was weird not having that motivation.
What would have produced that effect? Is it just the public perception of EA becoming so negative? Is it, practically speaking, fewer funds going into EA organizations that need those funds? Was it funds getting clawed back because Sam Bankman-Fried or FTX had given those funds and they had now become, you know, legally challenged? What was the actual impact?
Yeah, I mean, the reason for me thinking maybe I was just going to give up was nothing practical like that. It was psychological. I really felt like I'd been punched or stabbed or something. I felt like I'd been building this thing for 15 years, and I'd really worked, you know, unsustainably hard on that. I was tired. And it had just been blown up in one fell swoop, blown up by Sam's actions. And yeah, it was just hard to then think, okay, I'm going to get back up off the ground and go into the shallow pond and rescue another child or something.
When you say it's been blown up though, so you're talking essentially about the
brand damage to EA.
Yes.
With which you have been so closely associated as really one of its progenitors.
Exactly. And I was talking about what was going on in my mind, not about what in fact happened. So we can talk about what the hit was to EA. So yeah, huge brand damage. I think people were definitely keen to disassociate themselves. In my own case, I actually got surprisingly little in the way of being cancelled. But it definitely meant it was harder for people, especially in the aftermath, to go out and advocate. But I'm now feeling much more optimistic. So it was really tough. The year afterwards, morale was low. I think it was just harder for people associated with EA to do the work they wanted to do. But a few months ago, there was a conference among many of the leaders of the most core organizations. And I honestly just found that
really inspiring because for so many people, the principles are still there. The problems in the
world are still there. Like Sam's actions do not change in any way how serious the problems we're
facing are and how important it is to take action and how important it is to ensure that our actions
have as big an impact as possible.
So there really was a sense of people rolling up their sleeves and wanting to get back to it.
And then in terms of the brand, I think once you go off Twitter, people again are quite
understanding. People understand that the actions of one person don't reflect on an entire movement.
And ultimately, just not that many people have heard of either FTX or
effective altruism. So there's still plenty of room to grow. And so I think we're
starting to see things change again. And, you know, it's just made me reflect on the good things that
the effective altruism movement is accomplishing and has continued to accomplish. So just recently, GiveWell has now
moved over $2 billion to its top recommended charities. It's just so insane for me to think of that amount compared to where I was 15 years ago. That's hundreds of thousands of people who are alive who would otherwise have been dead. I live in Oxford, which has a population of 120,000 people. If I imagine a nuclear bomb going off and killing everyone in Oxford, that would be world news for months. And if a group of altruistically motivated people had managed to come together and prevent that plot, saving the city of Oxford, that would be huge news. People would write about it for decades. And yet that's exactly what's happened, not in nearly as dramatic a way, perhaps, but actually several times over. And so now I think, yeah, we're getting
back to basics, basically, back to the core principles of effective altruism, back to the idea that
all people have moral claims upon us. We in rich countries have an enormous power to do good.
And if we do just devote some of our money or some of our time through our careers to those
big problems, we can make a truly enormous difference. That message, it's as inspiring to me as ever.
Yeah, well, I certainly could say the same for myself. As I've described, I've always viewed myself as being on the periphery of the movement, but increasingly committed to its principles. And, you know, those principles have been learned directly from you more than from any other source. In a blurb for your most recent book, I wrote that no living philosopher has had a greater impact upon my ethics than Will MacAskill, and that much of the good I now do, in particular by systematically giving money to effective charities, is the direct result of his influence. That remains true, and I remain immensely grateful for that influence. So you should not read into the noise that has resulted from what Sam Bankman-Fried did any lesson that detracts from all the good you have done, that many of us recognize you have done,
and all the good you have inspired other people to do. It's extraordinarily rare to see
abstruse philosophical reasoning result in so much tangible good and such an obvious increase in
well-being in those whose lives have been touched by it.
So you should keep your chin up and just get back to your all-too-necessary work.
It remains inspiring.
And while you may no longer be the youngest philosopher I talk to, you still are nearly
so.
So just keep going.
That's my advice.
Well, yeah.
Thank you so much, Sam.
Yeah, it really means a lot that you feel like that. Especially because, yeah, your advocacy has really been unreal in terms of the impact it's had. As I was saying, Giving What We Can has over 9,000 members now. Over 1,000 of them cite you, cite the Sam Harris podcast, as the reason they have taken the pledge. That's over $300 million of pledged donations. And I guess it just goes to show that the listeners of this podcast are people who take good ideas seriously. You might think people who listen are just interested in the ideas for their own sake, that they find them intellectually engaging. But no, actually, people are willing to put those ideas into practice and do something like the Giving What We Can pledge. And that's just, yeah, that's amazing to see.
That's great.
Well, Will, we have another podcast teed up.
Let me know when our robot overlords start making increasingly ominous noises such that they're now unignorable.
And we will have a podcast talking about all the pressing concerns that AI has birthed.
Because, yeah, I share your fears here, and it'll be a great conversation.
Sure. I'm looking forward to it.