Big Technology Podcast - Silicon Valley's Effective Altruist vs. Accelerationist Religious War

Episode Date: November 29, 2023

Molly White, Ryan Broderick, and Deepa Seetharaman join Big Technology Podcast to dive deep into the Effective Altruism (EA) vs. Effective Accelerationism (e/acc) debate in Silicon Valley that may have been at the heart of the OpenAI debacle. White is a crypto researcher and critic who writes Citation Needed on Substack, Broderick is an internet culture reporter who writes Garbage Day on Substack, Seetharaman is a reporter at The Wall Street Journal who covers AI. Our three guests join to discuss who these groups are, how they formed, how their influence played into the OpenAI coup and counter-coup, and where they go from here. -- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Let's dig into Silicon Valley's religious war between the effective altruists and the accelerationists in a deep discussion with a fantastic group of guests. That's coming up right after this. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. And this week, we're going to get into the EA and accelerationist war that may or may not have been at the center of the OpenAI debacle, but certainly is starting to play a bigger role in Silicon Valley, and we need to get into it. So we have an unbelievable group of guests joining us today, all of whom have written about this. First, returning, uh, actually in short order after her first
Starting point is 00:00:40 appearance, and, uh, a crowd favorite, Molly White is here. She is the author of Molly White's newsletter, which you can find at newsletter.mollywhite.net, and she's also a crypto researcher and critic. Molly, welcome back to the show. Thanks for having me back. Great to have you here. We also have Ryan Broderick. He's the author of Garbage Day on Substack. It's a great newsletter that looks at Internet culture. I would say you're probably the most immersed reporter on Internet culture working today. Ryan, welcome. Thank you.
Starting point is 00:01:11 Thank you for having me. And, yeah, I'm excited to talk about my new religion that I worship now. So I'm very excited. I'm excited to hear which one you picked. We also have Deepa Seetharaman, who's a reporter covering AI at the Wall Street Journal, who just wrote a terrific long story about the role of EA in the explosion at OpenAI. Deepa, welcome. Thank you. Thanks for having me.
Starting point is 00:01:37 The term effective altruism and the term accelerationism have been thrown around in a wild amount without most people knowing exactly what they mean. And I think that on this show, in particular, like getting the definition and talking about who these organizations are and what they believe in is pretty crucial. So, Molly, in your story, you did a great job defining exactly what these two groups believe in, who they are. Can you take us quickly through what the divide is between EA and the effective accelerationists? And I know that your perspective here and we'll get into it is that we shouldn't just be focusing on these two groups, but like just for definition purpose, let's talk about who they are. Sure. So the two groups are, you could
Starting point is 00:02:22 probably spend an hour talking about each one, but at least, especially when it comes to AI, the two groups have become very prominent in terms of their philosophies around AI. So briefly, effective altruism is a group of people who believe in sort of doing as much good as possible with this sort of like data-driven analysis approach. And they have, some of them have very recently started thinking about whether it's really the best idea to be focusing on people who are alive today versus the possible thousands, millions, trillions of people who could be alive in the long-term future. And when they start thinking about that, they start thinking about existential risk, so things that could threaten those groups of people. And one thing that they have been
Starting point is 00:03:11 focusing on a lot lately is this idea of artificial intelligence as an existential risk and the idea that if you were to create a super intelligent artificial general intelligence, it could pose, you know, it could kill all humans, it could pose this enormous risk to people. And so that's part of the effective altruist concern and why they have largely adopted this idea that although they think we should be developing artificial intelligence, we should be doing so in a slower, more methodical way that tries to account for these risks and tries to create an AI that is aligned with humanity for whatever that means. The effective accelerationists, on the other hand, believe that we should just go all out,
Starting point is 00:03:59 no brakes, develop as quickly as we can without any real regard for the risks, and that it will all just sort of work itself out because of some philosophy that they sort of root in the thermodynamic bias of the universe, that, you know, the universe is not biased towards destroying itself, and so therefore everything will be fine. Those are sort of the general competing ideologies that have been recently at the forefront of the AI debate, although they are hardly the only groups involved. Ryan, can you talk a little bit about the formation of these groups or like how they manifest online?
Starting point is 00:04:35 I understand there's like big message board communities that sort of put forth these ideologies and debate them. And it's like pretty interesting the way that they organize and build online. Yeah, I've tried very hard not to have to explain any of this for many years now because it's just so tedious and embarrassing for everyone involved. Sorry, here we go. Yeah, but basically in the mid-2000s, there was the rise of a very kind of niche,
Starting point is 00:05:02 but influential message board and blog called LessWrong. And LessWrong is kind of like the thinking man's 4chan, I guess, with like a real preoccupation with the emergence of AI. The possibly apocryphal story about LessWrong that I think is very funny and I try to spread as much as possible is that they became obsessed with the idea of an AI arising in the future that would kill anyone that didn't help it to be born and sort of created this version of cyber hell
Starting point is 00:05:31 that they became obsessed with and had to ban discussion of it from the board for a while. Good like message board drama stuff. I highly recommend going back and checking out some of the conversations. But it was the birthplace for several sort of digital philosophies, online philosophies. So the big one is obviously effective altruism, but it was also the birthplace
Starting point is 00:05:52 of the very beginnings of Neo-Reactionism, which was sort of the initial philosophy that Steve Bannon subscribed to. A lot of kind of like techno-feudal thought was born on LessWrong, this idea of using automation to replace democracy, replace countries, turn them into machine-operated city-states. And you'll see kind of references, sly references to this idea still kicking around Silicon Valley because it's like a very seductive idea. Um, effective accelerationism didn't so much start on LessWrong. It's sort of like this, uh, I sort of compare it to the idea that you had the alt-right, who then became embarrassing to Gen Z, who then created the Groypers.
Starting point is 00:06:38 And effective accelerationism is sort of the same idea, which is that, uh, the effective altruists, the LessWrong guys, they're like doomer, boomer, uh, cringe dudes. So we've got to create like this new idea to galvanize younger people around. And so that's sort of how I kind of view the breakdown. But if you wanted a really good example... It's more than just a different iteration of the same ideology, right? Like you have one, especially regarding AI, you have one that's very much into slowing down AI progress, worried about that risk. And then you have another group that's like, no, put the gas pedal on as hard as you can.
Starting point is 00:07:16 Or am I not seeing it right? I have not found much difference in what these two groups want, only the time frame in which they want them to happen. It is, in my opinion, I have not seen enough difference between the two. In fact, effective accelerationism was more or less a meme until last week. Like it wasn't really even like a thing. It was sort of just like a way to make older Silicon Valley guys look cringe or more cringe. So it's now becoming a little more nuanced and a little more coherent, but it, it mirrors almost exactly kind of like the splintering of the far right post-Trump, like, but for AI, essentially.
Starting point is 00:07:58 Yeah, but you had, I mean, it is like you had Mark Andreessen, right, you know, effectively, whether, whether someone labeled him as this effective accelerationist or not, sort of carrying the banner of this group. And actually, what he's saying is holding weight within Silicon Valley: don't stop, and basically continue to build as fast as you can, and don't worry about regulation or safety or anything like that. That is a divide, though. It is. It is definitely a divide. I'm curious if Mark Andreessen is sort of thinking this through in a philosophical sense or if he's just sort of like latching onto it because it's exciting. Well, I mean, I was just going to
Starting point is 00:08:44 say, I mean, isn't Mark, when he did his techno-optimist manifesto, when was that, a couple months ago, didn't he say that any type of deceleration of AI would cost lives? And so if you're slowing down AI development, you're basically a murderer. Yeah, he did. Hang on. He did say that. He did. Yeah. Well, I mean, like, you know, there's also this, it's a real belief in, like, effective altruists definitely think that, you know, AGI or AI is going to usher in this new, potentially golden era. And so do the effective accelerationists. So I do agree that it's a difference in time scale, but where like they are like, well, we need to get everything perfect and right, however we define right and perfect, first. The accelerationists are like, now, now, now.
Starting point is 00:09:39 And there's a moral imperative to go now, now, now. Deepa, can you talk about how this like practically split, you know, within the boardrooms and the executive offices within some of like the bigger tech companies, you know, especially OpenAI, but also, you know, this has played out with Anthropic as well, which is effectively a split away from OpenAI. So talk about how this actually manifested, these ideologies, in different companies and, yeah, paces of innovation. I think, I think it's really interesting. It basically is a part of, it's a force that shapes everything. Either you are pro EA or you're anti-EA or you're mixed on EA, but no one I've met has been neutral on EA, or like they don't think about it that much, right? Like it's definitely, it's a force whether you like it or not. And I think the way that it, it sort of has coursed through these companies is, is pretty interesting. I mean, OpenAI had a lot of EA aligned researchers and founders at its very beginning and a lot of the early employees. So Dario Amodei, who's now the CEO of Anthropic, was, you know, running research at OpenAI for a long time. And he is very EA aligned. For this story, we asked all the companies, by the way, if they agreed with the characterization of them as
Starting point is 00:11:06 EA or not. And all of the companies told us they are not EA aligned. Well, sure, they're not going to say they are now. I mean, goodness gracious. Just a little data point. But, you know, the way they're perceived is definitely EA. Right. And so you have at OpenAI kind of two different cultures forming. One that is more EA adjacent. So this is like worrying about existential risk, worrying about, you know, an era where humans are treated by machines the way humans currently treat animals, which is something that Ilya Sutskever has said internally to various employees. Then you also have this, like, very practical side. You know, these are a lot of people that came from Facebook, actually, that are trust and safety
Starting point is 00:11:49 type alums that think about, oh, technology at scale can do really crazy things. Let's try to figure out how those principles might apply to generative AI. And those two cultures theoretically don't need to be in conflict, but are at times, especially with the resource disparity. So if you look at Open AI, they had and have this team that is dedicated to super alignment, which is how do we align super intelligent AI systems that as far as I know don't really exist yet? And what should the alignment look like? And one of the projects that we reported on was they were trying to create an AI scientist.
Starting point is 00:12:34 So this is like not a human scientist, like an AI scientist that would study alignment. And they have all that really interesting work going. Ilya Sutskever is like part of that team. So it has a lot of internal cachet. It's, you know, interesting. And then they have these more, I don't know, like hand-to-hand combat kind of trust and safety folks that are working on things like the next election, right? And so OpenAI, a couple months ago, just hired somebody on their policy team focused on the 2024
Starting point is 00:13:11 election. So it's like on two tracks, right? Like we have the super alignment team that is building the safety systems for systems that don't exist yet, based on values that they haven't defined, but then we're also planning for, you know, we just hired one person so far for the election, which is an election a lot of us have been calling the AI election for a while because of, you know, the impact that AI could have on that, on that time. So a lot of the people and sources we talked to saw that resource disparity as, A, not necessary, right? Like, why can't we do both was a question that I heard a lot, but also reflective of EA-style principles,
Starting point is 00:14:01 the long-term stuff gets more more resources than the short-term stuff. So that's just one example of how it cuts across. Molly, I'm going to turn it to you. I'm kind of curious, like, some of these things that EA is thinking about, whether that's the risk from AI or how to, like, be maximally effective. Like, it seems like they're interesting way to try to solve problems. I'm kind of curious whether you think there's merit to. their ideas? And what do you think the impact is, you know, as it gets applied within tech
Starting point is 00:14:29 companies? Yeah. So the thing about effective altruism is on the surface, it seems very reasonable. You know, the idea is that you should, you know, most people are altruistic to some degree and you want your altruism to be effective. Like that is hard to argue with. But you end up sort of in this rabbit hole around trying to define exactly what effective means. And, There's all of this philosophizing behind it. It's very heavily tied to the sort of rationalist, utilitarianist schools of thought, where you're sort of trying to develop these impartial equations around what the most effective use of your money and your time is.
Starting point is 00:15:12 And so people take that in very different directions. There are people within the EA movement who donate massive portions of their income to you know, pandemic prevention or malaria bed nets in Africa or, you know, things that are probably doing a lot of good. In fact, the EA community has done a lot of good in some, you know, charitable movements. So it's really hard to sort of criticize that. But then you also have this side of it that has spun out into this almost extremist, you know, we must focus on existential risk over any other problem, including very serious problems that are affecting humans today. We must value the lives of future humans above the lives of humans today.
Starting point is 00:15:58 And so it's one of those things that like it's really hard to criticize the movement because you'll often get people who will say, you know, oh, but look at all the good things. Yeah. And it's one of those things that like the problem is that when you take effective altruism and you put it into practice, you end up with a vast sort of milieu of implementations, some of which are enormously problematic. And so it's sort of hard to criticize EA on the face of it because you could be talking about 10 different things. But in the same vein, you know, you should not necessarily be looking at EA and going, oh, it's just effective, you know, philanthropy. That's great.
Starting point is 00:16:38 Because it is much, much more than that. I mean, Molly, you've been studying these organizations. Some organizations where people that are, you know, inherently tied to EA rise very high or sometimes run them. I mean, Sam Bankman-Fried, who we've talked about on this show, was tied to EA. We also have, you know, folks extremely high within OpenAI and Anthropic tied to EA. How has it been so effective at churning out leaders in the tech world who believe in this stuff? Well, I think it goes in both directions to some extent. I think some people were, to some extent, shaped by EA. You know, Sam Bankman-Fried encountered it very early in his life. And at least to
Starting point is 00:17:19 his telling made decisions about his life based on the philosophy, whereas there are other people who have sort of adopted it after much of their careers. So, you know, Elon Musk, for example, has somewhat embraced effective altruism, but it doesn't seem like it was really a defining feature more as it was something he learned about and said, oh, yeah, that sounds about right. And I think the same is true of Mark Andreessen and Effective Accelerationism, for that matter. You know, he was not thinking about effective accelerationism, which was not even a thing back in, you know, when he was developing web browsers. This is something he is now adopting sort of post hoc. And I, you know, I argued this in an essay that I wrote, but I think, you know, effective altruism and to an extent
Starting point is 00:18:03 effective acceleration, or sorry, the other way around, effective accelerationism and to an extent effective altruism, are really just sort of a rebranding of a lot of the same philosophies that existed in Silicon Valley for a long time, where people really want to feel good about themselves, like they're pursuing a higher cause, not just the pursuit of massive wealth. And these philosophies are very convenient because they allow people to define themselves as these hero figures who are, you know, accumulating wealth, but not because they're greedy, not for their own purposes. It's for a higher cause. You know, in the case of the altruists, it's because they're going to donate it all at some point in the future, maybe. In the case of accelerationists, you know,
Starting point is 00:18:47 they're creating all of this wealth. They're accumulating all this wealth in the pursuit of this technological goal that is, you know, completely beyond criticism because anything other than that is akin to murder. And so, you know, I think it's a, I think it's a philosophy of convenience in a lot of cases. Also, I just want to step in before we get too far away, which is that like they're not opposites in the sense that like EA are communists like they're both pretty aligned on the idea of making as much money as humanly possible uh they're both supported by a lot of anti-democratic thinkers they're both sort of uh center to not maybe just full on right wing like they're they're not total opposites they're just sort of uh aesthetic there most of is aesthetic i would
Starting point is 00:19:36 argue. But then also, yeah, resources internally, sure, but like they're not, they're not opposed to each other. One is not like a workers' party. Right. And I would add something that I wrote about, which is that, you know, I think recently we've been seeing this framing of the AI debate as being between the effective altruists and the effective accelerationists as though those are the two sides of the debate, when in reality that is a very small portion.
Starting point is 00:20:01 And it's a very extreme and very loud portion of the debate. But there is sort of this vast history of people who have been studying artificial intelligence, machine learning, you know, ethics, all these different things for decades who are like, hello, you know, like we have a lot to say here. We are not the shiny, me, me, effective accelerationsist, but, you know, we have been thinking about this. And that's actually a really good point where it's like, Molly makes a really good point here where like those are two buckets of the people who love AI.
Starting point is 00:20:33 And it's not dissimilar from the AI is going to destroy the world versus AI is going to fix the world debate, which is not really a debate. That's just like a marketing exercise we're watching being played out like in front of Congress sometimes. So these are not really opposites in any way. Okay, I want to bring in Deepa here because the headline that you have on your story seems to sort of paint a different picture. I think it's not necessarily. I think effective altruism is so polarizing that it does split the industry. I think one group that has also served as a foil for the EA people is what's called the AI ethics group.
Starting point is 00:21:13 So these are the people that think we should probably focus on current problems, things like bias and misinformation and all kinds of things. Those are the people that are actually getting squeezed at OpenAI, right? Like, a lot of the trust and safety folks that I spoke to, um, felt like they were more in that bucket, and, at least, they thought the OpenAI team that did trust and safety was more in that bucket. And they were pushing really hard against this movement,
Starting point is 00:21:48 but it just didn't have the same clout, right? And so, and then you also see this other group, the effective accelerationists that just think it's, you know, again, like murder to slow down AI development. But the AI ethics people are like, we don't, this is an art, these are not our people either, right? they're just kind of stuck in the middle trying to find ways to minimize harm and not a lot of people are listening to them at all the companies but but how did this factor in in the altman episode right
Starting point is 00:22:21 because the narrative is that he did get pushed out by the EA group and he was more aligned with the acceleration folks. And in fact, I think this is from your story, he called effective altruism an incredibly flawed movement with very weird emergent behavior. Good take. I think, I think it played out in a couple ways. I think first, it's probably important to note that the board, in their statement, as sort of lacking in detail as it was, said that they didn't fire Sam for anything but lack of candor, right? Like, they didn't fire him because he wasn't AI safety enough. Um, but our understanding is that there were a lot of AI safety debates
Starting point is 00:23:20 Like, they're trying to get everyone's attention and not really succeeding either. But Sam, I think, got into some disputes with board members a little bit about these issues. Because the board was EA heavy. Yeah. Well, two of the board members were associated with EA organizations. So that's Helen Toner, who used to work at Open Philanthropy, which is a very EA organization. And Tasha McAali. Right.
Starting point is 00:23:49 And she was on. And Ilya also, who's affiliated. If you talk to people who know Ilya, he's like, I'm not EA. I'm AI safety, which is like a subgroup, I guess, of this, right? So, I mean, partly this is a little hard conversation at because there's, It's not like you're a Democrat or a Republican and you've registered. You have like, it's a big range. It's more religious, right?
Starting point is 00:24:10 Right. So there's a, their names are all very similar and somewhat overlapping. Like, AI safety is different from AI ethics, apparently, you know. So it's like, someone needs to name things better. It's like 2% whole, soy, you know, they're all kind of different. Yeah. Yeah.
Starting point is 00:24:28 Well, they're all milk, right? They're all very milk. Yeah. If you think everybody's so. similar. Do you think there was an ideological dispute there? Because it seems like in your piece that you thought there was. Yeah, no, I mean, like, I have no trouble believing that a bunch of nerds read too many blog posts and destroyed a company over it. Like, and then that it's having ripple effects across Silicon Valley. Like, yeah, no, I, I am extremely dismissive of
Starting point is 00:24:52 this stuff. And I think it's very silly, actually. But I do think that the people who believe it take it very seriously. I sort of view AI as a bottleneck for these groups, because like, especially the effective altruists who, you know, they could be focusing on COVID-24 or like vaccines for the swine flu that just broke out or whatever, but they are obsessed with AI because they've determined that it is the ultimate risk to humanity because it's exciting and interesting and it like makes for a good post. And that is sort of my take across the board with these groups is that like they all require the buy-in that AI is important and not just autocorrect. Okay, so Deepa, sorry, you were, you were trying to take us through exactly what happened
Starting point is 00:25:33 there between these two groups. Right. So, you know, so Sam would have, like, conversations with different people at the organization. And there were a lot of debates inside Open AI about AI safety and where is the line and how do we navigate the line and whatever. But it wasn't so much his stance that bothered the board. It was the response, like how he treated the board and how he engaged in these conversations, which, again, the board hasn't provided a ton of detail here. So it's very easy to wildly, wildly speculate. But I think it's safe to say that they felt like he was lying, right? Or that he was misleading them in the way he responded to different things. A lot of this is still very up in the air. I'm working hard to try to get
Starting point is 00:26:22 examples. I'm trying, you know, but it is, but it has been a little bit frustrating, because if the board is going to say, hey, Sam lacked candor in these conversations, like, what happened, right? And how, give me a specific example of how he responded. And what did AI safety specifically, like, what conversations around AI safety were so made him responded such a, in a way that made the board really take an extraordinary step to fire him, right? There's still a lot of open questions there.
Starting point is 00:26:56 Yeah. And right now, what we're kind of seeing is that, you know, people using the framing that this, this was this battle between EA and accelerationists, like people are saying the accelerationists won and EA has lost and EA has, you know, had taken a couple of setbacks here with the OpenAI situation and the Sam Bankman-Fried situation. So, Molly, how would you say that they're responding and what do you think this means for that movement? Well, I mean, I think it's kind of the same way that crypto responded to the SBF situation, where they can just be like, oh, that wasn't us.
Starting point is 00:27:32 You know, like, there's a lot of reframing and redefining of terms when things seem to go poorly for a movement. And so I, you know, I wrote this piece about effective altruism and effective accelerationism. And the vast response that I've received from people who define themselves to be effective altruists is like, oh, that's a totally different group of people. You know, that's not what EA is all about, trying to sort of downplay what has happened recently or sort of do the, you know, no true Scotsman around, like, those aren't real effective altruists type of things. But, you know, like I said earlier, I think it's really hard to sort of cast such a broad net around a movement that has, you know,
Starting point is 00:28:16 a very broad range of people in it. But I do think you're right that the takeaway that a lot of people got was effective accelerationists have won, effective altruists have not, and, you know, clearly because of this, the winning philosophy is just to develop AI with no brakes and, you know, go completely pedal to the metal on it, which I think is concerning, you know, especially again, because I feel like it removes a whole important part of that conversation. You know, not only are there more people than effective altruists and effective accelerationists at OpenAI having these conversations, there's a lot more in AI than OpenAI. And so, you know, the idea that effective accelerationists now will be
Starting point is 00:29:04 dictating the future of this entire field, I think, is, it's something that risks becoming sort of a self-fulfilling prophecy, where we now give these people far more attention and sort of, you know, weight to their opinions than probably ought to happen. Molly, you've written about this. But, like, one of the things that we've also seen is that, like, both groups believe that AGI or, like, really scary doom-type AI applications are right around the corner. Which, like, if you're looking at the technology, you're like, what? Like, even this big Q star, you know, revelation that came over the weekend about OpenAI, like, you know, it seems to be not a complete nothingburger, but not like a, you know, 10-alarm fire like people are making it out to
Starting point is 00:29:48 be so why are they also convinced a little bit to to Ryan's point that like there isn't actually that much of a difference between these two groups. They do, they do diverge on sort of a very important point, but they both have this sort of mythology in their heads about this, you know, godlike AI being that is just around the corner. And, you know, I think it's important to look at the historical record when it comes to that. We have seen predictions of gods being created since, I mean, about as long as we have a historical record. Molly, Bitcoin could go to a million. That's true.
Starting point is 00:30:26 Hyperbitcoinization could happen. AGI and Bitcoin are going to the moon. Yeah. Yeah, but like, I mean, you know, we heard this with the sort of Sparks of Artificial General Intelligence paper. We've seen this, yeah, we've seen this, you know, for years and years in the computing sector and far longer than that, just in the sort of religious side of things. And so I think, you know, it's one of those things where people get very tied up in the mythology and the religion behind it.
Starting point is 00:31:01 And they don't necessarily pay all that much attention to the technology. And people begin to really change their own definitions of what artificial general intelligence means, what sentience means, what humanity means in order to make these things seem more plausible or more true than they really are. I also don't think it's an accident that we're hearing about all of this stuff right as every major company has rolled out an AI widget for their service and it's sort of been a flop. Like we're in like an extremely like kind of boring moment for AI in between releases. The hype is sort of dying down as it does every time we get a new version of these tools. And now all of a sudden there's like two competing like doomsday cults around this idea. Like, that doesn't seem like totally an accident to me. Yeah, Deepa, like from a practical level, like, how is this playing out, you know, in corporate boards and within AI companies?
Starting point is 00:32:01 Is it going to continue to spiral? I was talking to somebody about this last week who said that they felt that the OpenAI saga and all the speculation about EA was more damaging for EA than Sam Bankman-Fried, because with SBF, you can look at them and be like, that's one guy. And this is like a movement. This is like a lot of people acting in unison apparently irrationally, right? Like that is the attitude and that's the feeling. That's like the vibe right now. And I think there's a lot of truth to that. I think that, you know, EA people already are viewed as a really insular, really kind
Starting point is 00:32:39 of clubby organization that they only ever cite EA papers written by other EA people and they only ever want to work with EA people. This isn't obviously everybody. this is a stereotype, but that I wonder if the double hit to their integrity and their image is going to force like a further contraction or where they just sort of like go deeper. But the reality is also that over the last year, this movement has had a lot of influence, not just on a practical level, like on the ground at AI companies, even though that's significant.
Starting point is 00:33:14 Like we haven't talked about hiring yet. And a lot of EA organizations start on student campuses, and that's the pipeline. If you want to hire an AI researcher at this point, you're going to hire from a serious university like Berkeley or Stanford or Oxford, and a lot of them are in this movement. So there's a gigantic overlap. So you're probably going to hire a bunch of EA people, and you're going to have to make them happy because they could leave at any time, right? There's huge competition for people who can actually build these systems. So there's that. But then, you know,
Starting point is 00:33:53 They're also have a lot of pull on a policy side. I mean, if you look at some of the comments from the EU about AI risks, they talk about existential risk, like Rishi Sunak's entire AI safety summit. A lot of that were, a lot of the speakers were EA aligned or EA, the White House even, like definitely, you know, Dario Amadai and like other people who are broadly viewed as EA people, they went to the White House and talked about their views. Like, they have a seat in the room. So I don't think it's just going to, like, go away at all because they've already been
Starting point is 00:34:23 there and they're very serious and they keep, you know, they're in the room. They can, they continue to hold some kind of influence. Right. I got to ask you what's going to happen to Anthropic. I mean, Anthropic was seen as this like counterweight to Open AI, but run by, you know, a lot of, I mean, I know they're like, quote unquote, not associated with EA, but they have big EA influence there. They left Open AI because of safety.
Starting point is 00:34:47 you have Amazon that just invested $1.25 billion in that company. Google also invested billion, like more than a billion. Are we going to see the same stuff happen in Anthropic as we did within Open AI? I mean, what's the future of that company now? Because it's even more closely tied to this movement. Yeah. Well, it's also, as I understand it, a little bit more homogenous. Like, there are a lot more EA people there.
Starting point is 00:35:17 Right. There's a culture fit test, like when you get hired, where you're asked that as far as I understand, I don't have, like, the questions in front of me, but a lot of the questions are sound, they're basically trying to test whether or not you'd be EA enough to be an anthropic. That's how it's been described to me. Good luck. Good luck. And they, you know, they have a philosopher on staff who, yeah, who everybody does that. Right. But it's Amanda Askell. So she's the former wife, I think, or of Will McCaskill, who is the, right, the founder of EA. So there's like a big, tight connection there. But, you know, they're not, they're, will they be split up by it?
Starting point is 00:36:08 Probably not just because they are all coming from the starting point that EA is good. It's just maybe the execution is a problem or we are misunderstood and it's a public issue. Some of the stuff kind of reminds me a little of like what it was like covering Facebook, if you remember, Alex. Oh, I sure do. There are a lot of people internally at Facebook that were like, well, no, Facebook's really good. We just need to get better at managing the bad stuff and like making sure people can see how good we are. There's definitely that population inside the company and it gives me the same echo. Right. So it seems like we're largely EA skeptical in this conversation.
Starting point is 00:36:48 First of all, if someone has a counterpoint they want to make, my email is Alex at bigtechnology.com. So hit me up there. I'm happy to listen to it. Maybe we can bring you on. But also, like, I'm curious just from the group here, like, what would someone who's steeped in EA or effective accelerationism say? As in, like, you know, is there, is there any like rebuttal to some of these points that we're making that we should be, you know, that we should be considering? Yeah. I mean, they, they, I mean, Molly's point was correct, which is that like, if you, if you really talk to a person who believes in EA, like, they're, they sound very reasonable. And most of the things they're going to point to are, are not Sam Bankman-Fried committing financial crimes, right? Like, it's, it's mainly just that when you start to take that long-term view, uh, things start to become
Starting point is 00:37:46 That's why a lot of EA guys end up showing their ass on Twitter, like posting weird stuff about race science and like demographic shifts and stuff. And with the accelerationists, like, I mean, I hesitate even calling it a philosophy. I think it's still very much just a meme. Like it's the doge coin to EA's Bitcoin, right? Like it's not really anything yet. It could be. And in fact, like, I think that if there's really any legacy,
Starting point is 00:38:12 from the last week of drama inside of OpenAI, it's that the accelerationists all found each other and now know how to talk to each other, and now we'll start to see sort of like those memes becoming more serious and becoming like, you know, possibly a real counterpoint as opposed to a bunch of different factions wanting different things.
Starting point is 00:38:29 You do have people like with their Twitter bios, like Garry Tan, the head of Y Combinator, it's like right in his bio. He calls himself an effective accelerationist. Yeah. Yeah, I think, you know, I think a victory
Starting point is 00:38:49 effective accelerationism idea. Again, I agree it's not really a philosophy. I wouldn't even really call it a movement. I think it is mostly a rebrand of, you know, move fast break things, which has been Silicon Valley's, you know, mantra since early Facebook. But with the added sort of religious, almost philosophy behind it. And the very effective altruist style of speaking and writing in this very sort of esoteric and long-winded way, it's, I think it's, it's probably, it's something that I don't give a lot of weight in terms of its own sort of, um, foundation. the quality, I guess, of the ideas behind it.
Starting point is 00:39:40 But I do think that the effectiveness of its proponents in terms of spreading it, sort of memeifying it, making it appealing to especially younger people who are just getting into tech and trying to find a way to think about their place within this sort of huge capitalist structure is something that probably should be taken seriously. as well as the tendency of some of these so-called movements, you know, except effective altruism being one of them to almost radicalize the people within them as they normalize these sort of thought exercises that can lead down a very concerning path.
Starting point is 00:40:27 Ryan, I'm kind of curious, like, you read the message boards. I mean, don't message boards and online forums have a tendency to radicalize and also make the most intense ideas rise to the top? Is this just kind of like a classic case of that? I've never heard of that ever happening before. What could you possibly be referring to? Yeah, I mean, no, it does. I mean, that's definitely the case.
Starting point is 00:40:51 I mean, we didn't even really go further into this because it's so dense. But the reason I was even writing about effective accelerationism this week is because I got sent a tip about a group that was causing trouble on TikTok from my readers. And when I started poking around, it turned out that they were a group I had written about previously of crypto accelerationists that had rebranded into AI accelerationsists and were part of the New York downtown scene trying to get Peter Thiel's money. Like these are the same people that have been kicking around for five years trying to bring about some kind of white nationalist apocalypse via automation. And they went from cryptocurrency to AI. And it's the same people doing the same stuff, sharing the same memes that they've been sharing since 2017. And I think when you talk about the radicalization of these message boards of these online communities that are talking about these philosophies, what it is is that you don't know who you're talking to and you don't know what their goals are where their funding is coming from or what they're influenced by.
Starting point is 00:41:47 So, you know, it's very common to be reading like a Twitter thread and all of a sudden, like, if you don't know, I mean, Molly or Deepa might be able to do this, but I don't think a normal person would be like, oh, yeah, there's the EA person. There's the rationalist. There's the Neo-Reactionist. There's like the techno-feudalist, Mencius Moldbug fan, or whatever. Like, normal people don't have the time to care about this shit. And unfortunately, like, people in Silicon Valley really care about it. And so it's, it's very thorny, um, and we don't really know where this stuff is traveling or how it's mutating until it's in front of our faces. And one of the things that I wrote down before we started is, why do we need these movements? Like, why can't we just have folks that are building cool things and, you know, saying they're trying to build cool stuff? Like, ChatGPT is a cool product. Like, do you, you don't really need a, um... Well, I don't think there was like a schism around Clippy in the 90s. There were the pro-Clippyists and the anti-Clippyists, and this was
Starting point is 00:42:44 very similar, I think. Exactly. But there have always been, there have been movements around technology, like, who should technology be for? What should it look like? And this is kind of what we had asked of Facebook, right? Like, why didn't you guys think about the harms before, right? Connecting people is de facto good, right? Right, right, right.
Starting point is 00:43:20 Yes. I mean, in retrospect, if AI does wipe us all out like next week, this podcast is going to look dumb. I can't wait. Yeah. Put me out of work. Give me some UBI. think that like you're right that this stuff goes all the way back i mean the protestant reformation one version of it is to blame it on the accessibility of literature and the printing press and the
Starting point is 00:43:40 ability to read the Bible and go, wait a minute, I don't agree with this, right? So, like, every technological revolution, if we're really in one right now, does typically involve some sort of political, social, religious upheaval. I just think the tech guys are so excited about the idea that they're in a revolution that they may have, like, invented a religion before they've been able to prove if we're in one or not. And I think there's also this sort of urge among some of the wealthier people in Silicon Valley to try to ascribe a greater meaning to the work that they do. And these types of philosophies are very appealing to those people. You know, Mark Andreessen being one of the most prominent, who wrote this literal, he called it a manifesto. You know, I think that that just sort of exposes that, you know, he is sort of an embarrassed billionaire to
Starting point is 00:44:31 some extent, who wants to define his legacy as more than just amassing wealth off of venture capital. And so by adopting this religion and becoming one of its preachers, you know, I think that's a way to do that for some of these people. And I think that's why you see people in those positions become drawn to this. And I also think they realize that it is a very effective way to spread their own influence by becoming the, you know, leaders of these sort of movements, they realize that, you know, they can preach to a very willing audience who might not otherwise listen if they're just saying, you know, here's this new tech thing that I'm interested in. Don't you think it's cool? So just stitching everything together, my takeaway here
Starting point is 00:45:16 is that we've really just seen kind of round one, that like the story doesn't end with OpenAI. It almost begins here, with the accelerationists starting to find each
Starting point is 00:45:40 of emerge and the rebranding of these philosophies that we've seen happen over the last month or so, have a good chance of outliving the generative AI fad if we're in one. Like we may lose interest with mid-journey next week, but I think these people are going to be around in sort of thinking about this stuff in this way for a lot longer? I think that's very true. I mean, if you look at crypto, a lot of those people have moved into effective accelerationism without even blinking. And, you know, the crypto was the last old, interesting thing. And now it's artificial intelligence or AGI or whatever they want to focus on. But, you know, they're continuing the same beliefs and behaviors with just whatever is in front of them.
Starting point is 00:46:22 Great. All right. So let's, let's round this one out. I just want to give everybody who's listening a chance to find our great guests online. So Molly White's, oh, it's called citation needed now. Okay, remember, you rebranded it. Newsletter.mollywhite.net. And then Ryan Broderick's Garbage Day, Garbageday.com. And you can find Deepa's work on WSJ.com. Okay. Well, thank you, Molly, Deepa, and Ryan. This has been a great conversation. Really appreciate you coming on and helping us break this down. And I also like really love that, you know, you didn't take it at face value that this is a one versus one, but kind of broadened it out and talked about what we're really looking at here, which is great. Wait, were we supposed to fight each other? Oh, no, no, I see. The philosophy. Yeah, yeah, yeah, yeah. Exactly. So thank you so much. Ryan and I will one versus one each other at a different point.
Starting point is 00:47:13 For sure. We will host it. We'll host it. Yeah, so thank you so much. Thank you to our guests. Thank you to our listeners. And thank you to everyone who helps make this podcast possible. We'll be back on Friday with another show. breaking down the week's news and continue with our coverage of these religions of open AI of the AI field every week, Wednesdays and Fridays. All right, we'll see you on Friday on Big Technology Podcast.
