Bankless - Tech Acceleration vs Deceleration: e/acc vs. d/acc debate | Erik Torenberg & Haseeb Qureshi
Episode Date: April 24, 2024

Erik Torenberg and Haseeb Qureshi join us for today's debate. Should we accelerate or decelerate our tech progress? How about when it comes to something as powerful as AI? This is not just a debate in tech circles - this is a political debate that is poised to define the next decade. We've had leaders from both sides of the debate on the podcast - episodes with Eliezer Yudkowsky and Beff Jezos. You've heard what they think. Today's episode is a discussion and at times a debate on these opinions to help you think through where you stand on this issue - Haseeb Qureshi tends toward the EA side, which favors more caution and regulatory intervention around AI, while Erik Torenberg tends toward the e/acc side of the issue, which favors faster progress and a lighter touch.

------

📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium

------

🔐 SAFE | USE SAFE, GET REWARDED. CHECK OUT SAFE PASS
https://bankless.cc/SafePass_NL

------

BANKLESS SPONSOR TOOLS:

🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
https://k.xyz/bankless-pod-q2

🔗 CELO | CEL2 COMING SOON
https://bankless.cc/Celo

🔐 SAFE | USE SAFE, GET REWARDED
https://bankless.cc/SafePass_NL

⚖️ ARBITRUM | SCALING ETHEREUM
https://bankless.cc/Arbitrum

🛞 MANTLE | MODULAR LAYER 2 NETWORK
https://bankless.cc/Mantle

🗣️ TOKU | CRYPTO EMPLOYMENT SOLUTION
https://bankless.cc/toku

🏙️ CONSENSUS | SAVE 20% WITH CODE BANKLESS
https://bankless.cc/4aykesD

------

TIMESTAMPS

00:00:00 Start
00:03:51 Intro To Haseeb and Erik
00:05:58 Defining EA
00:11:30 Reflecting on EA
00:19:23 Erik's Case For E/Acc
00:23:05 Defining E/Acc
00:27:18 The Problem with E/Acc & Humanism
00:34:20 How Big is This Debate Really?
00:46:31 AI Safety & Regulatory Capture
00:56:39 EA's Political Affiliation
00:59:43 Extinction Risk
01:11:04 Politicizing the Debate
01:19:56 The Pro Tech Case
01:24:08 Productive Tension
01:28:33 AI Ethics vs Safety
01:32:53 Pick a Side

------

RESOURCES

Haseeb Qureshi
https://twitter.com/hosseeb

Erik Torenberg
https://twitter.com/eriktorenberg

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
If you look at the companies that are actually accelerating, they're mostly EAs, right, which
kind of fucks up the narrative, which is like, oh, the EAs are ruining everything.
They're slowing everyone down.
And, like, it's the accelerationists who want to go fast.
The accelerationists are mostly VCs.
They're mostly not building anything.
They're just, like, on the sidelines cheerleading for this imaginary team that they think is
in the driver's seat, when in reality, it's fucking EA people who were even thinking
that this was possible in the first place, right?
It was all the VCs who were not investing in any of this stuff because they
thought it was so impractical and it wasn't worth it.
It's the EAs who believed that AI risk was a real thing, because AI was going to become really powerful, really quickly, who created all this shit.
Welcome to Bankless, where today we're exploring the frontier of the tech acceleration versus deceleration debate.
This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless.
The question on today's episode, I think a question that is everywhere right now, should we accelerate or decelerate our tech progress?
And how about when it comes to something as powerful as AI?
This isn't just a debate in tech circles, at least not any longer.
This is now a political debate.
I think it's poised to define the next decade.
We've had leaders from both sides of the debate on the podcast.
You might remember our episode with Eliezer Yudkowsky,
who argues very much from an AI safety perspective, and Beff Jezos,
who argues almost the complete opposite of that.
So you've already heard what they think.
Today's episode is more of a synthesis-type episode.
This is a debate on these two opposing opinions to help you think through
where you might stand on the issue. We have Haseeb Qureshi, who tends towards the EA side of the
conversation, which favors more caution, more regulatory intervention around AI. And we have Erik
Torenberg. He tends more towards the e/acc side of the issue, which is a bit more tech-forward:
faster progress, and a lighter touch from our regulators. We talk about the history of the debate,
the cultures in Silicon Valley. We talk about Sam Altman. We talk about the regulators,
the big, bad U.S. government, SBF, and what they think is the most
sensible position on the acceleration versus deceleration debate.
Before we get into some of the takeaways, our friends and sponsors over at SAFE
want to let you know about SAFE, the multi-sig.
Bankless uses this.
I've used this personally.
I have my own personal multi-sig.
A lot of people use the SAFE multi-sig.
And now, SAFE is launching an activity rewards program.
So as you use your multi-sig, you can get rewarded with points.
So whether it's weekly use, or just pushing volume through the SAFE, or having a high number of
transactions, or storing a lot of assets with SAFE, everything that's an activity around
SAFE now gets you SAFE points. And they also want you to stay on the lookout for further activities
that will offer additional points as they unlock. If you want to learn more about SAFE points
and what you can do with them, click the link in the show notes to get started with SAFE
points today. One of the reasons, Ryan, why I was really motivated to get this episode out here
is because I think right now this conversation about, like, accelerationism is relevant to both crypto and AI:
are we pushing the gas? Do we want to push the gas further? It's not just
relevant to these two industries. I think there are more industries this is going to be relevant
for, beyond just the scope of this conversation. I think this is, like, a sign of things
to come. There are other industries that are also going to present some very hard conversations
to humanity, and therefore government, and they're going to be coming out of the
tech sector. When we talk about accelerationism, the world is going to look so much more different
in 20 years than over any other 20-year increment that we've ever had before. And that's only going to
increase. So we have biotech and gene editing on the horizon. We have longevity. People like
Bryan Johnson literally trying to become immortal. And I think AI and crypto are, like, banded together
as kind of these accelerationist movements. But the thing is, I think these are the first of many
industries to show up on the accelerationist side.
And this conversation about, like, caution versus gas, do we press on the gas pedal or do we
hit the brakes, that is going to define society more and more and more,
I think, as we move forward into the future.
So that's kind of the frame of thought that I hope listeners go into this conversation with.
I completely agree.
This is the political debate of the decade, but it's also probably the political debate of the
entire next century.
So a lot of podcast episodes
for us in the future. All right, guys, we're getting right to the conversation. But before we do,
we want to thank the sponsors that made this episode possible, including our number one recommended
exchange, the place you can accelerate your crypto buys. That is Kraken. Go create an account.
If you want a crypto trading experience backed by world-class security and award-winning support
teams, then head over to Kraken, one of the longest-standing and most secure crypto platforms
in the world. Kraken is on a journey to build a more accessible, inclusive, and fair financial
system, making it simple and secure for everyone, everywhere to trade crypto.
Kraken's intuitive trading tools are designed to grow with you, empowering you to make your
first or your hundredth trade in just a few clicks. And there's an award-winning client support
team available 24-7 to help you along the way, along with a whole range of educational
guides, articles, and videos. With products and features like Kraken Pro and the Kraken NFT marketplace
and a seamless app to bring it all together, it's really the perfect place to get your
complete crypto experience. So check out the simple, secure, and powerful
way for everyone to trade crypto, whether you're a complete beginner or a seasoned pro. Go to kraken.com
slash bankless to see what crypto can be. Not investment advice. Crypto trading involves risk of loss.
Have you heard about SAFE? They're not just pioneers in crypto custody. They're also leading
the transition to the world of smart accounts. With SAFE, managing your crypto has never been
smarter or safer. They've recently passed a hundred billion dollars in total value secured
and are now deployed everywhere with over 15 supported networks. But that's not all.
SAFE is revolutionizing the game with their full compatibility with ERC-4337,
making embedded wallets super easy to integrate into your daily crypto activities.
And for the crypto enthusiasts out there, SAFE has something special.
The SAFE Pass Activity Rewards Program.
It's a fantastic way to engage and earn rewards, launching just after SafeDAO voted in favor of SAFE token transferability.
So, what are you waiting for?
Head over to SAFE.com.
Sign up for the SafePass Activity Rewards Program.
And don't forget to check out SafeCon during Berlin Blockchain Week,
where the future of smart accounts is unfolding in real life.
Click the link in the show notes to secure your spot today
and be a part of the transition
that's setting a new standard in the world of custody.
Taking self-custody of your crypto
is one of the most important things you can do
on your bankless journey.
It's also one of the hardest things to get right
with huge consequences if you don't.
If you want help going bankless, talk to Casa.
Casa helps you take custody of your crypto assets
so you don't have to wonder whether you're doing it right.
Casa is a one-stop shop for doing self-custody the right way.
With Casa vaults, you can hold ether, bitcoin,
stable coins, all with one simple app and multiple keys for the ultimate peace of mind
with a support team to help you every step of the way. But it doesn't stop at self-custody,
because even though crypto is forever, you are not. We all plan on making life-changing wealth
in crypto, but with CASA's inheritance product, life-changing wealth can elevate to generational wealth.
For your kids and your loved ones, who don't know anything about crypto. With CASA, you won't
lose your private keys and you won't accidentally take them to the grave either.
Click the link in the description to get started securing your generational wealth.
Bankless Nation, I'm excited to introduce you to Erik Torenberg, an investor, technologist, and fellow podcaster.
He's got a podcast network called Turpentine Media, with lots of excellent podcasts in that network at the intersection of technology, culture, philosophy, and social commentary. Erik, welcome to the podcast, man.
Thanks for having me. Longtime Bankless fan. Just stoked to be on.
You know, actually, before we introduce your fellow guest here, I remember a very early podcast episode of yours that had Ryan on as a guest, actually.
Ryan and Cyrus Younessi, I believe,
which was during some of the formative years around, like,
ETH is money as a topic.
Amazing.
Yeah.
Joining Erik here on the podcast today, as Bankless listeners probably know,
Haseeb Qureshi, another investor, technologist, podcaster,
who we've had on Bankless many, many times before.
He's a GP over at Dragonfly and was also in the effective altruist camp
well before anyone in crypto even knew what that meant.
Haseeb, welcome back to Bankless.
Thanks for having me, guys.
Always fun to be here.
So just to lay some foundation for this episode: I think what this episode is going to be is going to evolve on its own. But really, we are watching two camps evolve in the tech space, and it's also migrated into government as well. But there's also some nuances. This is the accelerationism versus decelerationism camp. There are some other tribes inside of these tribes. There are some adjacent tribes. But overall, there are identities forming in the world of tech and governance, and around
where we are going as a society in the future, and it's causing arguments, it's causing debates.
The accelerationists want to just hit on the gas pedal and innovate, innovate, innovate as fast as possible.
And the decelerationists are like, ooh, scary. Let's regulate. Let's regulate these things.
Now, I don't think we've totally seen perfect identification, or identities form, around these tribes.
I think this is all new for Silicon Valley as a whole, which is why we want to bring you guys here on the podcast today to kind of, like, suss out and, like, figure out this landscape
of these growing alignments across tech, across Silicon Valley, across politics.
Think we can do that here on the show today?
Let's do it.
Perfect.
So I kind of just laid down a whole bunch of stuff, different identities, different values.
Maybe, Haseeb, you can kind of get us started here, because you were an effective altruist
well before anyone in crypto knew or cared what that meant.
And that was one of the earliest, call it, tribes, both in Silicon Valley and the venture
space, and also just broadly. Maybe you can kind of speak to, like, the formation of that
whole camp and how it's evolved as part of the whole, like, Silicon Valley tech sector.
Sure. So I think effective altruism, for most people, came into the popular imagination with
SBF, which is now kind of an irrevocable taint on the movement and on the kind of ideas behind
EA. But EA originally came about around, like, 2011, 2012, mostly around Oxford and Cambridge, from a small
number of philosophers who were very
interested in this idea of
trying to apply a more rigorous form
of ethics and morality to
philanthropy. It originally started with philanthropy and then
broadened into a number of other ideas.
So it starts with folks like Peter Singer
and
a small number of...
who's the guy
who was affiliated with SBF?
There's Will MacAskill.
Will MacAskill. That's right. Will MacAskill, Toby Ord,
a few guys who all basically came up around that time
who started writing about a lot of these ideas in public.
And the origination of the EA movement
is very close to the rationalist movement,
which was also founded by Eliezer Yudkowsky,
who also has a lot of ties to AI doomerism.
And so the EA, though, is a little bit different
from rationalism.
Rationalism is the cohort that really came out of Silicon Valley,
whereas actually EA mostly came from outside of the tech sector,
mostly came from universities.
So now EA, the core
ideas behind effective altruism very simply are that most people, when they think about doing
charity or doing good for the world, they kind of turn their brains off. They're basically guided
by their emotions. They kind of donate based on mood affiliations or what makes people look good or
what sounds good. And they don't think as rigorously as they do when they are, for example,
doing science or technology or politics or, I don't know, maybe not politics, but when they're
building businesses, for example, right? So when you're building a business, you A-B test. You look for
statistical significance. You're like very careful in making sure, does this thing actually
improve my product? Does this thing actually get the thing that I want? And if not, you are very,
very careful to measure the difference and allocate resources accordingly. But when most people do charity,
they're just kind of like, well, you know, teaching violin in the inner cities. That sounds good.
Sure, let's like go, you know, start a museum. That'll be good for kids learning how to read or whatever.
These kinds of things that like, they sound really good, but are they actually good and how good are they?
And so EA kind of came up as this very scientific, engineering-heavy approach to thinking about how to do ethical actions in the most rigorous way.
And there's a whole framework around it.
I don't think we need to go into the details.
But one of the things that came out of EA, which is now very, very relevant in the world of Silicon Valley, is for a long time, EAs have argued about what is the most important cause area that actually can move the most impact in the world over a short amount of time with a small amount of resources.
And, you know, many people talk about global poverty, animal welfare, but one of the most
popular cause areas for a very long time, like almost 10 years now, has been AI risk.
The idea being that as people develop more and more powerful AIs, AIs might be so powerful
that they could end up disrupting society or even causing, you know, mass human extinction.
And this was before ChatGPT. This is before any of the modern, you know, kind of acceleration
in AI advancement that we've seen over the last 10 years.
And so people have been working on this, you know, in a lot of small
groups like MIRI and CHAI, these, you know, very weird kind of, you know, people in the back
room doing stuff that was very disconnected from Silicon Valley. Only now with the rise of large
language models and the sense that everybody has of like, oh shit, this is actually happening
and this is real, has the discourse around AI risk started to really intersect with society,
with tech, with economics, with politics. All these things suddenly matter now. That's
kind of been happening in a backroom for about 10 years.
That part of EA, so I was aligned with these ideas basically, like, 10 years ago,
before any of this stuff was really on a global stage. The thinking was,
hey, you know, you read these arguments.
You know, there's a very famous book by Nick Bostrom called Superintelligence,
where he sketches out a lot of the in-principle arguments of how rapid AI advancement
might cause some degree of existential risk.
And you read the arguments, you're like, yeah, you know, this kind of makes sense.
It's not a slam dunk.
It's not like it's obviously correct,
or that this will clearly happen.
But there's at least a risk that it will happen,
and nobody seems to be thinking about that risk,
besides this, you know, small cohort of weirdos in the East Bay or in Oxford.
And so people thought at that time, oh, yeah, you should donate some money to these people
because this seems like a very under-explored risk that right now society is ignoring.
Now, today, it's quite different.
Society is not ignoring that risk.
Now it's something that the White House is thinking about, which is kind of crazy, right?
Very recently, Paul Christiano, who was a very, very early EA,
he was the head of safety at OpenAI.
He left OpenAI.
Now he is at NIST.
NIST recently hired him as, like, the head of
AI safety. So now, at this point, you know, it's very firmly entrenched, at least in the
U.S. government, this concern that, hey, AI safety is a real thing. And it's not just
sort of AI bias or AI replacing jobs. It's also catastrophic risks that might
arise from the rapid advancement in AI.
Yeah. I think probably listeners are generally familiar with GiveWell, which is this charitable
organization that does a lot of research toward understanding: if you give
this one charity $1, how effectively will that $1 actually go into, like, improving people's
lives? And then, like, this is where, famously, they came down to, like, malaria nets.
If you actually want to, like, put your $1 in and save the most human lives, you would
just buy malaria nets for, like, kids in Africa to prevent them from getting malaria.
And, like, this was a very effective and rigorous process for understanding
how to allocate capital in the world of charitable giving.
But I think, as we are working on defining the landscape of all these different, like, tribes and
identities around the future here, I think, Haseeb, what you're also alluding to is that,
well, there's, like, this risk-off, low-risk, researched, and pragmatic capital allocation strategy in
the world of charity. But then there's also, like, okay, well, what about some, like, venture-bet, future,
high-impact things as well? So it's not just about, like, hey, current kids could be impacted by this,
but, like, we could also have future generations, and, like, not just, like, the next future generation,
but literally every single future generation. So this is how, like, the world of
effective altruism got interested in, like, AI risk, because, like, that's literally the whole thing.
It's, like, all of human society.
And so effective altruism is kind of taking not just, like, a pragmatic approach to charitable giving today, but, like, also taking, like, venture bets in charity, or venture bets in, like, effective altruism.
How would you feel about that?
Reflect on that for a second.
No, I think that's exactly right.
Most people, when they think about EA, or they hear these recommendations from EAs, they're immediately turned off.
There's a few reasons why people immediately get turned off.
One is that, like, well, you know, who are you to say
what's the most effective and what's not?
You know, it seems like you guys are kind of these big central planning type people.
And, you know, naturally people have an aversion to that.
And, you know, so one thing to understand about EA is that EA is making recommendations at the margin, right?
EA is sort of not assuming that most people will follow EA because they don't.
And that would be stupid to assume that everyone will do what you tell them to do.
So you always want to be thinking, what should more people be doing at the margin?
And you'll probably influence 1%, maybe 2% of people at the most.
If you're influencing everybody, like let's say you are the government, then okay, the way you allocate capital and the way you make recommendations should be very different.
But if you are saying, okay, I can basically influence about 1 to 2% of people.
What should 1 to 2% of more people do, given where dollars are already being allocated?
And the answer usually is, well, okay, you know, there's some obvious things like we should have schools.
We should have sanitation.
And so somebody might say, well, are you arguing that we should not have schools or not have
sanitation, and instead we should all donate to malaria bed nets? The answer is no, obviously not.
Given where the majority of dollars are being deployed today, where should the next dollar go?
That's the real question that EA is tasked with trying to answer. So given that backdrop,
now you might ask the question of like, well, you know, if you donate to an anti-malaria bed net,
okay, I guess that makes sense. You save somebody's life from dying of malaria. That's a very
cheap way to save somebody's life. But, you know, donating to AI risk, maybe AI is not a risk, right?
Maybe there's only a 10% chance that AI is a risk. In which case, you'd be wasting that
money. And again, you know, the way that EA frames this is that, you know, you're thinking more
about a portfolio of good as opposed to thinking, well, definitely this good will do this much and
this good will do that much. Most people, when they're thinking about, okay, I want to do one thing,
you know, I want to eat lunch. And if you're going to eat lunch, are you going to take a risk on,
like, you know, maybe this random restaurant is an amazing restaurant and I should try it out?
Or are you just like, look, I know that if I go to, you know, I don't know, Denny's, I'll at least
have a decent meal, right? I don't know, depending on how you feel about Denny's.
And most people are naturally very risk-averse when it comes to their own lives or the things that they do with their time.
But the idea is that when you have a large portfolio of giving, of all the giving in the world, you actually want there to be some people who are doing high-risk giving.
Because, you know, it's sort of high-risk, high-reward. If nobody's doing high-risk giving, nobody's doing stuff that looks kind of weird, that may not pay off or may not even matter, then you miss the times when it does matter, and it really, really matters a lot.
So the analogy that a lot of EA people give is, you know, imagine something like nuclear research.
If you were in the 1930s or the 1920s and you were thinking about nuclear research, you know, at the time, it was this relatively boring kind of scientific thing, and nobody really understood why it mattered until they saw the atom bomb and they saw the destructive power of nuclear energy.
And there was a time when this was considered to be very speculative, that it was even possible at this kind of scale to have this much
destructive damage from this technology.
And, you know, on the off chance that it turned out to be right, that the order of magnitude
estimates of this stuff was correct, then you would really want there to be a lot of research
into the safety of, you know, how to prevent nuclear proliferation, how to prevent mass nuclear
scale wars.
But, you know, people may be underinvested in that.
And as a result, you know, people in the, you know, 50s, 60s, 70s kind of lived under this specter
that maybe there would be only one or two more decades of human existence, and then the world
would just wipe itself out.
And, you know, for all we know, maybe we weren't that far from that happening.
Maybe we were kind of various times on the brink of massive amounts of human damage and catastrophic
risk from this proliferation of nukes.
And so the question is, is AI like this?
Is AI a similar technology?
And the answer might be, well, probably not.
Maybe AI only has like a 20% chance of that, 10% chance of that.
But even if it's a 20% or 10% chance, what an EA
would likely say is that, well, that is high enough that you probably want some people,
maybe 1 to 2% of people spending their time on this, because, you know, 1 to 2% of people
spending their time on something that might be 10% as bad as nukes, that's a pretty good trade.
That's a pretty good investment if you're thinking about the overall portfolio of how people
are spending their time and spending their resources.
So if you think about EA as, here is what I, as an EA, as this East Bay rationalist guy, decree
that everybody in the world should be doing, then I think you'll come away with a sense
that EAs are a bunch of self-righteous pricks, and they think they should run the world.
But if you come away with a sense that, oh, okay, EAs are advocating about at the margin how to
alter the portfolio of all the different things that human beings are working on to ameliorate
risk and increase the likelihood of human flourishing, then you might say, okay, that sounds like
a reasonable investment strategy.
You know, maybe I don't agree with it completely, but it's a lot more reasonable if
you think about it that way.
So, Haseeb, I want to bring Erik into this conversation, just to
kind of reflect on what you said and add to it, but I have just one question that kind of plagues me.
So we started this conversation talking about accelerationism versus decelerationism.
But the way we actually got into it was talking about EA and the rationale behind how EA
came to support and fund a lot of the AI safety initiatives.
And really, when I think of EA, it's really, like, utilitarianism or consequentialism
as a philosophy, kind of applied to giving.
That part to me sort of makes sense.
But it doesn't necessarily follow that EA must lead to investments in funding of
AI safety. Because if you are in the EA community and you believe in this sort of giving,
you could just as well make the case that in order to benefit the most amount of,
like the greatest number of future humans, say trillions of humans, we want to actually
invest in AI technology because we believe AI technology will build a utopia and cure diseases,
cure illness, solve the problem of death in like old age and all of these things.
And so you could easily make the case in the EA community under that philosophy that
no, we shouldn't be funding necessarily anything that would slow down AI. We should actually be funding
startups. We should be funding OpenAI. We should be funding, like, all the tech that goes into AI.
But it seems like the community just forked mainly in one direction, such that EA has become somewhat
associated with AI safety. I think, by the way, recently maybe some of the founders of EA,
Will MacAskill and others, have kind of walked back from that and been like, hey, you know,
EA is not necessarily AI safety. They're not one and the same. There's kind of some distance there.
I just throw that idea out there. And Erik, what are your reflections on this conversation
so far? Well, first I just want to add that not all long-termism is in the venture bet category,
right? It's also things like pandemic prevention or reducing nuclear proliferation, things that already
exist, already happened. And they were talking about pandemic prevention before COVID, right?
So they're really on to some things. The leap to long-termism can be broadly explained by this
idea that EA is about a kind of arbitrage, right? David, you described how, if one dollar can make a
bigger difference to people in Africa than it can to people in San Francisco,
well, maybe we should put our dollar toward people in Africa. And EA says, okay, because a person in Africa's
life is worth as much as a person in San Francisco's life, what about a person that doesn't exist
yet? What about a future person? That person's life is also worth as much as a current person's life.
And guess what? There's going to be trillions of future people, way more than exist today.
And thus, we should really care about the future. And there's a lot of people who got on board
with that. There's a lot of people who didn't get on board with that. And this was a schism in the EA community.
Some people are like, hey, I'm all about effective charity, but this long-termism stuff, that's a bit too
out there for me. And there's some people who go all the way. And even among the people who
go all the way, there's another schism, the one that Ryan just outlined.
Some people say, hey, it really depends on what you think about the future of AI.
Some people say the idea that you just said, Ryan, which is, if you're concerned about helping the most future people, you want to accelerate AI.
And these are people who are effective accelerationists.
This is why there is an additional schism.
It's people who are not on board with the AI safety agenda.
And one thing that's worth noting: you know, EA and e/acc are having the strongest conflict.
But, as Henry Kissinger would say, it's something like the tyranny of small differences.
Like, AI safety people and e/acc people agree on 99% of things in many cases.
They both believe that the people who are worried about misinformation or DEI or AI saying racist things
are not focused on the real issue.
They're not at the adult table of the important conversation.
Those people are misguided.
The two camps are not disagreeing with each other about that.
And in many ways, AI safety people and EA people are tech accelerationists.
They agree that nuclear regulation has been really bad.
They want to accelerate things like self-driving cars or longevity or other things that
e/acc people get on board with.
The one place in which they disagree is this field of AI.
They say AI, this time it's different.
We're not just introducing a new technology.
We're almost introducing, like, a new form of evolution, or a new species,
that one day is going to be cognitively way smarter than us, and thus is a threat not just in terms of fake news,
like social media, and not just to jobs, but to actual humanity, to, actually, the chance of us going extinct.
And thus, we should treat it differently.
And that is the core of the argument between effective accelerationists and effective altruists as it relates to AI safety.
Okay, okay.
So we're dealing with, like... this is like an argument
between, like, Catholics and Protestants, basically.
It's like you're all in the same religion,
but, like, you guys are arguing over these specifics here.
I'm going to dispute that.
I'm going to dispute that a little bit.
I think, I'd say, a lot of e/accs...
So first of all, I think it's probably
worth actually defining what e/acc is,
what it means as a movement.
Because I think it's also been, in a sense, co-opted
by what people want it to mean,
which is, like, accelerationism, right?
This idea that all technology is good
and all technology should be faster.
We should be super libertarian.
We should, you know, kind of let all the guardrails off and just let it vroom, vroom, right?
That's not actually what e/acc says, right?
If you read the e/acc manifesto from Beff Jezos, who's the guy who invented it, this
Google engineer who worked in quantum physics.
It's actually pretty fucking crazy, like what he actually literally says.
Like, I don't know, Erik, it's been a while since I've actually looked at that
manifesto, but there's stuff in there about how, like, yes, if AI ends up, you know,
totally conquering humanity, like, that's good.
We actually want to feed entropy.
We actually worship entropy.
Entropy is the thing, you know, the core value from which all good flows.
It's, like, actually very weirdly cult-like and kind of anti-humanist.
I think most people haven't actually read it, or they just kind of take it as a vibe.
And they're like, well, the vibe that I want it to be is, like, you know, basically Marc Andreessen.
I think that is what people imagine e/acc is.
But if you actually read the e/acc manifesto, it's a bunch of other weird shit that's adjacent to, like, the Marc Andreessen philosophy.
But it's not quite the same thing.
So I think it's sort of been typecast into this, like, Silicon Valley, you know, pro-tech libertarian energy, but that's not literally what e/acc says.
I don't know, Erik, if you would agree or disagree with that.
I do agree with it.
And I should say that that is my tribe, in some sense: people like Mike Solana, people like Balaji, people like Marc Andreessen.
And so that is where I come from.
And so I've been very sympathetic to this e/acc movement,
right, which to some degree says, hey, over the last decade, there have been a lot of anti-tech movements and sentiments, and people trying to use arguments to regulate tech in ways that have been very bad, whether it's energy or social media or a wide variety of things.
How about crypto?
Let's add that to the list.
Yes, exactly. And we're not going to let that happen here. And we're finally going to
stand up for ourselves, because they saw Zuckerberg and others apologize, and that didn't really get them
anywhere. It only conceded their sort of lack of moral credibility, that they were wrong,
and blah, blah, blah, and all these bad things. And so e/acc is coinciding with a desire
for tech to stand up for itself, to say, hey, we are actually good. We are actually going to
push back and we are going to defend ourselves. Now, I think this
brings up a really interesting point, because, yeah, Mike Solana, Balaji, and Marc Andreessen are not saying
that, you know, we're going to be pro-human, or sorry, post-human in some sense. They're saying
that technology is going to make human life way better for decades to come. And I think what
Beff was doing, and, you know, in some ways e/acc is critiqued as being about vibes. And the
benefit of being about vibes is you can't really be pinned down. You can't really be
critiqued on your philosophy. And I think that's partially because of what Haseeb just
identified, which is, if you take it to its logical conclusion, it's not something
that a lot of people want to get on board with. If you take sort of advanced technology to its
logical conclusion, like, what is the role for humans once we have super intelligence?
What is the, like... when we're, like, doing cyborg stuff, and then different speciation as a result?
So it is very weird, but also,
I think in some ways it's intellectually honest, of, like, that's where things are going.
And I couldn't pin down Solana or Balaji or Marc on, like, what they think about that.
But I think there is some truth to that.
But I also think it's hard to be pro-tech on a long enough time scale and still think that humans are the last sort of, you know, form of intelligence that is going to rule the world.
So I think it's intellectually honest.
I also think it is very weird.
So, Haseeb,
what's the problem with, like, e/acc plus some humanism, right?
It's like a more, you know, moderated version.
To be clear, yeah, there's nothing wrong with e/acc plus some
humanism.
I think it's much preferable to, like, raw e/acc as it was actually articulated in its genesis.
I think it's evolved that way.
The thing that's weird.
Yeah.
Well, I don't know that it's evolved so much as been co-opted.
I think it's more like it's been co-opted, right?
It's been co-opted.
The reality is that any time, you know,
you get a culture war.
You basically get the superimposition of what was once like a pretty, you know, esoteric
argument about like, will AIs take over the world?
Is it risky?
Is it not risky?
Is it good to have more?
Is it good to slow down?
This is like a pretty, you know, conceptual, theoretical argument that's been going on
in very obscure circles for a very long time.
And then basically in the span of about a year and a half, it has now become a culture war.
And once it becomes a culture war, then all of a sudden people get typecast, right?
So like, okay, if you're in EA, you're like the decelerationist.
That means that you hate technology.
And if you're e/acc, you love technology, and you love Silicon Valley, and you love Marc
Andreessen, and you love Peter Thiel.
And it's like, well, you know, not necessarily. Like, in reality, there's a panoply of
views.
There's people who arrive at different parts of the spectrum from many different places.
And there's people who say, like, well, look, I think the probability of AI going
badly is 5%, but the likelihood that it's actually better if it's open source is, like,
25%.
Or some people say, like, well, look, I think it's like 10%.
But I actually think that, you know, decelerationism is a bad idea, and it ends up, like,
just politically backfiring, and so we kind of have to live with the acceleration.
Like, in reality, there are EAs and there are e/accs, quote unquote, who have views anywhere
along the spectrum, but once it becomes a culture war, all that shit goes away.
And it's just like, oh, you're e/acc, oh, you're a decel, or you're EA, or whatever the fuck.
And people ultimately want a one-dimensional or sort of one-bit answer to how you feel about
what is probably one of the most important questions right now facing technologists or,
you know, technology regulation in America.
So, you know, if you ask EAs, what do you think we should do, you know,
regulatorily, about AI?
You'll get a lot of different answers.
You won't just get, well, we should shut it all down and, you know,
control all the GPUs, right?
Eliezer says that.
But Eliezer is, like, all the way at one end of the spectrum.
He's, like, you know, the platonic ideal of a decelerationist, right, in that sense.
But most people in EA, you know, there's, like, a multi-dimensional spectrum,
and people sit in different places on that spectrum.
So for myself, I was donating to AI risk, like, in 2014, 2015, way before, you know, large language models.
I mean, not way before large language models, but before any of this stuff was in the popular imagination.
And it was not very obvious at that time
what the specific policy prescriptions were going to be, or how this was going to interact with Silicon Valley.
At that time, the people who were working on AI risk were also Silicon Valley nerds.
It was the same team, right?
It was the same people talking about this stuff.
It's only very recently that now there's this sense
that, well, you know, build, baby, build, which is, like, the Silicon Valley mantra, is now
hitting this wall.
And, like, I think it's a little bit of a boogeyman as well, because the reality is that the
EA types, you know, they don't really have that much power.
Like, they're not actually stopping anyone from doing anything at the moment.
Like, the real people who are stopping AIs are like the EU or like China, right, who are not
EAs, right?
That's not what's motivating them.
It's like data privacy laws or just, you know, deference to the CCP, right?
even the executive order that Biden passed said that, okay, if you're above a certain training
run size, you have to report that to the government.
That's it.
It doesn't say you have to stop.
Doesn't say you can't do it.
It doesn't say you have to like, you know, follow these rules or whatever.
It just says you have to let us know that it's happening. Which, to me, like, okay,
that's a pretty far cry from, you know, oh, deceleration, blah, blah, blah.
It's really just, like, an information-gathering exercise, which, I don't know, like, probably
they wouldn't have that much trouble gathering that information anyway, given the amount of
GPUs that would have to be cornered by anybody running a training run of that size.
So I think there is a lot of reactivity right now.
And that's what's causing people to believe, oh, you're way on this side and I'm way
on this side, and you kind of can't take any nuanced view in the middle.
That's what happens in a culture war.
First, let me just say that Eliezer has also tweeted things like, on abortion,
we should be able to abort babies that are up to 18 months, or, you should be able to
leave your partner, your wife or husband, if you find someone 25% better.
Like, he is, you mentioned the platonic ideal,
he takes utilitarianism to the extreme.
And so, like, in the same way that Beff's manifesto takes that idea to the extreme, there
are people in EA, SBF of course being one of them, who take, you know, some of the ideas
within EA to such extremes that they're no longer, you know, sort of palatable.
I would take the other side a little bit on that. I do think EA has a lot of power, because
I think that a lot of the people who make up OpenAI,
who make up Anthropic,
who make up some of these biggest labs,
were influenced by EA or are themselves EAs.
They were.
Yes, exactly.
And so the ideas, they sort of came up in the EA soup.
And right now there's not abrasive regulation, as you mentioned.
You know, people can still do a lot.
But we're at a time where these regulations are about to be defined,
the regulatory era of AI, over the next year, et cetera.
And, you know, they're inviting people into the conversation who have
EA influence, EA ideas.
And so I do think the AI safety community will have an impact on the regulatory regime.
But, okay, you raise a good point, which is kind of ironic, right? Which is that if you
look at OpenAI, if you look at Anthropic, if you look at the companies that are actually
accelerating, they're mostly EAs, right? Which, like, kind of fucks up the narrative, which is,
like, oh, the EAs are ruining everything.
They're slowing everyone down, and, like, it's the accelerationists who want to go fast.
The accelerationists are mostly VCs.
They're mostly not building anything.
They're just, like, on the sidelines cheerleading for this imaginary team that they think is in the driver's seat, when in reality,
it's fucking EA people who were even thinking that this was possible in the first place, right?
It was all the VCs who were not investing in any of this stuff because they thought it was so impractical and it wasn't worth it.
It's the EAs who believed that AI risk was a real thing, because AI was going to become really powerful really quickly, who created all this shit.
So that's why I think, like, again, this
culture war typecasting that has happened is just ahistorical.
It just doesn't make sense given who the actual players involved are.
Most of the people who are regulating AI... like, yes, we just talked about Paul Christiano,
who is now somewhat influential, presumably, at NIST, which is, again, a relatively minor
kind of scientific organization.
He's not a congressperson, right?
That's who actually writes the laws: congresspeople.
If you listen to Congresspeople, what do they care about?
They care about jobs.
They care about, you know, bias.
They care about, oh, you're squelching conservative voices.
that's what is very likely to end up on the regulatory regime or the legislative regime.
You know, I'd love if Paul Christiana got, you know, something in edgewise into that conversation.
And maybe he will.
But so far, it doesn't seem like that's predominantly what we're hearing from the people who actually make the laws.
If there's one thing that we're familiar with in the crypto space,
it's tribes, and certain, like, cultural leaders in respective tribes becoming the hardliners for that tribe, right?
They really teach the tribe, who, you know, vibe-associate with the cultural leader, how to be
hardliners. And then, like, the people that listen to, like, the high priests of this particular
alignment camp absorb some, not all, of the message, right? Like, oh, yeah, most of this stuff
feels good to me, right? We see this in literally every single crypto tribe that has ever been
birthed, and we all have, like, at least one respective hardliner. And that's what, like, kind of
more or less creates the tribe. And then you also smash the particle accelerator of social media into that, and, like,
everything just gets juiced.
Especially this last cycle as like Bitcoin ETFs have gotten approved and like BlackRock
is starting to tokenize securities on Ethereum, I've very quickly realized that like the realm
of crypto Twitter is actually a significant minority of like what actually matters in the space.
And like if you're in crypto Twitter, you think crypto Twitter's massive.
You think it's a really big deal because you are in it.
And maybe it actually is.
But, like, I remember hearing this line from Sam Harris when he was critiquing Elon Musk's takeover of Twitter.
Sam Harris was saying, like, Elon Musk is disconnected from reality because he's on Twitter all of the time.
He thinks the universe is Twitter.
And this is where these tribes fight, right?
This is where the hardliners, you know, chant their chants and post out their tweets, like, get their tribe on board.
And then everyone else thinks that this is the universe.
And so I guess, like, zooming out, Erik, like, you talk to all the technologists
of Silicon Valley.
Like, can you, like, help us understand the scope of this?
Like, is this, like, a broad, encompassing blanket that is over all of Silicon Valley
and, like, government, and, like, everyone is focused on this conversation?
Or is this, like, kind of, like, a side quest for, like, the VCs and the builders,
and, like, it's actually not really defining the landscape that much?
Like, how far along are we on these, like, continuums?
I think the answer is both.
Similarly, you know, people say Twitter is not the real world. Most people in the world are not on Twitter.
That said, the people who are influential in the world who influence other people's views,
whether it's journalists or politicians or people on TV or anyone who's got a mouthpiece to the rest of the
world are on Twitter. Not only are they on Twitter, they get their ideas from Twitter. They battle it
out in the marketplace of ideas, or the culture war. And so I disagree with
Sam Harris. I do think Twitter is the source where a lot of ideas come from, and a lot of the
sort of ideas get litigated on Twitter. And similarly, most of Silicon Valley is just doing their jobs,
not on Twitter, not focused on this. But where do things get litigated? Often on Twitter, right?
Like, you know, as Haseeb was just saying, EA has not had a big impact yet. We alluded to this off camera, but
there's a world where Sam Altman did get fired, and there was an EA coup, and maybe
that was overblown, but there just seemed to be some EA concerns from the board, and maybe
OpenAI would have taken a much more sort of decelerationist stance, right?
There was this movement to pause AI, you know, about a year ago, that people didn't seem to
follow, but maybe they would have.
Go ahead.
Yeah, I want to push back against this, because, like, this again was the frame
of the OpenAI coup, basically in, like, the couple days when it was all fog of war and nobody
knew what was actually going on. There's now been a ton of reporting on what actually happened in
the OpenAI coup. And it looks like just a regular old story, you know, like, Sam Altman was, like, kind
of telling people different things. And he was, like, raising money for stuff on the side and not telling
the board, and blah, blah, blah. Like, actually, there was not, oh, there's Q-Star and it's a secret
algorithm that's going to create AI risk. And, you know, people saw that, like, oh my God, you know,
Sam Altman wants to unleash it on the world and he's not being safe enough. That was not
what happened.
Like, we know now that's not what happened, right?
What happened was just: ordinary board loses trust in CEO, CEO and board fight.
Board is, like, very amateurish, ousts the CEO, hires somebody.
You know, it was all just, like, terrible board management.
Wow, really?
Because, like, to this day, I have always thought that it was an AI safety thing.
No, no, no, there's been so much reporting right now.
Like, no, no, no, go read it.
The New York Times has done a ton of exposés about this now.
Like, there was almost nothing to do with AI safety in the reason why OpenAI did this.
Because at the end of the day, like, the whole nonprofit structure of OpenAI was not the reason why they fired Sam, right?
The reason why they fired Sam was that the board members lost trust in Sam.
Sam reportedly was, like, telling different people on the board different things.
The board was, like, getting back together, and they were very uncomfortable with the way that Sam was managing things.
And he was kind of controlling people.
And people talked a lot about the fact that Sam is a very manipulative guy.
And he's very, very effective at what he does.
Part of what makes him a very good leader is that he has this reality
distortion field, and he makes people feel how he wants them to feel when he's around them.
That's the real story.
If you try to tell the story about, well, the OpenAI thing was an EA coup, then you have a lot
more explaining to do about, like, okay, what was the EA thing that happened that precipitated
all this?
Perception is reality, right?
And it was framed.
Yeah, exactly.
No, it's true.
It's true.
It was framed in that way by, like, the chorus, you know, just the people on Twitter
talking about it, who wanted this to be the story.
It's the same thing with, like, EA versus e/acc, right?
People want this to be the story,
but the story is almost always more complicated.
And the reality of OpenAI was that it was more complicated than that.
Yes.
But if it had been completed, this coup that never was...
Okay.
It would be framed as an EA coup, right?
And so...
Yes.
Yes.
Which... so let me put it a different way.
You're saying, hey, the people who actually write the laws, they don't care about AI safety.
They're not AI safety people.
They care about jobs and they care about misinformation.
And I agree with that.
And this gets to the sort of Baptists-and-bootleggers dichotomy that Marc Andreessen often brings up,
which is this idea that there are people who are purists.
They actually believe in things like AI safety.
And they really care about the issues.
And there are bootleggers, who are sort of either grifters or opportunists, who notice something
that is in their self-interest in taking the same position as the Baptists,
and they use the language of the Baptists to justify the advancement of their own self-interest.
And similarly, right now there's a schism within tech.
There was a survey that went out to 10,000 researchers, or thousands of researchers,
and I think 10,000 ML researchers who work at these big labs expressed some concern over, or sympathy with, AI safety.
And so if we as an industry don't present a united front,
in our belief of these technologies, it gives ammo to people who want to regulate them, right?
And in the same way that within crypto, if you have, you know, half the community talking about,
you know, sort of why crypto is bad, and regulators see that, they might say, oh, even crypto people
think it's bad, or, Web3 people think it's bad, thus we need to regulate it.
So there's a concern, in the same way as with social media.
Social media people apologized all the time and expressed their concerns,
and it gave more ammo for regulation.
They have the concern that the same thing will happen here.
The counter of that, which I think is a good counter,
is no, it's not about giving them ammo.
It's about self-regulating.
If we as an industry can self-regulate ourselves,
maybe that will prevent other people from coming in
because we can prevent some of the damages.
And crypto talks about this a lot too,
of, like, getting rid of scams early
so that doesn't give the whole industry a bad name.
Well, in reality, that is what's happening, right?
It's, who's doing the AI safety research?
Who's actually trying to be responsible with this stuff?
The answer is OpenAI and Anthropic.
And, like, the guys who are actually leading the front are self-regulating, because the
regulators have no clue what the fuck is going on.
They have no idea how to productively give input into how to guide
this technology forward.
So the reality is that, you know, the concern about regulation, I mean, I
don't know anybody who doesn't share the concern over bad regulation.
Bad regulation is bad, right?
Clearly.
But it's also obviously true, even among
the effective accelerationists, that AI is really powerful.
And AI is going to massively change the playing field around who controls what.
And right now, you know, they're kind of toys.
Like, okay, yeah, you can spam people, or you can, you know, spear-phish much more efficiently
using these tools, or they can tell you how to build a bomb. Which is, like, okay, if you can't
figure out from Google how to build a bomb, like, you know, good luck using a large language model
to get there.
But like there's a lot more that's coming.
And it's pretty obvious to everybody.
There's a lot more that's coming.
And so I think, you know, whether
you think the right way to do this is self-regulation or government regulation depends on your
confidence in the right laws or right frameworks arising. And Silicon Valley obviously has,
you know, a lot of scar tissue in believing that, you know, governments fuck it up. Governments are
terrible, you know, everybody hates us. And so they're going to create these draconian laws
that are going to be about this, you know, power struggle with government and with special interests
or whatever. And that's going to be the regulatory regime that we enter into. But nobody goes
around saying, like, you know, all the stuff that OpenAI is doing around safety, or this
research that Anthropic is doing, is stupid, it's worthless, it's slowing us down, let's throw it away,
right? If effective accelerationists said that, then I would say, okay, it sounds like you guys are
very consistent in believing that AI is an unalloyed good. But they don't say that. They don't
say, like, well, why are these people slowing themselves down? Instead, they say, well, you know,
government bad, these people good, which like, okay, I understand where that's coming from.
And I don't think EAs are pro-government, right? Which I think is, again, another way that this
dichotomy kind of breaks down.
EAs in general think that governments are terrible allocators of goods, right?
I mean, you can just see it historically.
Governments are incredibly inefficient.
And so I don't think EAs look at that and say,
well, great, government is the natural party to provide this regulatory regime,
because they're all brilliant and we love everything that governments do.
Most EAs are pretty libertarian on almost every single front.
They generally believe that governments are incredibly wasteful.
But when it comes to AI, the concern is that, well, is self-regulation enough
when you have a tool that is incredibly powerful, right?
So if you have self-regulation around, you know, nuclear weapons, how well is that going to do in an environment where nuclear weapons are extremely powerful and extremely damaging?
Well, in practice, we've kind of learned through experience the only way to really control nuclear weapons is with the monopoly on violence.
It's basically saying, yo, Iran, if you develop nuclear weapons, we're going to come at you.
So, you know, North Korea, if you develop nuclear weapons, we're going to come at you, because we don't want these to proliferate.
We don't want this to just be a tool in everybody's hands.
We actually think that's a worse equilibrium to have everybody be really powerful.
But this is kind of what EAC implies, is that the best equilibrium for AI is that everyone has one and AIs keep other AIs in check, and it's sort of like we all have nukes.
Maybe that's true.
But there's at least a real possibility that that's not true, that it's more like nukes and it's not like, you know, I don't know, you know, books, right?
Which is a thing, okay, more education is good for everyone.
So anyway, sorry, Eric, I know you want to jump in.
You were correct in saying that we are accelerating and EAC people are happy with the status quo.
they're just worried that things will change because, as you mentioned, the irony,
people who are sympathetic with AI safety are the ones driving the acceleration.
People work at these labs.
But there are people who are even further on the side of AI safety who are noticing that
contradiction, saying, hey, you claim to be about AI safety, but you're actually accelerating.
People like Eliezer, people like Laurent, very smart, brilliant people who are saying, hey,
we actually need to slow down.
Like, we need to pause AI.
We need to put in an off switch.
We need to have some sort of more extreme intervention.
And I think the EAC people are responding to that concern.
Not that the status quo is a problem,
but that a very significant change could be a problem.
So that's what they're worried about.
Mantle, formerly known as BitDAO, is the first DAO-led Web3 ecosystem,
all built on top of Mantle's first core product,
the Mantle Network, a brand new high-performance Ethereum layer 2,
built using the OP Stack,
but uses EigenLayer's data availability solution instead of the expensive Ethereum layer 1.
Not only does this reduce Mantle Network's gas fees by 80%,
but it also reduces gas fee volatility, providing a more stable foundation for Mantle's applications.
The Mantle treasury is one of the biggest DAO-owned treasuries,
which is seeding an ecosystem of projects from all around the Web3 space for Mantle.
Mantle already has sub-communities from around Web3 onboarded, like Game7 for Web3 gaming,
and Bybit for TVL, liquidity, and on-ramps.
So if you want to build on the Mantle Network,
Mantle is offering a grants program that provides milestone-based funding to promising projects
that help expand, secure, and decentralize Mantle.
If you want to get started working with the first DAO-led layer 2 ecosystem, check out Mantle at
mantle.xyz and follow them on Twitter at 0xMantle.
Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world,
driving real-world use cases like mobile payments and mobile DeFi. And with Opera MiniPay
as one of the fastest growing Web3 wallets, Celo is seeing a meteoric rise, with over
300 million transactions and 1.5 million monthly active addresses.
And now, Celo is looking to come home to Ethereum as a layer 2.
Optimism, Polygon, Matter Labs, and Arbitrum have all thrown their hats in the ring for the Celo layer 2 to build upon their stacks.
Why the competition?
The Celo layer 2 will bring huge advantages, like a decentralized sequencer, off-chain data availability secured by Ethereum validators, and one-block finality.
What does that all mean for you?
With the Celo layer 2, gas fees will stay low, and you can even pay for gas natively using ERC-20 tokens, sending
crypto to phone numbers across wallets using SocialConnect. But Celo is a community-governed protocol. This
means that Celo needs you to weigh in and make your voice heard. Join the conversation in the Celo forums,
follow Celo on Twitter, and visit celo.org to shape the future of Ethereum. Launching a token?
Don't let complex legal and tax issues slow you down. Toku provides specialized support to optimize
your launch and ensure that you as a founder, your team, and your investors get the most tax-efficient
outcomes. The Toku team understands the crypto space inside and out and will ensure your token
launch is fully compliant while maximizing tax efficiency. Toku can connect you with the best
attorneys if you need them, to make sure that you have the best advice, and Toku can help to optimize
your taxes so you pay the least possible amount of taxes while still maintaining legal
compliance. With Toku's guidance, you can concentrate on building your company while Toku handles
the logistics. Token launches don't have to be complicated. Talk to Toku today to get a free
initial token valuation. Welcome to Consensus 2024. Consensus is one of the biggest hubs for
all things crypto, blockchain, and Web3. This May, Consensus is celebrating
10 years of decentralizing the future.
Going to Consensus means being in good company
with over 15,000 attendees from over 100 countries.
You have the opportunity to learn directly
from the architects and advocates representing Bitcoin,
Ethereum, and Solana, along with the teams
from popular layer 2s like Mantle, Arbitrum,
Optimism, Base, and others.
But Consensus is more than just discussions and presentations.
Badge holders get access to witness
10 professional Karate Combat fights
and one unprofessional one between myself and Nic Carter.
For the visionaries, Pitch Fest is your spotlight,
presenting a stage for the most
promising early-stage Web3 companies.
And for the devs, the hackathon offers a unique chance to build your next project or take your existing project to the next level.
Immerse yourself in hundreds of side events and hacker houses scattered around Austin.
Register for Consensus 2024 today and save 20% with code BANKLESS.
Yeah, yeah, I think that draws on something that we see on both the extremes, and something that
Eliezer Yudkowsky and, like, Beff Jezos might also agree on.
It's like, there's another, more cynical interpretation of the stuff that the EA people, like
the Sam Altmans, are doing, and the reason they're spending time in D.C. And that interpretation is
regulatory capture. So they actually want to hit the accelerator on AI in general, but they're also
reasonable people, and they're happy to work with the government in order to come up with
sensible regulation and partner with them, because we're the educated ones, we're building the models.
And so, yeah, let's team up with the regulators and the governments around the world in order
to kind of create the right rules for this new and emerging industry.
And oh, by the way, this would be, I think, the Marc Andreessen critique.
By the way, we are the recipients of, like, the value at the end of the day.
We have cemented ourselves.
We've created a moat for ourselves and our own businesses.
And so there's a criticism that comes from Eliezer in doing this.
You're saying, well, this is hypocritical.
If you're really worried about AI safety, you would, like, cease immediately.
You would stop, like, doing this until you do a Manhattan project and go figure it out.
And then there's a criticism from the Beff Jezos side of things with Sam Altman.
And Beff would probably be someone who says, like, this is clearly regulatory capture.
They're doing it in broad daylight, right?
The only solution for this is decentralized AI and everyone needs a model and an AI genie in their own home running on their own hardware, right?
So this is a critique that's coming from both sides.
And I'm wondering, Haseeb, like, what about that more cynical interpretation of what's going on?
And in fact, wouldn't that be like actually the game theory of what companies want to do?
They want to go build monopolies.
And so, I mean, that's a sensible business strategy.
Sam Altman's a smart individual.
He's great at this, right?
Yeah.
Okay.
So first of all, I'd say I find this a little ironic because, again, during the whole OpenAI ousting drama,
Sam Altman was typecast as, like, the Promethean, you know, like the EAC guy who was trying to make AI go faster.
And now all of a sudden, oh, no, no, actually, never mind.
He's not the savior.
He's actually the, he's the boogeyman.
He's actually going out there and trying to, you know, he's the JP Morgan,
trying to corner the steel trade or whatever.
I don't know.
To me, this is not a critique.
This is an innuendo, right?
What is Sam Altman doing?
Let's step away from AI.
Let's talk about crypto, because I think, you know, people in this podcast understand crypto more easily.
Let's pretend that instead of Sam Altman, it's Brian Armstrong, okay?
He goes to D.C.
And he says, hey, I'm going to engage with D.C.
and try to come up with, you know, sensible legislation.
Would you be like, oh my God, he's trying to do regulatory capture,
what is he doing? Why is he consorting with the enemy?
So not Brian Armstrong, but SBF, that's exactly what he was doing.
Yes, yes. Okay, so put SBF aside. Let's say Brian Armstrong.
Brian Armstrong does go to D.C. He does do all those things. Do you say the same thing is happening now it's just Brian Armstrong instead of SBF?
No, but I would be cautious. Generally, I'm supportive. Why not? Why not? Why not?
Just because, I mean, for me, is this a...
Be cautious. What do you mean be cautious? What do you mean be cautious?
I think it's both.
He's both fighting for regulatory arbitrage,
maybe implicitly, but he's also fighting for the industry as a whole.
Yes, regulatory capture, yes.
But I think he's motivated by all the right reasons,
and the net effect of that is regulatory capture.
Sure, okay, so it sounds like what you're saying is,
hold up a sec, let's see what he actually does.
Yes.
And then judge him based on the consequences of the legislation that's actually proposed.
Yes, yes.
Okay, that sounds very, very
reasonable, right? That sounds very reasonable. Okay. What is the legislation that Sam Altman has proposed?
I actually haven't seen anything. Nobody fucking knows. We don't know. The answer is we don't know. So it's all innuendo, right? This is people just making up a fucking story because it sounds good. The reality is, if you're Sam Altman and you're not going to D.C. and you're ignoring people in the White House, you're a fucking idiot. You're an idiot whether you're fighting for the industry or whether you're fighting for regulatory capture. Like, either way, if you just ignore them, you're a fucking moron. So until we see it, what is the actual legislation that he's advocating?
We don't know.
And if you are Sam Altman and you literally just ignore D.C., you would definitely be
doing the worst thing possible, right?
I would much rather Sam Altman be there than whoever else is going to be there instead,
you know, some guy from, I don't know, fucking Google or something.
I would much rather Sam Altman be there.
I think this is a fair point.
I want to hear what Eric says on this, but it is a fair point.
Actually, it was kind of like a critique that I had of like Brian Armstrong and Coinbase
of like not engaging in D.C. earlier and sort of letting a vacuum be present so that
people like SBF could actually engage. The moment bankless in general, the moment David
and I, kind of, like, our spidey senses started tingling with SBF, was when the draft of some
proposed legislation that he was responsible for actually came out. And it felt very much like a rug
pull, a ladder pull-up, on all DeFi front-end interfaces. And we, like, brought him to task and said, hey,
SBF, this is not, like, crypto values. Like, what are you doing? You're essentially
creating some legislation that would force kind of compliance on this space and
promote your centralized exchange. And so a critique of, like, Brian during that time was, like,
I wish he had been there. Because if you're saying Sam Altman could be good,
we don't know, it's kind of like, you know, Schrodinger's Sam Altman. We'll see with the legislation
that pops out on the other side. I guess the case for that would be, if he doesn't do it,
then some more nefarious version of a Sam Altman
would be in D.C.,
engaging those regulators and engaging in regulatory capture.
So I suppose we don't know until the legislation is out.
What do you think about this, Eric?
I agree with Haseeb's framing
that Sam has to get involved in, yeah, let's wait and see
what the regulation looks like.
I would like to see AI safety people
more strongly disown the AI ethics people,
because they often get conflated, right?
And because they have a sort of coincidence of wants, motivated by, you know, disparate desires.
Here are the AI ethics people.
I don't think we've defined that well enough, Eric.
AI ethics people are people who are worried about misinformation, who are worried about
sort of inequality about certain people or certain voices being represented by the AI.
Is this like AI woke people?
Yes, that's exactly it.
And these are the concerns that, along with jobs and other things,
are things that Congresspeople or regulators are more sympathetic to, because they don't quite
understand the AI safety arguments just yet, or it's a bit too abstract. And by not disowning
those sorts of concerns, they sort of implicitly accept them or give them more power. And I would
like to see that disowning. And this is why we see things like Google Gemini being super woke, or even
ChatGPT earlier. And these things matter.
And so that's something that I would like to see.
One thing I want to return to from a little bit earlier is the framing of the culture war.
And when we have a culture war, things get kind of dumbed down.
And people here typically associate culture war with bad, right?
And when they think of a lack of culture war, they think of, oh, a great marketplace of ideas
where people are being very sophisticated and presenting good arguments.
And this, to EA's credit, is often what EA is.
In many ways, it's a very, you know, intelligent conversation, trying to get
at truth in some ways. Now, I would also posit that the lack of a culture war is not just
this great marketplace of ideas. It can also be something like North Korea, something
where there's a lack of moral or intellectual diversity, where you just have to hold
this one view. That's the defense that some people give.
Is this a pro-culture war argument?
Yes, yes. It's a pro-polarization argument. And even in EA, I saw this blog post,
we can link to it, where someone was saying that EA needs more polarization.
And it actually affects their giving.
Like, EA gave, or, you know, GiveWell gave, like, $200 million to criminal justice reform or something.
And this guy was arguing that it actually made the problem worse.
And why didn't people come in and measure this?
Because they felt it was too controversial to go against it.
And this is a critique that some people give of EA, which is, as much as they want to claim to be rational, first principles, you would think that that would lead to a diversity of
perspectives. And in some cases, it does, on, you know, issues like long-termism and specific
implementation details. But it's worth noting that EA is Democrat, that there are very few Trump-supporting
EAs. Now, why is it possible that everyone in EA from their first principles is
just like, Trump bad, Biden better? That seems very unlikely. It seems that EA is also prone to
tribal thinking in ways that other communities are. And that's not a knock on EA, but it's
the reason why culture war in this case might be good. I'll also posit that EAC is a way for Silicon
Valley to implicitly be right-wing, implicitly be conservative. And you notice that EAC is not only
talking about issues like, you know, advancing AI or technology, but also wading into other
culture war issues. It's much more masculine. It's much more alpha. It's, you know, concerned with
birth rates, right? And so I think it's good for there to be more polarization in Silicon
Valley, and more diversity as it relates to intellectual issues, as it relates to politics,
because for so long we've had a lack of that. And that's prevented really important
conversations on things like, you know, criminal justice, for example, or things
where there are taboos. And EA and rationality are supposed to get rid of taboos,
and in some ways they haven't done that. So I'm curious how you would react to the characterization
of EA as Democrat-affiliated or left-wing and not having enough sort of intellectual and
moral diversity. I think you raised good points. And I should probably reiterate that, you know,
I've been affiliated with the EA philosophy for, you know, almost 10 years now.
I wrote a post fairly recently kind of critiquing EA's, I think I titled the post,
The Unreasonable Ineffectiveness of Effective Altruists.
And I think what I've found over time, you know, coming into the crypto space, I kind of thought
that there would be, you know, when I first got into EA, EA was like, you know, all these kids
who were really, really smart and, you know, made a lot of money and they seemed to be extremely
smart and fastidious and hardworking.
And I thought, wow, these people are going to be super successful.
I'm a nobody. I hope I can be successful too someday.
And then I found my way to crypto and started building stuff.
And I knew some EAs who'd gotten into crypto.
One of those EAs was this kid named Sam, who I first met at EA Global, which is like this
big EA conference in San Francisco in 2015.
He was still working at Jane Street at that time.
And I met him in a hallway and we chatted for like five minutes just about blah, blah,
blah, yeah, whatever.
And, you know, whatever.
I didn't think much of it.
He was just, you know, an easygoing guy, curly hair, shorts.
Little did I know, little did I know.
I'd come across him again much later
when we were both doing very different things
in the crypto industry.
Neither of us were in crypto at that time.
And so I kind of assumed
that there would be a lot of people
who were really successful in crypto who were EA's.
And what I would later discover
many years after being a VC
was that that was super not true.
There was basically only one person
who was a real EA,
or like a kind of central EA,
who was very
successful in crypto, and that was Sam. And I didn't know Sam super well. We chatted maybe three,
four times over the years that we'd interact with each other. But obviously, you know, we all
know the story of what happened with Sam. And he, you know, single-handedly kind of redefined EA for
most people as being this kind of really corrupted, kind of corrosive, you know, group-thinky,
kind of go along with the flow, very left-leaning tribe, right? Essentially, that's what most people think
of EA as. Well, now I'd just like to add a little bit more onto that: sociopathic comes to mind, because it was, like, utilitarianism to its
nth degree, like, not considering any local humans' emotions or anything like that.
Right, right.
And so there are these famous stories about, you know, would you bet the entire universe on a 51-49
coin flip?
And he's like, yes, I do it infinitely many times.
And so just almost like, you know, autistic chasing of, or go ahead, Eric.
It's worth noting:
he said that in a podcast, and it wasn't like there was, you know, everyone coming down on
him, criticizing him. He was still a god. And so it's crazy, the difference between how SBF was
treated before we found out he was doing the fraud, even though he was saying some of these things.
He wasn't saying fraud, but he was saying extreme utilitarian ideas in broad daylight. And so
it's just worth noting. Yeah, exactly. And so it is something that we actually talked
about, we alluded to briefly earlier, about extinction risk, right? And, like, you know, what is
the importance of avoiding extinction risk?
And here's the reason why SBF's example is wrong.
So a classical thing in EA is this idea of risk neutrality, which is this idea that, okay,
you know, most people are risk-averse.
They don't want to take risks.
And so it's like, okay, if I can definitely save a life by, you know, going to, I don't
know, a food drive and donating some food and I will definitely save a life by doing that,
but it's very, very expensive.
Or I can, like, maybe probabilistically save a life by, you know, investing into water
filtration systems and maybe somewhere down the road that's going to result in saving a life.
Most people are like, well, I want to know for sure that I saved a
life. And EA kind of advocates this risk neutrality, which is the idea that, look, in expectation,
because, you know, there's millions of people doing millions and millions of things trying to save
lives. All you really care about is on average, how many lives do you save? And in the portfolio
of all the things people are doing to save lives, you should just increase the expected value and not
worry so much about whether your coin flip turns out heads or tails, right? And so, you know,
SBF kind of took this idea to an extreme and said, okay, well, we should gamble the whole fucking
world on, you know, any plus-EV coin flip. Great.
Okay, flip the coin as many times as you want.
It's free money.
Now, this is wrong.
Okay, this is actually just wrong.
This is wrong according to this mathematical rule called the Kelly Criterion,
which basically defines the optimal level of risk-taking you should make in order to
maximize the growth rate of your bankroll.
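(Editor's note: the textbook form of the rule being described, which isn't stated explicitly on the show, is the Kelly fraction. For a bet paying b-to-1 with win probability p:)

```latex
% Kelly fraction: the share of bankroll to stake on a bet paying b-to-1
% with win probability p (and loss probability q = 1 - p).
f^{*} = \frac{bp - q}{b} = p - \frac{q}{b}, \qquad q = 1 - p
% For an even-money bet (b = 1), this reduces to f* = 2p - 1,
% so a 60/40 edge says stake 20% of the bankroll, never 100%.
```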
I used to be a professional poker player before I ever got into tech or crypto or anything.
And the Kelly criterion is a concept that everybody in poker knows: if you want to make
the most money, you follow the Kelly criterion.
You don't bet your entire bankroll on, oh, okay, I've got like, you know, 60% to win this hand if we go all in.
So great, I'm going to bet my entire bankroll on it because that's the maximum expected value.
That's wrong.
That is wrong to do that.
It is wrong to do that because when you go to zero, right, if there's a real existential risk, even if the risk is low, like, you can never come back from zero.
As long as you still have some bankroll, you can make it all back.
But if you go to zero, you cannot make it all back anymore.
Your growth rate is now zero forever.
And so the Kelly criterion tells you
you should maximize growth rate as opposed to maximizing expected value. And you maximize growth
rate by following the Kelly criterion, right? So Sam was wrong in his elocution of that,
but people kind of get stuck on this idea that, oh, well, that's what EAs believe.
EAs believe that you should bet the universe on every tiny little edge and, you know, every
small piece of expected value. So again, it's like this, it seems like the EA worldview
just completely diverges from what normal people think of as common sense morality. And I think
And I think, you know, if you explain them clearly
enough, it's like, actually, no, that actually does sound pretty reasonable. Yeah, okay,
if you're playing with a technology that might make you go to zero, okay, well,
you know, the one rule of gambling is that you never fuck with going to zero, right? You never
fuck with going broke. You always leave some money on the side that if you end up losing the
hand, even if you're all in, aces versus kings, you still, there's a chance you lose when
you go all in aces against kings, right? And so, okay, that 10% of the time that you wipe out,
you want to make sure you've got some money to start grinding it back.
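(Editor's note: a minimal simulation, not from the episode, of the poker point above. The 60/40 even-money coin, the round count, and the trial count are all invented parameters for illustration.)

```python
# Contrast "bet everything, maximize EV" with betting the Kelly fraction
# on a coin that wins 60% of the time and pays even money.
import random

def simulate(fraction, p=0.6, rounds=200, bankroll=1.0):
    """Bet `fraction` of the current bankroll each round; a win pays 1:1."""
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake if random.random() < p else -stake
    return bankroll

random.seed(0)
trials = 10_000
kelly = 2 * 0.6 - 1  # f* = 2p - 1 = 0.2 for an even-money bet

full_send = [simulate(1.0) for _ in range(trials)]    # max expected value
kelly_run = [simulate(kelly) for _ in range(trials)]  # max growth rate

# Betting the whole bankroll is ruined the first time a round is lost,
# so virtually every run ends at zero despite the huge expected value.
print("ruined (full bankroll):", sum(b == 0 for b in full_send) / trials)
print("median wealth (full bankroll):", sorted(full_send)[trials // 2])
print("median wealth (Kelly):", sorted(kelly_run)[trials // 2])
```

The max-EV bettor has the higher average across trials, carried by a few astronomical wins, but the median run goes broke almost immediately, which is exactly the aces-versus-kings point.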
And in some sense, that is the core idea behind P-Doom. That's the probability
that you think that AIs are in some way an existential risk to humanity.
So say your P-Doom is, like, you know, 5%.
That means to most people like, oh, okay, well, then why worry?
Like, obviously, this won't happen.
And so, you know, what's the chance that a 1 in 20 thing is going to happen?
Why are you guys freaking out about this?
But if somebody says, you know, I don't know what my P-Doom is, but it's probably something
on the order of, like, 5, 10%,
that means in the modal outcome,
nobody ever has to worry about any of this.
None of this is going to matter.
Most of the time you're going to walk through the world
and AIs are just going to be, you know,
sex robots and, you know,
they're going to help you with your taxes and whatever,
but, you know,
none of them are really going to try to kill humanity, right?
Most people in EA who are talking about this
believe that.
They believe that the modal outcome is that nothing bad will happen
or nothing bad on a catastrophic scale will happen,
I should clarify.
But they say,
even if that's true,
that 5% of the,
time, you don't risk that. You don't risk that. So even if you think, yes, there are going to be
trillions of humans in the future. And the EAC worldview is that, well, all those trillions of humans,
I agree there will be trillions of humans in the future. And we should, we should care about their
well-being just as much as we care about present humans. And so therefore, you know, if we wait
even a moment longer to let EA, or sorry, to let AIs arrive into the world, we are doing a massive
amount of harm to all those humans. The shape of the argument is kind of similar, right? But the
Kelly criterion, if you take it seriously, what it implies,
is that, hey, waiting, you know, another couple of years to get to the same place we're eventually going to get to, but with basically minimizing the chance that your bankroll goes to zero, is worth it if you're just thinking about the Kelly criterion.
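(Editor's note: a toy version, with my own invented numbers rather than anything the guests said, of the wait-versus-race comparison under the log-growth lens just described.)

```python
# Under log utility, ruin (bankroll near 0) is so catastrophic that shaving
# a point off P(doom) can beat reaching the upside sooner. All numbers here
# are made up purely to show the shape of the argument.
import math

def expected_log(upside, p_doom, ruin=1e-9):
    """Expected log outcome; ruin is a tiny positive value since log(0) = -inf."""
    return (1 - p_doom) * math.log(upside) + p_doom * math.log(ruin)

race = expected_log(upside=100.0, p_doom=0.05)  # full upside, 5% chance of ruin
wait = expected_log(upside=90.0, p_doom=0.04)   # 10% less upside, 4% ruin

print(f"race: {race:.2f}  wait: {wait:.2f}")  # wait wins under these numbers
```

Under straight expected value the race branch looks better; under expected log growth, the reduced ruin probability dominates the lost upside.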
And also, with the role of human incentives, I think the gap between the Kelly criterion and, like, maximizing EV ended up being, like, SBF's Bahamian penthouse and all of his political donations and everything else.
One thing I want to speculate with you guys is: why is EA this convenient punching bag?
And it's a punching bag on contradictory statements, right?
Like the AI ethics people for years have been calling EA, like transhumanists,
or saying, hey, you're way too tech-oriented.
And of course, the EAC people call them doomers.
Like, you know, EA has a branding problem.
It's been having a branding problem.
And it was a punching bag after SBF.
It was a punching bag after the OpenAI
thing. It's basically something that everybody else can rally around of why they dislike it, right?
And it continues to have this trouble. And I would posit it's because EA has beta energy.
EA has this nerdy beta energy, and it doesn't stand up for itself. It doesn't, you know,
like, after the SBF thing, it apologizes, it goes into hiding. After the OpenAI thing, also, like,
why didn't they say, no, this was totally fake, you're making this up? Actually, like,
EA has done some amazing things, right?
EA has had incredible accomplishments, and yet they're not shouting them from the rooftops
in the way that Mike Solana is, you know, aggressively, with alpha energy, saying why tech is good.
And this is what I posit when I'm, you know, somewhat joking about culture war being good,
but I think EA and EAC need each other.
Like, I want an ecosystem where EAC and EA are duking it out because I think we need both.
I think we need a movement that is explicitly, you know, unadulterated,
pro-tech alpha energy, willing to fight back against people who are trying to unfairly critique it or get
our CEOs fired or put in dumb regulations. But I also want another movement that keeps that
movement in check and says, hey, AI safety is actually this really important thing. We should
really look into it. We should self-regulate and have that fight internally as opposed to
externally. And we've seen, when EA gets too powerful, it
becomes this massive punching bag. At the same time,
EAC, when it gets too powerful, it becomes really cringe, right?
Like, EAC is at its best when it's duking it out with EA.
When it's trying to be this overarching thing, it can't really work.
EAC is, like, an underdog movement. They're both, like, underdog movements that should be in
productive tension with each other for us to figure this out.
And maybe the status quo is actually like pretty good.
Like we have labs who are accelerating, and yet they're also working on alignment research.
They're also trying to keep each other in check.
And if we could just keep this productive tension,
maybe we can get to a better solution here as opposed to the worst of these extremes.
You talked about the manifesto and how extreme that is, and we talked about some of the more extreme players in the AI safety and how bad that is.
So that's what I would posit about why EA gets beaten up so much, and why it needs EAC's support to help defend
the parts of it that they agree with, which is 98% in my belief.
And then also why the tension is helpful.
I agree with you that EA is a punching bag.
It's weird to me in a way.
So, you know, I was in EA before EA got really on the global stage, right?
Most people didn't know what an EA was when I was first in EA.
And then really kind of in large part due to SBF, EA suddenly became very high status, right?
During SBF's reign, it was almost crazy the amount of positive coverage that EA was
getting, despite the fact that EA is, like, objectively really weird and very unapproachable for
most people, right? So it kind of got lumped into this just, like, general asceticism and, like, sagely, saintly
energy that SBF was trying to portray himself as having. And, like, in a way,
there are two things I'd say. One is that that made me very uncomfortable, this idea that, like, wow, EA is, like, so
good, and everything about you is good, and everything that is EA is good. And now EA
is actually really cringe, right?
It's like, not cool.
You go to a Silicon Valley party
and you tell someone you're in EA
and they're like, oh, okay, well, interesting.
And that's definitely not something
that people continue to self-affiliate with,
which is part of the reason why I also like it more now
because I feel like, okay, if you're an EA now,
then it actually requires some balls, right?
You're basically taking a hit publicly
to say that you're an EA.
Right, exactly.
It's an EA bear market.
I think, like, that is kind of the valley of darkness
that EA needs in order to really define its identity
and not just be this kind of
milquetoast, like, oh, I'm smart and cool and good.
And isn't it wonderful that I'm all those things at the same time?
And so that's one thing I'll say.
I think it has been good for EA in a certain sense to kind of get stress tested by becoming
very politically unpopular.
The second thing I'll say is, okay, why is EA such a punching bag in a way that, for
example, tech was not post-Theranos, or, you know, post, I don't know, crypto.
I mean, crypto obviously was a punching bag post-SBF blowing up, but EA as well.
And I think a big part of the reason for that is that EA is just not designed to be politically capable, right?
It's almost premised on this idea that politics is bad and politics is the mind-killer.
And if you believe this and everything that you preach is about, you know, nuance and careful thinking and, you know, avoiding sloganism and not being lazy and being suspicious of power structures.
And then you're like, great, now let's go fight in the, like, biggest power game of all, which is politics.
you're just going to lose because you're going to suck at it.
So I think that's largely the answer, is that the one person who was good at politics in EA was
SBF.
And everybody else in EA just, like, fucking sucks at politics.
And that's why they were, like, the saintly, quiet, nerdy people who were just, you know, in the background doing stuff and donating to bed nets and all this, you know, doing the AI
safety research at a time when nobody else cared.
And now it's like, oh my God, you guys are actually in the way of progress and you guys are all terrible people.
And they were like, no, no, we're really not.
let's, you know, let's do a two-hour debate to discuss why.
And, like, they just don't have the ability to get out there and fight with slogans,
whereas the EAC people are very politically capable, right?
I mean, Marc Andreessen and Mike Solana and all those guys, they live and breathe politics, right?
Whereas the EA people, you know, they're a bunch of, like, philosophy professors, right?
Like, that was the leader of their movement.
These are verbose, you know, kind of extremely particular people, and they suck at politics.
They might not be interested in politics, but politics is interested in them.
That's just the point.
Maybe we could agree to that part of the conversation, because, like, these nerd battles that we thought we were having, insular to, like, kind of nerd culture in Silicon Valley, are now, like, political battles.
And there seem to be some, like, party lines kind of being drawn, and, like, these platforms being used, right, for maybe, like, a different use case than what they were originally, like, created for.
And there seems to be this dichotomy forming behind, like, kind of a political party,
where decelerationism is, like, a check on Silicon Valley power and on technological progress without
constraints.
Maybe there's some anti-capitalism or anti-billionaire type stuff tied up in that.
And then there's this, like, pro-tech accelerationism: remove the constraints, like, let's let
America be kind of, like, the grandmaster of AI.
Like, let's win this battle.
And it's at a very curious time with respect to, at least,
American politics, right? Because we're sort of in between orders, right? We had the New Deal
Order for a very long time, FDR, until like the 1970s. And now we're in this neoliberal
order and have been for the last 40 years. And we've, I feel like we very much are on the exit
ramp out of that neoliberal order. And it's like the U.S. is trying to find out what's the next
political order that we're going to establish ourselves with. And I think the neoliberal order that
kind of rubber-stamped the internet in the 1990s and, you know, helped Silicon Valley prosper.
They were very tech-forward. They loved technology, right? It's going to bring jobs, grow the economy.
It's not a certain thing that the next order, political order that dominates America will be as tech
forward as the order has been at this point in time. You could start to see some battle lines being
drawn. You know, I think people like Balaji and Marc Andreessen might say there's, like, the Blue
tribe that is tending towards tech decelerationism. Like lots of restraint, lots of regulatory
apparatus, like slow down on this whole tech thing in general. And then there's maybe Team Red.
They haven't quite chosen an alliance yet, but maybe they are going on the tech forward side
of things and tech accelerationism side of things. And it seems like, I don't know, you have to pick a
tribe at this point in time. From our conversation up to this point, Haseeb and Eric, I feel like both
of you guys believe in a P-Doom for AI greater than zero, right? So you're not maximalists on either
side, the EAC side, you know, or the EA side of this debate. You're probably somewhere in the
middle. But it's hard to really pick a side politically, right? When it seems like if you choose
the de-accelerationism side, then you're consigned to like shitty regulation and they're going to
like shut things down and cement some of the leaders. Or if you go on kind of the other side,
well then maybe you don't have the constraints that you actually need for the space and you're not
properly regulating, kind of, like, the danger of this. Anyway, how do you think about the politics
of this whole conversation that has exploded beyond, like, nerd culture, and now, like,
Biden has executive orders about this stuff, all right? And it's just starting. Like, AI is the
thing that the nation, that the world, is talking about at this point. So Haseeb, what's your take
on this? So what I resist the most strongly is the politicization of this question, right? Because this is
one of the most important questions of our time, which also means that it's really fucking
important to get it right. And the one way you will guarantee that you don't get it right
is by making it into politics, right? Like, if you think about masks, like how we dealt with the
pandemic was probably one of the most important questions this decade. And we fucking failed because
we turned it into politics. Instead of turning it into a just raw scientific question of what is the
answer of how this fucking disease is spreading and disrupting society and like, you know,
killing millions of people around the world. And so if you let that happen to AI, I think you've
already lost the game, right? If you concede this idea that, well, there's this tribe over here and
this tribe over there and you pick one side and then, you know, at the end. Like, no, that's fucking
retarded. Okay. That's just absolutely wrong. That's not the way you should be thinking about this.
The way you should be thinking about this is that these are really hard questions. And the answers
require nuance.
They require careful study.
They require lots of time and energy.
And the only way you will guarantee
that you get it wrong
is by saying that they're only two sides.
In reality, it's a multidimensional question.
And the idea that, okay, well,
there's the decels and there's the accels,
and the blue tribe likes this,
the red tribe likes that, and blah, blah, blah.
It's like, no, no, no, that's not true.
Okay?
One thing that we know about politics
from people who actually go and study this stuff
as opposed to reading media,
which will give you a very distorted view
of what people actually think,
is that most people are not diehard leftists or diehard rightists.
Most people are somewhere in between.
They have a mishmash of ideas.
They don't fit neatly into either of these categories.
And they don't like either of the leading candidates right now, right?
That is what we know.
We know that by actually doing polling.
If you go look at people, they don't fit neatly into these two categories and they don't
on AI risk either.
So people say, well, you know, I think P-Doom is this, but I actually think we should do that.
Or I think, hey, you know, GPU constraints are not smart, but I think we should be putting
more effort into, you know, using the government to fund
safety research, right? Because maybe it's undersupplied by the private market. There are a lot of
median positions you can take that are not shut it all down or, like, build, baby, build, right? There's
tons of room in the middle. And if you lose that room, then you're definitely going to fuck up.
I think you'll fuck up to some degree, but I would argue that it could actually be worse.
And so I think it would be not the worst situation if one side was build, baby, build, and the other
side was shut it all down, because my worry is that both sides are going to say shut it all down.
Like we have Tucker Carlson going on Joe Rogan literally saying shut it all down.
He sounds like a dumber version of Eliezer.
Not to put down Tucker necessarily.
Like, Tucker is one of the most popular people, you know, on the right.
Like, if he ran for president and it wasn't Trump, maybe he'd be a candidate.
So I could see shut it all down being a bipartisan thing.
Like, you know, we're at a time of extreme populists.
As Haseeb mentioned, you know, people don't like Biden
and people don't like Trump, you know, and we're also in an era of horseshoe theory
on anti-tech, right?
Like, when you think about someone like Bernie Sanders or you think about someone like Trump,
these are populists from both sides who actually have a lot in common or more than we might think.
So my hope is that the AI safety people, I don't want to call them decelerationists, because
they don't call themselves decelerationists, and we should use the terms that people use
for themselves.
My hope is that the AI safety people and the EAC people can publicly state, hey, we have
95% agreement.
We're all pro-tech.
Tech as an industry is pro-tech.
We disagree on this one issue.
Of course it's an issue of major importance, AI safety. But saying that would signal
to the rest of the world that there's alignment.
Whereas my concern is that both sides, both red and blue, in a populist mode, will be
anti-tech, and we'll be sort of missing the forest for the trees when we need to actually
defend tech. I share that concern, and actually that might be kind of like the default state of
the pendulum swing, because I would say the neoliberal order of the last 40 years has been very much,
like, pro-globalization, pro many things, but definitely pro-tech, okay? And I'm seeing whatever the new
political order that's replacing it as not being so favorable to tech as they previously were.
And a microcosm for this, for me, is crypto, right?
It's basically like it's not a sure thing that team red or team blue will be team crypto.
It's just totally not a sure thing.
Like we have no idea where they're actually going to fall on this.
And it actually does, to Eric's point, make me, as somebody who is an advocate for
crypto technology, decentralization, private keys in the hands of individuals,
all of these things, start to
skew a bit more extreme than I otherwise would be in trying to make the
case for, like, why crypto is good, right? It's like, I have to do that in order to push the agenda
and to contrast myself against these movements, from both, like, sides of politics, that would
just, like, shut it all down. They don't see the use case for proof-of-work mining in Bitcoin, so just
shut down all the mining facilities. They don't understand why
people need to run a validator in their own home, or why DeFi interfaces, you know, don't require
AML/KYC, and all of these things, right? So Haseeb, what do you say to that? Like, Eric's making the
case that maybe we should all band together, not necessarily on the far-extreme EAC type of case,
but we should definitely make a pro-tech case, like, or else we'll fall prey to the political
parties that just don't want any of this technology, just don't care about it.
So the EA tribe is often affiliated with being doomers about the technology.
But I feel like the EAC side is in a way a doomer on the politics, right, where they basically believe that, you know, if we don't have this extremely vigorous defense of technology, it's all just going to, you know, it's all going to go kablooie.
And we're going to, you know, turn into, like, this socialist state where we're all going to be enslaved by Elizabeth Warren and nobody's going to have any freedom.
And, like, the reality, you just look right now.
You say, oh, the EACs are in control, or sorry, the EAs are in control, the safetyists are running the asylum.
If you look right now around the world, where are the AI restrictions actually happening, right?
The answer is in the EU.
That's where the most constraining stuff is happening.
And then in China, right?
People are saying on the EAC side, they're like, oh, no, if we don't, you know, EA is so anti-tech.
You know, if we let the Democrats, we have a Democratic president right now.
The Democrats, if they control things, they're going to shut down all the technology and they're going to make it so I can't use the Internet.
I can't use social media.
I can't, whatever. Go look at China where they have, you know, people are worried, oh, China's
going to race ahead of us, they're going to unlock the technology. They're the real
accelerationists. Go read the AI regulations in China, right? In China, you cannot release
an AI application without it being explicitly approved by the CCP. There are only 40 applications
that have ever been approved by the CCP, right? You are liable in China if your large language
model says something that's not true. You are liable in China. That's the law that they passed in
China, okay? That's fucking insane. That's absolute. That is, okay, that is what you're up against
in America. The reality is that, yes, there's a cacophony of voices. Many people saying many different
things. The EACs are saying one thing. The EAs are saying one thing. The AI, you know, anti-biased people are
saying some other things. And the reality, what did we actually have in the U.S.? The answer is
nothing. There are no laws. There's no regulation. There's nothing right now, right? There's an executive
order that says, you have to let us know if you run a training run at least this size. That's it.
That's the only regulation that we actually have.
And regulation is a strong word for that, right?
The reality is that there's no regulatory regime at the moment in the U.S.
And the reason why that is is because that cacophony of voices continues pulling things in different directions, right?
I mean, we say, man, it really sucks that Gensler's doing this and, you know, the FinCEN is doing this and blah, blah, blah, blah.
Okay, what are the laws of crypto in the U.S.?
The answer is that there are none.
There are no laws in the U.S.
There's no regulatory regime.
There's nothing.
There's just a bunch of people fighting.
And, you know, if you look at the founding fathers,
the way that they described government
is that they wanted that to be the case.
They wanted there to be continual fighting
between the different branches of government
to prevent any centralized control
in, for example, the president.
They didn't want the president to be able to say,
well, guess what, I'm president, so here are the laws now.
Well, the laws can get shut down by judges,
they can get replaced by legislators,
and the reality is that that cacophony of voices
keeps things in check.
So anybody who's claiming, oh, my God,
there are EAs in the government now,
therefore it's all going to go to hell
and, you know, we're now all going to be, you know,
talking on our Google-controlled smartphones
and nobody's going to have open-source AI anymore
because it's all going to get banned.
They're fucking wrong.
It's never happened.
It's never been true because of our system of government.
So that's why, like, there's a political doomerism
that I also object to,
which I feel like is part of what motivates
the energy in demonizing the EA types.
So I think what they have to say is a very good point
that deserves to be heard,
but do I want EAs running the government?
No.
I want the government to be the way it is, which is, you know, a contest of different forces, right?
There are people who are economically minded who have a good point.
That's not being pro-cultural war.
It's being pro-democracy.
Yes.
I'm just easy.
But you're pro-this productive tension, and it plays out.
Yes, yes.
I'm very pro-productive tension.
I, 100%.
I actually really like this productive tension framing.
Kind of like, as we get to the close of this conversation here, I kind of want to zoom out, because I actually see this same accelerationist
versus decelerationist debate, and I know decelerationist isn't actually a tribe here, but it's, like, kind of a talking point,
playing out more broadly.
It's also not just Silicon Valley.
It's not just AI.
It's not just crypto.
These things are generally all on the accelerationist front.
Like we're also seeing experiments going on in other technology verticals as well.
Like Brian Armstrong, or excuse me, not Brian Armstrong,
Bryan Johnson, is trying to experiment himself into, like, becoming a new class of human.
This is the guy who's taking any risk, or not any risk, but
any sort of like experiment to try and like add, you know, five more minutes to his life every
single day, right? Like people like Aubrey de Grey are literally trying to create like immortality.
Even in the crypto space, there's, like, VitaDAO out there, who's, like, trying to combine
crypto and immortality to make it happen faster. And, like, we haven't even, like, opened up
the conversation about, like, gene editing, which is, like, not far away. Like, there are other
technologies out there, so if you think AI and crypto are going to make
the future weird, well, there are, like, three more technologies out there that are also going to
make the future weird. And the thing is, like, this has always been the case with
technology. Technology just accelerates. It's always accelerated. As soon as we develop tools,
we can use those tools to develop other tools, and all of a sudden, like, we are accelerating
at an accelerating rate. And this is just, the only difference between, like, now and any other
time in history is that it seems the gap between where we are now and where we are at the end of our
lifetimes might be, like, multiple orders of magnitude. And that has never been true before
in history. Like, we've only ever had, like, 20,
30% gains at most.
Usually, over the span of human history, we have, like, 2% gains.
And for thousands of years, we actually literally had 0% gains.
And now we're on this, like, parabola of innovation.
And so if we're talking about this tension between governments, like stakeholders in the
government, it does make sense that when you have crypto and AI and like biotech and all
of these other different like sectors of technology, which are all like making monumental
gains, that all of government is, like, okay, everyone, hit the brakes.
Because who else is saying that? Who else is saying, like, yo, calm down?
Like, that's maybe, kind of, like, my commentary on why I think the government
will trend towards brakes, not gas. But, like, maybe Eric, maybe you can, like, check my reasoning
on this. No, I agree. That's part of the broader productive tension. And I,
I want to bring in, you guys had this amazing episode with Vitalik, where he identified and
expanded upon his d/acc philosophy.
And I think that's worth noting, too.
And I think that piece had a great synthesis of the problem and sort of the common alignment
between EA and EAC and trying to develop this sort of, you know, thesis and synthesis
approach.
But I think, unfortunately, it's a bit too neat.
I think it's tough to just focus on technologies that are defensive.
You know, things that tend to be defensive can be offensive, and vice versa.
Fire can be used for good, it can be used for evil.
And if we're going to continue to develop these models,
I don't see ways in which they can be used only defensively.
So that's my commentary on that.
And then to kind of to your point,
I think we are going to continue to need the AI safety people to develop
and further develop our alignment capabilities.
And we're going to need people who are pushing the brakes.
It's my goal in this conversation to rehab,
or improve EA's reputation, AI safety's reputation, but to also keep it in its box.
As Haseeb said, we don't want them running the government.
They're not close to it yet, but the regulatory regime is going to be up for grabs over the next few years.
And that's why these fights are really important.
Even if they only influence a small amount of people right now, they have potential to influence
a large amount of people.
And so I think we do need the right combination of brakes versus gas.
I would rather see it come from within the industry, from people who know what they're talking about, than outside of it, you know, prematurely.
And so, yes, we are going to see governments come in with some brakes.
Ideally, they can be informed by some of our people and some of the productive tension that we're having first.
The one thing I would add to that is that, you know, we've talked a lot about, oh, my God, what if the EAs are running things and what if they have total control, won't they shut it all down?
the reality is that a much more politically powerful cohort is not the AI safety people, but the AI bias people, right?
These people are much more influential.
Are these the same people that Eric was calling AI ethics earlier in the conversation?
Yes, ethics, exactly.
So these are people like, okay, you know, what if the models are biased?
What if the models are more liberal?
What if they're more Republican?
What if they, you know, say things that they shouldn't be saying or, you know, they give people instructions on how to cause harm or whatever?
Now, the interesting thing about AI ethics versus AI safety is that the core problems that you need to solve are actually kind of the same, which is how do you control these models?
And the answer is we don't really know.
We don't have good techniques to control these models.
Now, how do you control a model?
Well, that's called AI alignment, which is the core problem behind AI safety, which is, okay, right now, the thing that the model is doing that you don't want is making biased representations of, you know, people in certain, you know, stereotypical careers or something.
But eventually it might be the thing you don't want is that, you know, gathering a bunch of resources or hacking people in order to, you know, make a bunch of money in order to fulfill some task that somebody who's running an AI agent wants it to do.
These problems are one and the same, in a sense, right?
I mean, one of them is easier to solve than the other.
But the core of the problem is that we don't know how to control these models.
They are very difficult to control.
They're very unintuitive.
They're kind of like these space aliens that we've conjured up, right?
People often say that large language models were discovered, not invented, in the sense that, you know, we just sort of threw more and more compute at it.
And then it's kind of like, you know, when something moves faster and faster, it turns from a liquid into a gas.
In the same way, when you throw more and more compute at something, at a certain point, it becomes a large language model.
And you're just like, whoa, what the fuck?
What is it?
This thing can now, like, talk to me and do my math homework.
Like, that's really weird.
And that phase transition is still very poorly understood.
And I think there's going to be a lot of political fights about it.
But if both the EACs and the EAs don't get their shit together, the AI ethics people are going to win, because, one, they're much more persuasive.
They're much easier to understand for most people, right?
Like, the whole Gemini drama had nothing to do with EAC or EA.
It was all bias, right?
It was all, like, oh, you know, this thing fits neatly into the actual culture war, which is the left-right culture war, which is, oh, these things are super woke and they
won't depict any white people.
And that is much more powerfully animating, you know, among Congress.
You know, when you're talking about what are 70-year-olds who are actually the people
making the laws and voting on this stuff, what resonates with them?
I'll tell you, it's the AI ethics stuff.
None of this stuff even rates yet.
So everything that, you know, Marc Andreessen is doing and that Eliezer is doing is important,
because by default, both tribes are losing.
And maybe this is a great place to end this idea.
There's this saying, you know, me against my brother,
but me and my brother against my cousin.
And right now,
EAC is against EA,
but EA and EAC need to team up
against AI ethics or AI bias
because we can all agree
that those people are wrong.
Well, hold on.
Last thing I want to say,
I want to make some room for the AI bias people
because, like, obviously it's true
that Gemini was like a horrible model.
And obviously it's true that, you know,
the AI models can do pedestrian bad things too.
And you want to control that
because way before AIs are going to become superintelligent,
or become so powerful they take over the world,
they're going to do pedestrian harms.
And you kind of want both, right?
You want to also solve the pedestrian harms,
and that's a real problem.
You also want to solve the existential harms.
That's a potential real problem, too.
It's a much bigger problem.
And you also want to solve AI over-regulation.
That's also a big problem, right?
All three of these are problems that a good regulatory regime
should find a way to solve.
And so, I mean, that is my very lame and narrow attempt
at synthesis of these three positions.
But I don't think you have to say, well, you know, these two are illegitimate and this one is legitimate.
The answer is obviously all three of them have legitimate concerns.
And you want to find a way to address all three.
That's too much nuance for me.
I preferred my...
Fair enough.
Let's close this conversation out.
I want to ask you just one final ending question, which is just maybe summarized the conversation for each of you.
So when faced with the question, which you surely will be, Bankless listener, and then Haseeb and Erik: are you team accel or team decel?
Are you team EAC or are you team EA?
Pick a side.
What's the most sensible posture to that kind of pick-a-side framing of the question, Haseeb?
So, I mean, it won't surprise anyone to say that I'm on team EA.
But I'll frame that as saying I'm on team EA because I think at the margin the world needs more of that.
I think it's got enough people who are saying, you know, pro tech, no regulation, I want more AI products.
And I think there's also enough people who are saying, hey,
there's too much bias in these things and they're bad for society.
I think there's not enough people saying, hey, these things are really risky and we should invest more into safety.
I also won't surprise people.
I'm team accelerationist.
I'm team pro tech, but I think that we should absorb some parts of EA.
I think EA is beaten down.
They have wounds, and we should pay for their hospital bills and bring them into our tribe and team up against the AI bias or AI ethics people, some of whom have some legitimate points, I will concede that, but also some points that I very much disagree with.
And I think my call-out to accelerationists is to take AI safety seriously.
I think they worry that if you concede that p(doom) is even a concept, or that there are AI safety concerns, you sort of throw out the baby with the bathwater and go as extreme as some of the most extreme EA people.
But I think we should find a way to have more common ground with AI safety people, acknowledge that there's 95% agreement with most of them, and, you know, work on alignment, and then fight the right enemies, which, as Haseeb mentioned, is the AI ethics people.
I'd advise people, regardless of what you think of them, they have a lot more power on the regulatory stage, and that's where the battle is really taking shape.
So I'm an accelerationist with some humanism, and I think we should extend an olive branch to the most rational and reasonable AI safety people.
Nothing unites us like a common enemy.
And so maybe that common enemy is the AI bias people.
The people who are trying to censor all these models.
Haseeb, Erik, thank you for this fantastic conversation.
I think this will leave bankless listeners with a lot to think about on this subject.
It's been a pleasure.
Thank you.
Thanks for having me.
Got to leave you guys with this.
Of course, I usually say crypto is risky, but we talked mostly about AI.
I think AI is risky as well.
This is definitely the frontier.
It's not for everyone, but we're glad you're with us on the bankless journey.
Thanks a lot.
Thank you.
