Bankless - 209 - Why e/acc Is Right with Beff Jezos (Guillaume Verdon)

Episode Date: February 12, 2024

✨ DEBRIEF | Ryan and David unpacking the episode: https://bankless.com/debrief-e-acc-beff-jezos   One guest, two identities. Beff Jezos (Guillaume Verdon) is a founding father of the e/acc movement, a physicist, a quantum researcher, and the founder of an AI startup called Extropic. Beff thinks AI doomers are not only wrong, but they’re taking humanity in the worst possible direction. Growth. Acceleration. Progress. These are the core pillars of the e/acc movement. Instead of slowing down on AI progress, Beff explains why we should be speeding up.  ------ 🎧Listen On Your Favorite Podcast Player  https://bankless.cc/Podcast    ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2    🔗CELO | CEL2 COMING SOON https://bankless.cc/Celo    🗣️TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/toku    🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle    💸 CRYPTO TAX CALCULATOR | USE CODE BANK30 https://bankless.cc/CTC  🦄UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap   ------ TIMESTAMPS  0:00 Intro 7:15 Why Pseudonym 15:01 The e/acc Pill 31:54 Foom 36:25 Beff’s Beliefs 39:39 Defining e/acc 40:52 The Dangers of AI? 52:05 Why We’re Here & The e/acc Religion? 58:19 Thermodynamics & Life  1:06:16 Maximizing Human Happiness?  1:16:25 Forgetting Society’s Bottom Half  1:23:00 Cancer Also Grows? 1:29:01 AI Regulation 1:33:09 Social Media’s Mistake   1:36:30 AI Bill of Rights  1:38:26 Domesticating AI 1:41:00 Biggest Threats Against e/acc  1:43:43 AI Humans vs. Humans  1:46:09 e/acc vs. Decels…Violence?  1:49:09 Beff’s Thoughts on Crypto 1:54:36 Beff’s Company - Extropic 1:58:18 Closing & Disclaimers  ------ RESOURCES Beff Jezos https://twitter.com/BasedBeffJezos  Guillaume Verdon https://twitter.com/GillVerd   Extropic https://twitter.com/Extropic_AI   AI Safety Podcasts - The Decels Eliezer Yudkowsky https://youtu.be/gA1sNLL6yg4   Connor Leahy https://youtu.be/pMoVsM1EWR0  Paul Christiano https://youtu.be/GyFkWb903aU   Nate Soares  https://youtu.be/Ymjb3SkElco   ------ Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. See our investment disclosures here: https://www.bankless.com/disclosures 

Transcript
Starting point is 00:00:00 What we're advocating for with e/acc is sort of freedom of access to compute, freedom of access to AI. We don't want these centralized entities, just like a central bank, like inflating away your money, to control access to advanced AI. We want it in the hands of many because otherwise the centralized parties are going to abuse their power. Welcome to Bankless, where we explore the frontier of internet money and internet finance. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless. Or maybe more e/acc in today's episode, right, David? More accelerationist, yeah.
Starting point is 00:00:39 Beff Jezos is the founding father of e/acc. That's effective accelerationism. This is a philosophy that thinks AI doomers are not only wrong, they're taking humanity in the worst possible direction. The emphasis here is growth, acceleration, progress. These are the core pillars. And instead of slowing down on AI, Beff explains today why we should be speeding up.
Starting point is 00:01:02 I think this is a direct answer to an episode that we recorded over a year ago with Eliezer Yudkowsky on why we are all doomed and why AI is going to kill us all. So it's a debate, I guess, if you listen to those two episodes sequentially. We talk about a few things on today's episode, including why AI isn't going to kill us, after all, according to Beff. And what is e/acc? He explains the fundamentals and why e/acc is more than just AI. For Beff Jezos, it's an entire philosophy for life. Also, we get into what in the world homo-techno-capital-memetic accelerationism is. Did I say that right, David? I don't know. Finally, we end with Beff's take on crypto. One thing I enjoyed about this conversation is that I think you could end up
Starting point is 00:01:46 at this conversation from a variety of different angles. If you are coming from crypto, you will start to catch a vibe for this. It will sound familiar to you a little bit if you've been paying attention to the AI world for sure. But also, you can go really far back and just talk about philosophy. Like, if you are a fan of Friedrich Nietzsche, like, you will find yourself in a familiar place. I've been a big fan of the concept of optimism as a way to achieve strong mental health. Like, optimists tend to feel better. They tend to be less depressed. They tend to feel like they have more control and agency around their surroundings. So if you come from a mental health and, like, psychology background, you will find yourself, I think, resonating with some of the things that
Starting point is 00:02:29 Beff Jezos has to say. That's one thing I really just enjoyed about this conversation is that Beff starts from a conversation around base principles, getting really close to the metal, and then expands it, extrapolates it outwards onto like a political philosophy that people could align with. I think this is one of the most important debates of our era, really, and it'll probably crescendo in the decades to come. So it's worth listening to this episode for that reason alone. David, there's so much more to discuss with you in the debrief, I think. I've got to take some time to really unpack this. And that, of course, is the episode that we record right after the episode. If you're a bankless citizen, you can enjoy that now.
Starting point is 00:03:03 All right, let's get right to the episode with Beff. But before we do, we want to thank the sponsors that made this possible, including our number one recommended crypto exchange for 2024. That is Kraken. Crypto accelerator? Definitely a crypto accelerator. Well done, Kraken. Go create an account. Kraken knows crypto. Kraken's been in the crypto game for over a decade. And as one of the largest and most trusted exchanges in the industry, Kraken is on the journey with all of us to see what crypto can be. Human history is a story of progress. It's part of us, hardwired. We're designed to seek change everywhere, to improve, to strive. And if anything can be improved, why not finance?
Starting point is 00:03:40 Crypto is a financial system designed with the modern world in mind, instant, permissionless and 24/7. It's not perfect, and nothing ever will be perfect. But crypto is a world-changing technology at a time when the world needs it the most. That's the Kraken mission, to accelerate the global adoption of cryptocurrency, so that you and the rest of the world can achieve financial freedom and inclusion. Head on over to kraken.com slash bankless to see what crypto can be. Not investment advice, crypto trading involves risk of loss.
Starting point is 00:04:05 Cryptocurrency services are provided to U.S. and U.S. territory customers by Payward Ventures Inc., PVI, doing business as Kraken. You know Uniswap. It's the world's largest decentralized exchange with over $1.4 trillion in trading volume. You know this because we talk about it endlessly on Bankless. It's Uniswap. But Uniswap is becoming so much more. Uniswap Labs just released the Uniswap mobile wallet for iOS. The newest easiest way to trade tokens on the go.
Starting point is 00:04:28 With a Uniswap wallet, you can easily create or import a new wallet, buy crypto on any available exchange with your debit card, with extremely low fiat on-ramp fees, and you can seamlessly swap on mainnet, Polygon, Arbitrum, and Optimism. On the Uniswap mobile wallet, you can store and display your beautiful NFTs, and you can also explore Web3 with the in-app search features, market leaderboards, and price charts, or use WalletConnect to connect to any Web3 application.
Starting point is 00:04:51 So you can now go directly to DeFi with the Uniswap mobile wallet, safe, simple custody from the most trusted team in DeFi. Download the Uniswap wallet today on iOS. There is a link in the show notes. Are you launching a token? Is it already live? How are you managing the legal and tax obligations for providing token grants to your team?
Starting point is 00:05:07 It's no secret that token management gets complicated. Between learning all the legal language and tax obligations in every country that your team is in, token grant management can feel like an obstacle course. But it doesn't have to. That's where Toku steps in. Toku provides practical tools to handle token grants, allowing for effective oversight of token distributions
Starting point is 00:05:25 and payroll tax compliance for employees, contractors, advisors, and investors. They also handle tax withholdings through their real-time tax calculations that can be done by Toku or integrated into any payroll or EOR providers in any jurisdiction. Toku is a trusted provider of Protocol Labs,
Starting point is 00:05:41 dYdX Foundation, Mina Protocol and many more. Get started for free and make token compensation simple at Toku.com slash bankless. Bankless Nation, I'm very excited to introduce our next guest. This is one guest, but two identities on the podcast today. Beff Jezos is a founding father of the effective accelerationism movement.
Starting point is 00:06:01 This is also known as e/acc. And you've heard David and I talk about e/acc before, particularly in contrast to the AI safety or decel movement, as some would call it. e/acc is a basic philosophy that advocates for the full acceleration into our AI-powered future. This is all gas and no brakes. Beff, welcome to Bankless. Well, thanks for having me. Pleasure to meet you, Ryan and David, and excited for the conversation today.
Starting point is 00:06:28 Now, I said there were actually two identities, one guest, but two identities. So, Beff is your online alter ego, but in meatspace, your identity is Guillaume Verdon. This is your original identity. Your human name. Your nation-state assigned identity. My name. Yeah, your human name. Let's call it.
Starting point is 00:06:44 Sure. And you are a physicist, quantum researcher, and the founder of an AI startup called Extropic. So I think we'll be talking more to your Beff persona today, if that's okay. But also welcome to Guillaume. Same person, but appreciate it, appreciate it. Yeah. Good to be here as well. You know, include all of the identities in the intros here.
Starting point is 00:07:03 That's right. Why don't we actually start? Because for some people that may have been a little bit jarring, right? So let's start with the decision to use a pseudonym to kickstart the effective accelerationism movement. Why did you use the pseudonym of Beff Jezos? Like, why not just use your regular old nation state fiat name? I think, you know, at the time I was, you know, in some secretive team within a secretive unit of a big tech company.
Starting point is 00:07:30 You know, I was at Google X and there, you know, secrecy was the baseline. I was in a particularly secretive team. Can't really talk about what I was doing. People can try to piece it together from my patents and have fun with that. Some Easter eggs in there. But, you know, secrecy was my sort of baseline. I couldn't necessarily have too many hot takes that, you know, wouldn't get back to me, right? And where I felt, I don't know if that was, like, fully true.
Starting point is 00:07:54 I mean, I've had some feedback that way, but there was always that sort of, you know, thought that if I said something that would be over the line, it's over, oh, it would jeopardize my career. You know, again, this was like pre-Elon buying Twitter. It feels like a completely different vibe now, which is great. Back then, though, it felt like if you said something, some sort of latent truth or something that you felt, you were even just trying to experiment with your ideas and experiment with different points of view, you could lose your job and get canceled and so on. And so, you felt muzzled. Yeah, I did. And it doesn't have to do with any one particular employer, to be fair. I think it was just the general vibe. I think many people felt this way. And I think that's why anon accounts were so potent, you're kind of removing the sort of threat vector of reputational counterattacks if people try to shut you down, which, of course, now that I am doxed, I no longer have that shield, but, you know, I still
Starting point is 00:08:51 stand by my values and that's why I'm here on these podcasts. But at the time, it's funny how giving myself an anonymous account kind of opened up a whole new space of ideas that I didn't even let myself experiment with because I knew I wouldn't even be able to communicate them, right? So freedom of speech induced freedom of thought for me. And sort of Beff was a sort of experiment in applying my thinking that I've applied to sort of quantum computing, physics-based computing in AI, applying that sort of thinking and background to sort of civilization and societal systems and how we organize ourselves in culture. And really, it was supposed to start as a low-stakes experiment, you know, have a couple followers and just put some ideas out there, see which ones stick.
Starting point is 00:09:39 But it kind of grew and kept compounding and now we're here, I guess. And so, but that was really nice for me because at the time, I think, I had grown my reputation in my field of study, which was quantum computing. And it wasn't clear to me if, you know, my fans at the time, you know, just wanted a job or something like that. It was kind of the opposite of why most people do an anon account, right? Maybe they just want, well, I think, like, eventually people just want their ideas to be evaluated on their own, right, regardless of reputation, regardless of your background. For me, it felt like I wanted it. It's like, I do the analogy of New Game Plus, when you have beat the video game. You want to restart with, like, your
Starting point is 00:10:20 knowledge base and maybe some of your gear, but like you're restarting the game. And so for me, it was like, can I rebuild a sort of a following or reputation, you know, from scratch, right, in an uncorrelated fashion? So I would never talk about quantum computing. I'd only talk about AI. And so started it while on Google X, and then I ended up, you know, leaving that and starting my own company. Now, of course, you know, I'm my own boss more or less. So, you know, I have a lot more freedom of speech. But I think overall, people feel freer to say sort of things that were maybe taboo or off limits a couple years ago. And I think that's healthy for everyone, embracing that sort of variance in the space of thoughts. And so, yeah, that's like that.
Starting point is 00:11:03 But it also feels to me, I mean, try this on too, just from my observation. Beff has a different communication style than Guillaume, I think. So, you know, Beff is a bit more memetic. Beff is a bit more like in your face. Beff is a bit more rhetorical, I think. And that communication style seems to have meme market fit online at least. And I think accounts for the growth of the e/acc movement so far. Like, I don't know if you were trying that personality on or that persona on. And it's just like a better communication style for the spread of ideas.
Starting point is 00:11:39 But that's one observation I have from seeing Beff communicate. Yeah, I mean, like we say, you know, we're on a mission to spread viral optimism, a sort of fuck you optimism, like unapologetic optimism about the future, right, to fight the sort of pervasive pessimism and doomerism. And we will do anything we need to do in order to achieve memetic fitness, right? Because if it's a sort of viral memetic vaccine that renders people immune to sort of feeling terrible about the future and being demoralized, we have a responsibility to make it spread, right? And so if in an era of algorithmic information propagation, you can A/B test what sort of packaging of your message has highest memetic fitness. And, you know, usually that's memes, that's, you know, a tone that's sort of brash in your face, that's adversarial. That's what gets propagated. That's what gets seen. And ergo, you know, that shapes people's priors because they have an information diet that is fed to them by these algorithms. And so yeah, I mean, it is a sort of certain communication style. I had sort of experimented with sort of
Starting point is 00:12:48 similar firebrand style in quantum computing. Back then, you know, I was really trying to push quantum computing towards something like deep learning, differentiable programming, right? So that's what I did at Google with my team there, is bringing sort of deep learning thinking, you know, let gradient descent program for you, to quantum computing. And that was a sort of similar fight back then against, let's say, the complexity theorists that needed hard proofs about everything, every statement, rather than just, you know, letting go and letting the computer figure out how to program for you. And it feels very similar to this sort of battle against the rationalists and the EAs that want to have, you know, axioms and want you to prove that, you know, humanity will be
Starting point is 00:13:33 eternal and that you guarantee your safety forever. And if you can't prove that, we got to shut everything down, right? Like, very similar sort of adversary. So I've been through this drill before. So I bring that sort of baggage to this new fight. So Beff, we're already talking about adversaries, the rationalists and the EAs. And so I think we need to tee up this conversation now that we've established who we're talking to. Beff Jezos, one of the founding fathers of effective accelerationism. So let me just throw out some context for bankless listeners and for you, Beff, as we get into this episode of what we're hoping to achieve in this. And the reason for this episode. So about a year ago, like almost a year ago to the day that we are recording, we did an episode with a gentleman by the
Starting point is 00:14:14 name of Eliezer Yudkowsky. I don't know if you know him at all or have run across him in your travels. So we thought, David and I thought in our innocent days, that we were going to have a nice conversation with Eliezer Yudkowsky about the interplay between AI and crypto. And it turned out that was not the conversation destiny had in store for us. It turned out we were going... We got stuck. Yeah, we got stuck somewhere, and we were going to end that episode in a full-blown existential crisis about this thing I didn't know as much about as I feel like I know today, but yet I'm still kind of like wondering how much I really know. That is AI safety.
Starting point is 00:14:51 And Eliezer offered a compelling argument. Basically, we titled the episode, why AI is going to kill us all, because that was the distinct impression he left us with. In fact, the episode, the entire episode was a dire warning. Like, halt all AI progress. You know, like right now, you know, go be with your loved ones, kiss your children goodbye, because we're not going to make it. I think Ryan went in and followed that advice.
Starting point is 00:15:14 Yeah. Well, it's never bad advice to go, you know, tell the people in your life you love them. And so we're having you on partially, Beff, because you stand on pretty much the opposite side of that argument, I think. And not only do you not think that AI is going to kill us, you think it's going to be great.
Starting point is 00:15:33 Like maybe usher us into a utopia. Okay, so this is not like, eh, he was right. You, I think, are saying Eliezer is dead wrong. In fact, it's the complete opposite. And we should accelerate progress on AI completely across the board.
Starting point is 00:15:48 And so Bankless is a technology podcast, but very specifically in the crypto community. And I want to give you the chance to explain your side of the argument. I want to give you a chance to pitch effective accelerationism, e/acc, to the crypto audience, and just, like, pill us. Straight up pill us.
Starting point is 00:16:06 Like, give us the gospel of e/acc, Beff. Getting right into it. Yeah, I mean, you know, yeah, that Yudkowsky podcast really was, I think, the start for you all to get into this sort of area. Did it tumble across your feed? It definitely did. It definitely did. Okay.
Starting point is 00:16:23 I think, like, this sort of doom mongering or fear mongering has been really nefarious. Like, it's really affected people psychologically, right? And yet, like, you know, they thought GPT-2 was going to kill us all, GPT-3. We're at GPT-4, 4.5 soon. Yet we're still here, right? The economy hasn't collapsed. Everything, everything's fine. The crypto bags are still, you know, they're doing all right. They're doing all right these days, baseline. But these are the last days, maybe, Beff. That's what Eliezer says. Like, enjoy the last days. No, I think that that's the thing about having a system that is malleable and adaptive. Even if you have new technologies that are progressively rolled out, it just morphs and absorbs that capability
Starting point is 00:17:06 and creates utility out of those new technological capabilities. And really, e/acc is about trying to understand where do we come from and where are we going, right? What is the process? What is the process that gave rise to life? What is the process that gave rise to civilization? What is this sort of weird... you know, clearly something's going on, right? Like when we were maybe children, you know, we had like massive computers that were exponentially worse than the ones we're using to converse today. Something's going on. There's a sort of evolutionary process in the space of technologies and the space of ideas. You know, what is this machine? What is this process that's always morphing and adapting civilization and the technologies around us? And really,
Starting point is 00:17:51 e/acc, first and foremost, is trying to understand this process. How does it work? How did it get us here? Why is it good? And then how do we accelerate it? Right. And, you know, that's what we call, you know, acceleration, or techno-capital acceleration, or more generally homo-techno-capital-memetic acceleration. So that's like all the things, right? Essentially, we think that this process of searching over the space of parameters or bits
Starting point is 00:18:23 of information and how we organize ourselves culturally, in terms of genetics of humans, in terms of the space of technologies, in terms of how to organize companies, nations, et cetera. It's all one big search process and competition induces a sort of evolutionary selective pressure on the space of all these things. And that sort of competition breeds fitness that then benefits us all, right? Like in a sense, like we've tried capitalism and freedom versus sort of authoritarianism or communism where everything's top-down prescribed, prices are essentially controlled. So far, free markets have been far more successful at creating wonderful things, such as
Starting point is 00:19:11 the technologies we use to chat with today and the system we live in today. And overall, such systems where you have many more freedoms, you find much better optima in terms of, again, technologies, ways to live your life, and just about everything. But I think it's too broad of a question, like, just pill me on e/acc. I think, like, let's go through how you feel about doomerism. And it's just, let's just pick it apart, really. Let's just pick it apart. Yeah. I kind of want to do one more thing first just to kind of frame this conversation. One of the maybe, like, disarming things about the conversation that we had with Eliezer Yudkowsky that maybe like me and Ryan just
Starting point is 00:19:51 weren't ready for is that Eliezer's arguments were extremely technical. They went down to the basement on, like, how AI works. He was using phrases, like, gradient descent and, you know, reward mechanisms. And it all got, like, outside of our frame of expertise very quickly because he was, like, down at the basement level. And he was making, like, this kind of logical argument that, like, me and Ryan were just, like, weren't ready to, like, fully unpack. Because he got really, really technical. And this conversation, this e/acc versus decel, people can approach this conversation at, like, varying heights. Like some people, I think Marc Andreessen wrote the Techno-Optimist Manifesto talking about this at a very high level.
Starting point is 00:20:31 And then we went and me and Ryan did a number of episodes about AI safety. And each one of those conversations was like somewhere up and down in the very technical to very philosophical, like, conversation. I'm wondering where you see your innovation with this conversation. You run an AI startup. So you're pretty technical. But also, just in your response just now, there were some pretty philosophical, directional, like, approaches. Where would you say you would like to innovate in this conversation?
Starting point is 00:21:01 difficult to adjust my level of technicality. I tend to converse pretty technically. You know, I was a theoretical physicist, and then now I run an AI startup. Usually I have very sort of mathematics first thinking, trying to explain my thinking, and I have to convert that into English to some extent. And so, you know, if there's any words I say or anything like that, that you want to dig in, like let's dig in, but I would say Yud is not very technical actually, and that if you start digging into his technical knowledge, it's actually lacking quite a bit.
Starting point is 00:21:34 Interesting. Right. I mean, this notion of recursive self-improvement, I guess, that, you know, the runaway foom maybe we can address, right? Like recursive self-improvement is something we've tried in machine learning for a very long time. It's called meta-learning. You have a system that learns how to accelerate the learning, and you have also
Starting point is 00:21:50 architecture search algorithms. The reality is that the larger the space over which you search, the exponentially harder it becomes to find the true optimum, right? And so what that means is that if we task an AI to improve AI, it's exponentially more complicated every level of, like, an AI that improves an AI, that improves an AI, you go, right? And so it's going to take exponentially more compute and energy, right, to achieve that optimum, right?
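A minimal sketch of the combinatorial point being made here, not from the episode itself: exhaustive search over n binary parameters costs 2^n evaluations, and a meta-level search, an optimizer searching over its own design, just adds bits to the exponent. The bit counts and the stand-in fitness function below are illustrative assumptions.

```python
# Toy illustration (assumed setup, not Beff's actual math): brute-force
# search cost doubles with every added parameter bit, so stacking a
# meta-level search on top of a base search inflates the exponent.
from itertools import product

def brute_force_optimum(n_bits, fitness):
    """Evaluate all 2**n_bits configurations and return the best one."""
    return max(product([0, 1], repeat=n_bits), key=fitness)

fitness = sum  # stand-in objective: count of 1-bits

for base_bits, meta_bits in [(8, 0), (8, 4), (8, 8)]:
    total = base_bits + meta_bits  # meta-learning enlarges the search space
    best = brute_force_optimum(total, fitness)
    print(f"{base_bits} base + {meta_bits} meta bits: "
          f"{2 ** total} evaluations, best fitness {fitness(best)}")
```

At 8 base bits the search is 256 evaluations; adding 8 meta bits makes it 65,536. That is the "soft cap" intuition: each added layer of self-improvement multiplies the compute and energy cost.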
Starting point is 00:22:20 And so in a sense, there's already a soft cap on compute in terms of the energetic and capital costs of compute. And that keeps us safe, right? In a sense, like, everything in our civilization, even every life form, is trying to foom. Everything is trying to grow. And that is the thesis of e/acc. But because everything is trying to grow and competing with one another for resources, we get better optima. But it also keeps everything in check. There's not one sort of singleton, you know,
Starting point is 00:22:50 a biological system or one single company or one single nation that takes over the whole system, right, because they're competing with one another. And in general, to find sort of optima of high fitness, for example, you know, if you're building a company, a company is almost like, at least in the startup space, you know, at the seed round, you're basically a search algorithm. You're searching over the space of technologies and you're burning capital and intellectual compute and maybe actual GPU compute to find, sort of, to pinpoint exactly what product you got to build. That's a couple bits of information. But that's costly. That's a costly thing to find. So every optimum that is high fitness is very costly to find. And the more optimal it is,
Starting point is 00:23:38 often it's exponentially costlier to find that optimum. And so that keeps us safe in the sense that I can pretty confidently say we're not just a small edit distance away, like this one weird trick where, like, now we have super AGI that fooms and creates gray goo and takes over the world. Like that's like very exponentially unlikely that that is the case. You can't guarantee it, right? You can't guarantee that there's not one weird hack. But from what we've seen so far, it's taken billions of years to get where we are now with life and intelligence as we know it. And it's been a very long sort of process of improvement to this point. But there won't be a sort of hyperbolic discontinuity in the rate in which things improve.
Starting point is 00:24:23 And in general, we take the opposite camp. Like this sort of rate of self-improvement of everything is actually really important. And it's what creates everything we enjoy, and we should seek to keep that process growing and scale it, right? Really, like, systems in nature either get busy growing or they get busy dying, right? And so they either secure more resources, secure more free energy and figure out clever ways to utilize it in order to grow, or they try to stagnate, they run out of fuel, and then they die. That is it.
Starting point is 00:24:59 Everything runs on some type of fuel. Nothing is forever, no bit of information is forever. To maintain its coherence, it costs energy, because everything wants to decohere naturally. Everything wants to sort of, you know, it's a constant fight against entropy. And so the thesis of e/acc is the one golden metric to some extent that measures the progress of the whole system is sort of free energy. How much energy are we acquiring and consuming as a civilization, because that's a metric of our progress that can't be gamed, right? And this will resonate with you guys, but if you measure it in US dollars, that's not, that's not an objective scale, right? That's a scale that you can play with. You can have inflation, you can print
Starting point is 00:25:45 money, and then it seems like you have some progress, but really you went the other, the wrong way. Whereas something objective like energy is a good metric to measure progress. And we think that scaling up civilization, and in doing so with urgency, is how we ensure its sort of long-term success in existence. And I think that one of the most dangerous things we could do is let this mindset of doom and demoralization become hyperstitious, right? If you focus on doom, you focus on darker futures, well, first of all, you stop building, you stop having children, you stop hoping for better things in the future, it sort of becomes a self-fulfilling prophecy, and that's how civilizations and empires die, and that will cause massive pain. And so we're on a mission to spread sort of optimism
Starting point is 00:26:39 that is hyperstitious in the sense, yes, we can do it, we can build better things, we could build a better future, we can leverage AI to build a better future we want, to cure diseases, to tackle climate change, to unlock nuclear fusion, nuclear energy, massive prosperity. We have all this upside on the table. And the longer we wait, the lower likelihood we can achieve it. And we have urgency to make it happen. And that's very similar a mindset you have in startups in Silicon Valley, right? The most successful startups are very optimistic. And they believe they can do something that seems like unimaginable at the beginning of the company. But then they do it. And it, it just keeps happening, right? And it's this sort of hyperstitious optimism effect. And that's the
Starting point is 00:27:25 sort of mentality we're trying to scale to the world with e/acc. And it's why we think that doomerism and pessimism is really dangerous and needs to be fought somewhat aggressively, right? Because that is actually the source of doom, not some sort of fictitious artificial superintelligence from sci-fi. It's not backed by science, right? You know, I've worked on AI for material generation, protein folding, biochemistry, chemistry, like AI for the physical world, it's much harder than people think. The physical world is really hard. And one of the best things at designing things in the physical world is life itself.
Starting point is 00:28:10 But you could see life itself as a big optimization algorithm that is trying to foom. Every life form is trying to foom. And yet nothing has foomed, right? So that should give you a bit more peace and you can breathe, but happy to go into any sort of argument, like any sticking points you have about Yud's argument. Yeah, just, Beff, really quick.
Starting point is 00:28:31 This one thing you've mentioned a couple of times is this word foom. Could you define that really quick for people? Yeah, I mean, you know, well, how do you understand foom, like from your interview with Yud? How would you explain it? Yeah.
Starting point is 00:28:42 I think you're using it in a very general sense, which I appreciate because it kind of gets down to the bare metal of kind of how life works. Like life is interested in life. Like life is interested in propagating. Foom in the AI sense is mostly talking about that superintelligence explosion where AI just takes over everything and it's all, it's just like this one single event and all of a sudden the whole world is AI and it's run by AIs for AIs. Trees want to plant more trees, you know, bugs want there to be more bugs. Ants want more ants. Humans want more humans.
Starting point is 00:29:13 And I think it's all trying to hit some sort of like point in a curve, which like once the ball starts rolling, it starts rolling faster and faster and faster. And all of a sudden we have like a population explosion. It doesn't matter what kind of life form you are, but like everything is trying to look for that like growth in population. That is something that's fundamental about life. And so you're using it in a general sense where it's just like, hey, any system whatsoever that's propagating is trying to find more energy because that's how it can propagate even more. And this is the playing field that just the universe exists on. I think that this is how you're using it. Every system and subsystem, whether it's a company, a group of people, a culture, like you said, bugs, trees, whatever, even nation states.
Starting point is 00:29:52 Everything is self-organizing, self-adapting its inner workings in order to grow. And by construction, things that are not optimizing themselves to grow run out of fuel, and they fade. And that's it, right? And so if you think about, if you think of like several nations in the future, which nations will have survived and what culture would they have, right? Well, they would probably have an e/acc culture that is literally by construction trying to figure out what is the optimal way to organize ourselves in order to grow. And the sort of pessimistic, doomeristic cultures will have faded because they'll have
Starting point is 00:30:31 destroyed themselves. This kind of reminds me, by the way, of an episode we did with Robin Hanson. Grabby aliens. About his theory for why we haven't seen aliens is basically that, for one, it's too early, but, like, we will see them. And the ones we will see are not the quiet aliens that stick to their home planet, not the decel aliens. We'll see the e/acc aliens, the grabby aliens that go and they consume their solar system, they consume their galaxy, and then they're a multi-galaxy type of, like, the grabby ones are the ones that are effectively going to foom and win. Yeah. Well, it's the same
Starting point is 00:31:03 with, you know, variants of a virus, right? It's the ones that have higher replicability that you get, statistically, right? You don't see like earlier variants that are lower fitness, right? And it's like every bit of information is getting selected for in terms of, does it confer the organism of which it's part an ability to grow, right? Like your genetics, it's like, does this piece of the genetic code give you higher fitness? Does it, you know, make you have more offspring? And then that piece of genetic code has higher likelihood in the future, right? But it's like applying that sort of evolutionary thinking to everything, including culture, including ways to organize your companies, including you can think of, you know, that sort of thinking even applied to
Starting point is 00:31:47 crypto, right, which sort of coins have the best sort of memetic fitness in the long term, right? Bitcoin maximalists are the Bitcoin foomers. They think Bitcoin is going to foom. Well, I don't know about that. What I do like about Bitcoin is that it is proof of work. And so it is anchored to, sort of, the physical and energy consumption. And, you know, I'm not necessarily a Bitcoin maximalist, but I am a sort of energy maximalist. I think that is the right metric to pin things with respect to.
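The selection dynamic described here, for virus variants, genes, memes, or coins, can be sketched as simple exponential replicator competition. A hedged toy model, with made-up growth rates rather than anything from the episode:

```python
# Toy replicator dynamics (illustrative assumption, not Beff's model):
# two variants replicate exponentially at different rates, and the one
# with the higher rate statistically takes over the population share.
import math

growth_rates = {"variant_a": 1.0, "variant_b": 1.2}  # assumed rates

for t in (0, 10, 20, 40):
    weights = {v: math.exp(r * t) for v, r in growth_rates.items()}
    total = sum(weights.values())
    shares = {v: round(w / total, 4) for v, w in weights.items()}
    print(f"t={t}: {shares}")  # variant_b's share tends toward 1.0
```

Run it and variant_b's share goes from 0.5 at t=0 to roughly 0.9997 by t=40; nothing forces the outcome except the differing replication rates.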
Starting point is 00:32:22 I want to provide the way that I see the structure of this conversation. Beff, you have, like, a very, like, basement, first-principles grounding in your arguments, and that is, like, what I would consider, like, you know, the very, the depth of where some of this belief structure comes from. And we've watched a lot of these conversations permeate throughout, like, Silicon Valley, and then also make its way into like Capitol Hill, where some of these like very first-principle arguments are inspiring or being argued against as like political stances about the direction for society at large. So it kind of goes back to just like we have some beliefs, we have a structure of thought, it can get very technical and granular in their base principle
Starting point is 00:32:56 arguments. But then as this conversation progresses, and Ryan's going to take us into the next phase here, it really can inspire like a political belief, like a direction, a proposed direction for humanity. And so this is, Ryan kind of illustrated this as like, you're inspiring like a movement, like a political movement of not necessarily a party, because that's too structured, but just like a set of beliefs for how we ought to live as humanity as a species. Is this how you see it? Yeah. I mean, hopefully, hopefully it does affect politics, right? It's like the ways we've been organizing ourselves, the ways we've been legislating have been pretty far from optimal. And we should think from a first principle standpoint, you know, how do we organize
Starting point is 00:33:35 sort of a hierarchy of cybernetic control in our civilization. And this is a very deep concept. So how do we, you know, in a sense, like, it's very similar to crypto, many layers of a protocol, right? You have, sort of, you chunk blocks and then you have different layers that check at different scales, right? And base layers have a slower clock rate and they roll up larger amounts of actions or transactions. It's very similar to having a hierarchy of sort of legislatures
Starting point is 00:34:01 at the local, you know, provincial or state level and the national level and then international level, right? As you go up the hierarchy, you should have a slower feedback loop and it should be lighter and lighter touch in terms of its ability to control things. And I, you know, the goal with e/acc is to start a discussion of like, how should we balance things? Because, you know, one side of the aisle wants everyone to be hands off. One side of the aisle wants like absolute total control in the hands of a few. And both of those things are not optimal. There's a spectrum in between, and we should search over that spectrum.
Starting point is 00:34:39 And it's very relevant to the centralization versus decentralization question in crypto for protocols. I think they're the same question, whether it's in politics versus crypto, like is a system that is like totally decentralized and greedy optimal versus one that is centralized versus one that is hierarchical? I think it's the latter. And maybe we can get into that. But hopefully that does inspire conversations on Capitol Hill and in D.C. about where to take policy and legislation. Yeah, I think it's really important. I think this, e/acc, do we accelerate or do we decelerate, is probably one of the most important conversations that society is having. And, like, yeah, to me, one question I was going to have for you, Beff, is like, is e/acc, is it a philosophy? Is it, like, a movement? Is this, like, a social revolution? Is it a religion of some sorts? Is it, like, this amorphous thing like crypto, where it's, like, a grab bag of all of the above? What's your take on this?
Starting point is 00:35:22 I think it's all the above to some extent, right? I mean, for me, it's, you know, it's almost like a religious, like, belief, just because, it's like, I can get into that. And for some, it is. For some, it's a, you know, it's a community of like-minded individuals that are optimistic about the future and trying to help each other and want to build to this better future and collaborate. And, you know, for some, it's sort of a political or ideological movement of how we should legislate things or how we should run things. And for some
Starting point is 00:36:09 it's just, I don't know, inspirational, gets them fired up to build more, right? So I very much want to get to, like, your take on why this is like a religion, how it sort of explains like big questions. Like, why are we here? What is the universe? This sort of thing. But before we do, I just got to tie off this Eliezer existential crisis type thing. So we talked about foom. We sort of defined it. You think it's very unlikely that AIs sort of like destroy humanity or accidentally or, you know, they literally go to war with us or something like that. But I want to maybe ask, like, so David said that it was difficult to follow the Eliezer conversation because it was technical. I actually think it was less that for me anyway, personally. It was more because the premise was
Starting point is 00:36:53 so simple. We've seen dangerous technologies in the past. Anybody, bankless listener, watched the Oppenheimer movie recently? The whole movie is about this chain of events where the scientific community discovers some dangerous technology, essentially, and, like, this chain of events that could lead to global Armageddon and the ending of humanity. And so now we're at the precipice of this new discovery. We're in, like, these times where, oh, my God, you can talk to a computer and it's passed the Turing test, and it sounds like a real person, and this is amazing. It generates art like we've never seen before. And we are all wondering whether this is one of those technologies that has the potential to destroy everything.
Starting point is 00:37:33 And like, it's very seductive, I think, for Eliezer to be like, yeah, imagine if, you know, we could split an atom kind of, like, everybody could do that in their microwave, let's say. Imagine that that kind of level of power and technology was given to every human being. We democratize it in that way. Well, what would happen? It'd probably be the end of life as we know it. And so, like, I still have this base question, Beff, of whether AI is, like, dangerous or not, is it similar to nuclear weapons? Is it similar to biological pandemics that somebody could
Starting point is 00:38:06 cook up in their basement? Or is it different somehow? Because I feel like a lot of this conversation hinges upon that question. I mean, you are advocating in e/acc for growth, right, versus degrowth. And I sort of understand that everybody wants growth over deceleration is my thing. But I think another framing that people like Eliezer might have is it's growth versus safety. Like, Beff, you're going all gas, no brakes here. Like, we're worried. What if this technology is dangerous? What's your take on this?
Starting point is 00:38:38 we'd have quite a bit more energy being produced on the grid, and that would be net positive. I think, like, you know, runaway effects of, like, if you have a chain reaction that becomes a bomb, of course, that's bad, but obviously it seems to be like, you know, creating a nuclear weapon is very involved, and not everybody has access to it. And the thing with nuclear weapons is it's pure downside, right? Like the only utility to nuclear weapons is like, you know, damaging your enemy and it's like a deterrent, right? Whereas AI, there's huge, huge, huge, huge upside to creating AI and leveraging AI and everything we do. It gives us intellectual and operational leverage.
Starting point is 00:39:17 It helps, you know, create economic value. It helps solve all our problems. It helps us live better lives. It's going to save lives, right? In medicine, right? It's going to make things cheaper. It's going to help us build cheap housing. It's going to help us, you know, save on legal bills that are, you know, ballooning, right?
Starting point is 00:39:34 You're going to have LLM lawyers. You're going to have LLM doctors. Everybody on the planet's going to have access to some of the best doctors ever. You're going to have personalized medicine, right? Like, the list goes on. Like a lot of our problems that are pain points in our modern society will be solved with cheaper intelligence. And there is an urgency to make that happen, similar to how, let's say, fear mongering about nuclear weapons actually kind of killed the
Starting point is 00:40:00 nuclear fission energy industry, right? Now it's overregulated to oblivion. There's so much red tape like, you know, most of your budget building a nuclear reactor goes towards compliance. And that's suppressed the advent and ubiquity of this technology. And it's caused us to have to go to all sorts of wars all over the place for energy. Right. So this mind virus of anti-nuclear was actually detrimental, and we're kind of like waltzing into a similar scenario with AI. If we over-regulate it in the womb, we're not going to see the massive upsides to it. I think that what's dangerous with AI, and we're very transparent about this, you know, information is power. Intelligence allows you to extract more utility out of less
Starting point is 00:40:47 information, and so it confers you power, right? And if you have a big delta, big difference between the capabilities of centralized entities and that of the people in terms of AI, then you have the opportunity for sort of AI-assisted tyranny. And to us, that's the highest existential risk, because there's a very strong prior that if you give all the power to a centralized party, it becomes corrupt, and then it abuses that power and oppresses people and causes mass suffering. And to us, like putting AI, which yes, whoever has advanced AI in their hands will become formidable. But if everybody is, you know, not equally formidable, but, you know, in a similar range, then no one party is going to have too much power over the other
Starting point is 00:41:36 and going to completely dominate the other, right? It's just sort of adversarial equilibrium. And that is sort of what we're advocating for with e/acc is sort of freedom of access to compute, freedom of access to AI, right? We don't want these centralized entities, just like a central bank, like inflating away your money, to control access to advanced AI. We want it in the hands of many because otherwise, you know, the centralized parties are going to abuse their power. And so to me, I think that, you know, yes, AI is potent. If you only focus on tail risks, you can convince yourself to kill anything in the womb.
Starting point is 00:42:14 But there's also massive upside that shouldn't be discounted. And there's not enough. We don't talk enough about the massive upside we're leaving on the table. And the reality is that most likely what will happen is that some nations or some cultures will embrace AI, some will want to ban AI. Those that ban AI are going to be left behind and massively disadvantaged. And those that embrace it are going to be economically prosperous, will outgrow those that have banned it. And so you want to be in the fork, right, just similar to the crypto fork, you want to be in the cultural fork that embraces
Starting point is 00:42:52 AI and integrates it into their lives, into society, and leverages it to become formidable and grow, right? And there are ways forward to, you know, leverage AI and merge with it. I mean, we're already, you know, we're using like smart watches and smartphones and now Apple Vision Pro, you know, is going viral. We're already augmenting ourselves. The notion of human, you know, of a human individual is getting sort of diluted or eroded. We already leverage exogenous sources of intelligence, right? We use Google search, Perplexity, whatnot, all day, every day. And I think that just focusing on like, oh, no, humans will be left behind. It's like, no, but like the notion of human will drift to encompass a human and maybe their fleet of AIs that do their bidding.
Starting point is 00:43:39 And then you become this sort of formidable being. And that's the sort of awesome future we want to have where you're quasi-immortal. You have like, you know, AI customizing your biology, and you have sort of AI augmentations in every way possible. And I think that sort of optimistic future, we've got to paint the picture of that future rather than painting the picture of the doomer future where like AI takes over,
Starting point is 00:44:04 which again, I don't know how it would do that if there's other powerful AIs keeping it in check from parties that aren't necessarily aligned with that party. And again, I don't believe in sort of fast takeoff in AI. I think it's not well-founded. And so, like, if you
Starting point is 00:44:32 don't necessarily think that, you know, the likelihood of this fully existential doom is high from, like, just pure ASI, then the biggest threat vector becomes sort of authoritarianism, sort of censorship, right? If you only have AI in the hands of a few, and they use it for information control and cultural control, then they can basically psyop you and control you, right? And that is very dangerous, because if only a few parties that are, you know, a government-backed cartel control all LLMs and they become our sources of truth, then they control you, and they can, like, use that to consolidate their control and consolidate their power. And now we live in a sort of dark age where we don't have open access to knowledge, we don't have open access to compute, and we can't fight this sort of top-down control with AI, with our own AI that would help us filter and counter sort of the psychological manipulation. And so that's the dark future
Starting point is 00:45:19 we're trying to avoid. So getting AI in the hands of many, into every org, and accelerating makes it very hard to, sort of... acceleration is a hedge against sort of top-down control and maybe a centralized party taking over, killing variants, killing, like, you can only, you know, use AI this way, and taking control of the whole technology, because that, that would be the bad future we're trying to avoid. Anyway, so we kind of have our own existential risks, right? Like we think, like, authoritarianism and centralized control is sort of, that's the bad scenario. It's based on like data from history, right? There's a lot of bad things that have happened historically when it happens versus sort of
Starting point is 00:46:01 sci-fi-based priors of like, I read too many books on The Terminator, I read too many 90s sci-fi books about nanotechnology, and, you know, I think like AI is going to figure out how to, you know, turn us into gray goo, which is not plausible. I mean, you know, yeah. So, Beff, you basically think like P(doom) is kind of like a failed kind of calculation. Yeah. What we should be doing is P(utopia), right? And then also, like, the risk, on the risk side, it's like probability of totalitarianism.
Starting point is 00:46:29 So, like, let's do that calculation, too. Yeah, P(1984), right? P(1984). Yeah. Okay, so I think we're starting to understand the contours of e/acc and, like, where you're coming from. So it's pro-growth. It's definitely techno-optimism, right? I certainly love the picture you were painting about kind of like,
Starting point is 00:46:46 the possibilities here. It's much more cheery, optimistic than sort of the AI doomer takes that I've heard so far. I want to get back to kind of the ways e/acc is like a religion to you. And I don't want to interpret too much into that. I'm sure there are many ways that it's not like religion. But talk about kind of the basement level, because I know you are, you know, have been a physicist and, you know, throughout childhood, I've heard other conversations with you. You've been always in search for like, what's the basement level of this whole experience that we're having right now in the universe and like, why does all of this matter? What conclusions have you come to? And how does that relate to e/acc? Maybe how does e/acc even explain some of these things?
Starting point is 00:47:26 Yeah, I mean, it's been a journey throughout life to try to understand our place in the universe, right, that led me to theoretical physics, trying to understand the very small, the quantum mechanical and the very big, the cosmos. So I was a quantum cosmologist at some point. I also worked on sort of black hole physics, which are kind of a different edge of the universe, right? And there I was trying to understand, you know, where did we come from, why are we here by looking at sort of the very big and the very small, and I kind of missed the middle part, right? And in a sense, after some time, I realized that a lot of the beauty and complexity and good things in the world came from sort of emergence and self-organization. Basically, instead of looking at quantum
Starting point is 00:48:10 physics or what is called general relativity, so gravity, which are mostly the physics of the very small and the very big, I needed to look at thermodynamics. The physics of the middle, of out-of-equilibrium thermodynamics, the physics of life, the physics of self-organizing complex systems, because that's what created us. That's what created a lot of civilization, technologies we see today, and that's what induces this sort of selective pressure on the space of everything. And that sort of principle is very deep. And so, you know, having gone through theoretical physics and eventually discovered that
Starting point is 00:48:43 sort of complexity and self-organization is where a lot of the answers are. That's what led me to sort of look at e/acc as sort of like, hey, like thermodynamics helped create, well, it explains why we have life at all, right? How do we get here? It explains evolution. It's upstream of evolution. It's upstream of, you know, market pressures, selective pressures. It's kind of like this generalized law that creates everything we know, right, that is relevant to us in our day-to-day life, right? Like, what is Andromeda gonna do in like two billion years? Not that relevant day to day. Right? What is, you know, so I think like to me, e/acc is almost a religion because it's like, okay, we are part of this process, the self-adaptive process of, you know, life, memes, information, technology, civilization. And this whole system seeks to grow. We are a very special phase of matter that is very unlikely in our universe. And we have a sort of responsibility to cherish it
Starting point is 00:49:42 and allow it to grow in order to be robust to sort of erasure or fluctuations, right, like an asteroid, right? And so, you know, it's very Elonian in the sense of like, you know, it has very high inner product with the quest to increase the scope and scale of civilization and consciousness. But to me... Is it Elonian? Is that, like, derived from Elon Musk? Just, Elon, yeah, yeah, that's right. I don't know. I just made that up. But the point is we're very aligned with that mission. And it like gives you
Starting point is 00:50:17 a crazy amount of purpose day to day. Like, what is the point of a religion? A religion is like, for a lot of people, practically, it is a cultural heuristic that, if it has stuck around for a long time, it has been post-selected for in terms of conferring its adherents, like, a sort of advantage, right? Because by construction, it has grown and has propagated this far, this long, right? And so it's like ways to live your life. It's like prescriptions, right? To me, they're like cultural parameters, similar to how, like, you have in a neural net that you spend a lot of time training and finding the optimal parameters and then you run inference. You just run at the optimal parameters. To me, religions are like pre-trained models of how to live your life, right? They're like prescribed parameters and then you can just run inference, right?
Starting point is 00:50:58 So you think EAC kind of, like, prescribes how to live your life. Actually, it gives you some meaning. Basically, the meaning is growth. The meaning is propagation. The meaning is, like, pushing humanity to the stars, to the frontier. These are some of the meanings you derive from this. So to me, EAC is more like the meta-learner or the optimizer, right? We're not prescribing any one particular way to live your life. It's like, any sort of prescription that you may posit, what we're saying is it's going to be post-selected for according to this sort of fitness function. And so you should select for subcultures that give you an advantage in this growth, right, if you want to be part of the future. But really, it's like, it's a search process
Starting point is 00:51:41 for what yields higher growth. And it's always, always changing. It's not one way to live your life right now. There's not one set of optimal policy parameters that you can just run forever. It's like, let's collectively figure out heuristics of how to organize ourselves and how to live our lives, how to run the world, such that we grow sustainably, fault-tolerantly, towards, you know, yes, a galactic civilization, right? Beff, one set of dots that may not have been connected for folks. I sort of connected them a little bit as you were speaking, because I've heard you speak in the past, I've read many of your articles where you've written things like this, and I'm, you know, on your Twitter timeline. But how does thermodynamics explain life? Why is that an underlying process in the universe that is relevant to what we're talking about, like pro-AI growth? Yeah, yeah. No, that's a really deep question. And to me, it's like, I mean, I've made quite a career pivot, right? I went from being someone in quantum computing, so trying to harness the quantum mechanics of the world to make computing devices, and make computing devices that help us understand the quantum
Starting point is 00:52:48 mechanical world around us. That is what I did in quantum machine learning, right? And now, you know, after this sort of realization that, you know, the most interesting part is in the middle, it's in thermodynamics, now I'm doing thermodynamic computing, where I'm harnessing out-of-equilibrium thermodynamics to engineer self-organizing devices that are kind of machine learning as a thermodynamic physical process. But once you understand these principles enough to engineer systems that enact them, you start seeing that pattern everywhere. And sort of, you know, I wrote the EAC physics manifesto at the same time as I founded my company. So for me, it's just because, like, my mind is swimming in it. But yeah, why did thermodynamics give rise to life? I'm an adherent of a theory by
Starting point is 00:53:34 Jeremy England, former MIT professor. If you want to read his book, Every Life is on Fire, and his lectures are great on this. But essentially what you can show, it's just from the laws of probability, really, and it's a generalized notion
Starting point is 00:53:48 of the second law of thermodynamics, where it shows that paths, right, like if you think of like your trajectory across time, right, of like you have a certain system, you have states over time, and you have a trajectory, trajectories that have dissipated
Starting point is 00:54:04 more heat along the trajectory are exponentially more likely from the laws of physics. That's pretty nuts, right? And so, you know, there's theorems. They have complicated names, but, you know, check out the work by Jeremy England on thermodynamic dissipative adaptation. Basically, it posits that systems in nature, or really any system in the physical world, right, will self-adapt in order to maximize this dissipation of energy. So that includes figuring out how to capture more free energy to dissipate it. So lifelike systems are always adapting.
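For readers who want the equation behind "exponentially more likely": England's argument starts from Crooks' microscopic reversibility relation, which, stated loosely, weights a forward trajectory against its time-reverse by the heat it dumps into the surrounding bath:

```latex
% A sketch of the relation being gestured at, not England's full result:
% a coarse transition from macrostate I to II beats its reverse by a factor
% exponential in the heat dissipated along the way.
\frac{\pi(\mathrm{I} \to \mathrm{II})}{\pi(\mathrm{II} \to \mathrm{I})}
  \sim \exp\!\left(\beta \, \Delta Q_{\mathrm{I} \to \mathrm{II}}\right),
\qquad \beta = \frac{1}{k_B T}
```

More dissipation means an exponentially stronger bias toward that direction of change, which is the sense in which dissipative, lifelike structure is "more likely."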
Starting point is 00:54:44 How can I secure more fuel? How can I use it more efficiently so that I could budget my fuel to get more fuel and keep growing? And, you know, as a byproduct, you get evolution, because systems that have adapted their genetics in order to replicate more... well, if you replicate more, you burn more fuel as a whole system, right? And so you're more likely. So human complexity exists to burn
Starting point is 00:55:04 fuel faster for the universe to increase entropy. The whole, all of civilization, is a fancy fire, right? But we're a very clever fire. We're like an energy-seeking fire, right? But yeah, that is what gave rise to us, period, right? It's everyone's favorite season in crypto, tax season. And crypto tax is always an absolute headache, especially for all you degens out there. But it doesn't have to be a nightmare.
Starting point is 00:55:28 That's where Crypto Tax Calculator comes in. The software built for degens by degens. Coinbase's official global tax partner, Crypto Tax Calculator focuses on making complex transactions into easy ones, supporting over 300,000 currencies across Ethereum, Arbitrum, Optimism, as well as 1,000 other integrations. It's as simple as connecting your wallet, pulling in all your transactions, and following the automated suggestions to quickly and accurately calculate your tax obligations. Plus, for all the airdrop farmers out there, Crypto Tax Calculator has your back, as they are consistently adding support for new and upcoming layer 1s, layer 2s,
Starting point is 00:56:00 and all the airdrops that you're currently farming. 2024 is the year when the degens do their crypto taxes with speed and confidence. Make taxes this year easy and affordable with Crypto Tax Calculator. Sign up at cryptotaxcalculator.io and get a 30% discount with code BANK30. Click the link in the show notes for more information. Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world, driving real-world use cases like mobile payments and mobile DeFi. And with Opera MiniPay as one of the fastest growing Web3 wallets,
Starting point is 00:56:29 Celo is seeing a meteoric rise with over 300 million transactions and 1.5 million monthly active addresses. And now Celo is looking to come home to Ethereum as a layer two. Optimism, Polygon, Matter Labs, and Arbitrum have all thrown their hats in the ring for the Celo Layer 2 to build upon their stacks. Why the competition? The Celo Layer 2 will bring huge advantages like a decentralized sequencer, off-chain data availability secured by Ethereum validators, and one-block finality. What does that all mean for you? With the Celo Layer 2, gas fees will stay low, and you can even pay for gas natively using ERC-20 tokens,
Starting point is 00:57:02 sending crypto to phone numbers across wallets using SocialConnect. But Celo is a community-governed protocol. This means that Celo needs you to weigh in and make your voice heard. Join the conversation in the Celo forums. Follow Celo on Twitter and visit celo.org to shape the future of Ethereum. Mantle, formerly known as BitDAO, is the first DAO-led Web3 ecosystem, all built on top of Mantle's first core product, the Mantle Network, a brand new high-performance Ethereum Layer 2,
Starting point is 00:57:27 built using the OP Stack, but using EigenLayer's data availability solution instead of the expensive Ethereum layer 1. Not only does this reduce Mantle Network's gas fees by 80%, but it also reduces gas fee volatility, providing a more stable foundation for Mantle's applications. The Mantle treasury is one of the biggest DAO-owned treasuries, which is seeding an ecosystem of projects from all around the Web3 space for Mantle. Mantle already has sub-communities from around Web3 onboarded, like Game7 for Web3 gaming, and Bybit for TVL and liquidity and on-ramps.
Starting point is 00:57:56 So if you want to build on the Mantle Network, Mantle is offering a grants program that provides milestone-based funding to promising projects that help expand, secure, and decentralize Mantle. If you want to get started working with the first DAO-led Layer 2 ecosystem, check out Mantle at mantle. And follow them on Twitter at 0xMantle. Is it correct to define life, if this is, like, the idea, that life is whatever system finds a way to dissipate more heat? And so part of that definition, like you said, it includes collecting energy.
Starting point is 00:58:30 So, like, reversing entropy first, like creating order, creating systems, creating structure, in order to produce more heat exhaust. And that is what life is. That is what all thermodynamic systems are. But life is like a very special version of that. People are still... An emergent property out of that, yeah.
Starting point is 00:58:48 I think people are trying to, well, you know, is a virus life, right? You know. It's on the spectrum. It's getting there. Is an AI that, you know, is always adapting its architecture to be of economic utility to us... and if it's of high economic utility, then we run it on GPUs, we give it a budget, and we burn heat to keep it alive, right? Is it alive?
Starting point is 00:59:10 Kind of, right? So I think, like, we can't even... it's hard to define intelligence formally. It's hard to define life formally. And so I'm just like, what's the base layer? It's just thermodynamics and probability and information, right? I used to work in theoretical physics, where we were trying to understand the theory of everything, everything in the universe, through the lens of information theory.
Starting point is 00:59:33 It was the it-from-bit or it-from-qubit school of thought, right? It's seeing the whole universe as a big computer. And so, sort of, I see a lot of civilization as a big thermodynamic computer, in a sense. So the universe is kind of like cheering humanity on, and the complexity of life on, and particularly, I guess, the civilization that we've built, because, like, we've become really, really good at, like, burning energy. Basically, yeah. Yes, like the universe is on our side
Starting point is 00:59:58 and we've been doing so good so far, we've got to keep going, right? And feeling that like, wait, the universe is on our side, the game is rigged in our favor. That's such a potent realization. And that gives you, like, really powerful optimism about the future.
Starting point is 01:00:16 It's just like actually awesome futures are exponentially more likely. And dark futures are just small fluctuations on the road to these much higher likelihood large-scale futures. And so, yeah, I mean, that's why I'm an optimist. I just looked at the equations and saw, actually, these equations explain a lot of our world and a lot of where we came from, and you can apply them to the system at a large scale. So, yeah, your school of thought, Beff, is all about, like, burning more energy, harnessing more energy faster, basically. That's why there's this, like, I've not fully understood it before,
Starting point is 01:00:49 but there's this kind of desire to rise up the Kardashev scale, right? And to become like a type two civilization and then maybe a type three civilization, right? My understanding is the Kardashev scale is like: a type one civilization marshals all of the resources of its home planet, right? So that would be everything on Earth, humanity is able to marshal that energy potential. And then type two is its local star, so we'd marshal the power of the sun, like a Dyson sphere, and then type three is kind of the galaxy. And you're all about rising up and, like, harnessing
Starting point is 01:01:19 more energy for productive purposes. That's kind of like your utility function. That's what EAC is trying to maximize, in a way. Yes. And to be clear, it's not like we just want to burn energy. The point is, it's the integral or the sum of how much energy we're burning on a near-infinite time horizon, right? So if we figure out clever ways to utilize energy such that we can grow further, that's better, right? Because some people don't necessarily understand it, and they're like, oh, what if you just blew up the whole planet? That's a lot of energy, you know, gone. It's like, yeah, but then we're not going to, you know, burn the energy of other planets, because we're going to be dead, right? You don't want that, right? So we're pro-life in, like, the most fundamental way possible.
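Sagan's interpolation formula turns those discrete Kardashev types into a continuous score; a quick sketch (the roughly 2e13 W figure for current world power use is an approximation):

```python
import math

def kardashev(power_watts: float) -> float:
    # Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts.
    # Type I ~ 1e16 W (planet), Type II ~ 1e26 W (star), Type III ~ 1e36 W (galaxy).
    return (math.log10(power_watts) - 6) / 10

print(kardashev(2e13))  # humanity today: ~0.73
print(kardashev(1e16))  # Type I:   1.0
print(kardashev(1e26))  # Type II:  2.0, Dyson-sphere scale
print(kardashev(1e36))  # Type III: 3.0
```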
Starting point is 01:02:01 fundamental way possible and we think it's like a very precious state of matter it's like we're a very precious form of fire and we have a duty to sort of let it grow in scope and scale you're just very pro long-term energy burning or not burning i shouldn't say harnessing and so that means you would naturally be pro-life because if you kill the system that is growing to harness more and more energy than basically it's all gone. And like I want to contrast this
Starting point is 01:02:29 with what I think is like a different form of utility maximization function because like the measure of this, how much energy are we burning is kind of like I would say new and different maybe to some listeners because like a lot of philosophies or schools of thought are much more along the curve of utility maximization of human happiness or like human meaning or like, you know, hedons, right?
Starting point is 01:02:52 Hedonism, basically. And that's not what you're saying, right? Like, do you care at all about maximizing human happiness along the way? Or is it just all about the energy consumption? Because there's an element, Beth, where I hear if it's just about energy, it almost sounds like a bit, like machine-like. It almost sounds a bit, like, extractive, I think, to people. But are people happy?
Starting point is 01:03:16 Are we happy along the way? Can you talk about that? Yeah. So, you know, it's similar to, like, crypto. It's like, how do you measure market cap, according to what currency, right? If we're saying this is a value system, what are we pinned to, right? What is our metric, right? In a sense, hedons are weird because you can just kind of, like... first of all, it's very subjective, right?
Starting point is 01:03:37 Your neurochemistry is very subjective. It can be hacked by drugs or wireheading, right? You can scroll TikTok all day or watch whatever you're going to watch on that Apple Vision Pro and just sit back and drool and that's not a it's not a long-term optimum, right? So it has these spurious optima. And you can also just print more units, more cheap units and like game the system. And so, you know, these EAs get caught and thought experiments like, maybe we should just have a bunch of shrimp and they're very happy.
Starting point is 01:04:05 And then we just have tons of shrimp and we maximize their happiness. We've optimized our metric, right? It's like, no, that's not, it's not going to yield much. So it's like, to me, it's been seeking the sort of, what's the ultimate? metric, you know, I think happiness is actually a proxy or an estimator of this sort of gradient of like how much influence do I have on future free energetic sort of expenditure, right? Like if you have a intellectual legacy, if you have, you know, positive impact on the world, you feel good about it, you feel happy. If you're in a meaningful relationship, your brain has
Starting point is 01:04:44 neurochemistry that's like approximating that you're going to have successful progeny and you're going to influence you know the future light cone and they're going to have progeny and they're going to be successful and impact the world and consume more free energy i mean all our neurochemistry and our biology has been selected for according to this principle of whatever genetic sequence whatever heuristic we have that will help us burn more free energy so it's like it's the supply chain on happiness you see it as a much stronger kind of like metric i guess and it much more ungameable metric versus like something like heatons let's say and so to you it would be a complete failure of our species of this whole human life experiment if we essentially work the quiet aliens
Starting point is 01:05:27 we reached some level of technology where we're all just kind of like we're happy with things we just strap into our VR and we just kind of like live just a happy life in a you know I don't know a Wally life in a chair somewhere maybe we're floating across the universe to you that's like total objective failure by all of your measures, even though the heat-ons are maybe off the charts, because heat-ons are fiat. I'd love to establish those connections with crypto, but yeah, I think at some point, you know, like our happiness, like we've kind of entertained ourselves to death with this generation of technologies, right, the past 10, 12 years, I don't know. Now we're just obsessed with our happiness or neurochemistry, and we've over-optimized that, but we don't
Starting point is 01:06:09 feel good. We don't feel deeply happy, right? And whereas if you're working towards something meaningful that means you're going to leave a legacy and you're working super hard you have some stress hormones flowing through you but you have this sort of deep happiness right like the satisfaction it's very different it's not hedonism it's like it's not happiness like eudamonia or there's different words for this it reminds me bet for something that mark anderson said when he came on the podcast we're asking for advice and one of his like advice in general he's like don't seek happiness seek satisfaction yes and you'll get that that by finding meaning your life. You're saying that is a much more durable metric than just
Starting point is 01:06:48 heat on's dopamine. I think our brains like, how are you feeling satisfied your future? If you have a massive positive impact, if you feel like the things I've done with my life will really increase the likelihood that we reach a larger scale of civilization, like you feel deeply good about that. And I think having everyone sort of think, evaluate their lives and think, what is the maximal positive impact I can have towards this? And then they go out and do it and we support each other doing it. I think that's the most positive community we can have. And that is the EACC, right?
Starting point is 01:07:26 That's the EAC community. And that's our sort of thinking. And that increases life satisfaction massively. And it makes people much more productive towards helping everyone around them, right? And so people, mental. do a sort of sensitivity analysis, like what, you know, for me, it was like, what technology can I create that's going to have the maximal positive impact on future scope and scale civilization? I tried to work on faster than light travel, gave up on that after some theoretical
Starting point is 01:07:54 physics. Now I'm working on really energy efficient AI compute because I think that AI is the technology that would create other technologies and will help everything and help us accelerate up this Kardashev scale. And that brings like deep meaning. that helps me plow through any sort of challenges in my way. I have like an irrational level of optimism, but that's the thing, right? You have to give up this sort of just pure reason, pure logic, stiff axioms. You just got to like, this feels right. This feels like it's going towards a better future. And, you know, it's like a hard to verbalize intuitive estimator of that. And I want to live my life that way. And that's great. Everybody's going to have their own thesis of how to,
Starting point is 01:08:39 live their lives and how to have a positive impact on the future, as long as we agree that we're all trying to optimize the same thing, this foundational metric of like how much life, how much life generalized notion of life is there, how much fire of life is there in our corner of the universe. And if we can all seek to grow it, then we're going to make it happen and we can do it. And so we should. Beth, can I throw an argument your way that I'm sure you've contended with? I'll call this the Elizabeth Warren style of arguments. I think the general gist of your vibe, acceleration, is I've kind of taken the perspective of humanity
Starting point is 01:09:15 is like a line of humans marching towards the future. And you have the front of the line, the phalanx, like the innovators, the accelerationists, the entrepreneurs, the inventors, the scientists, pushing the frontier forward. Like also the billionaires, right? Like literally Jeff Bezos, for example, Elon Musk, getting us to Mars, putting Wi-Fi all over the world. And those are the people that are like pushing the fold. There's people increasing the quality of life for people. There are people doing the acceleration things, moving society forward in an accelerating rate. And then you have the last 50% of humanity who's living on like $6.55 a day on average, like below the United States poverty line. And so what you're doing, what you're advocating for is you're saying, hey, the front of humanity, go faster. Press on the gas. Like keep going. Break away. And then like the Elizabeth Warren's is like, yo, you guys are just forgetting. about like the rest of the humans who are not able to keep up with you. And that's not fair because look at all these returns on capital. Literal actual Jeff Bezos has a bajillion dollars. He doesn't
Starting point is 01:10:14 need that much. Leland Musk has a bisoning dollars. Doesn't need that much. And you guys are just not thinking about the bottom half of society who can't keep up with you. And they're going to forever be succumbed to like never being able to catch up with the people who are accelerating. How would you respond to this perspective? Yeah. So first of all, you know, like needing is a weird concept. really like capital is a tool to exchange value, right? And it's a way to keep track of how much value you're producing. And ultimately, capital allocators that are really efficient at allocating capital towards utility in our system will get more capital to allocate. Right. And that is a sort of, to some extent, it's like an AI algorithm, right? Like companies that are of high utility,
Starting point is 01:11:03 in the system get more capital to keep doing what they're doing and scale it up. Companies that do business with each other very often, they're going to deepen their partnership, they're going to have more economic exchange. It's almost like neurons in a brain, right? Like there's sort of connections that strengthen the more they get used and there's nodes that, you know, there's cells, if you think of companies like neural cells, like they get bigger, have more nutrients if they're of high utility. And that's very important that we have this, right? Like if every neuron, had as many connections as the other and as many nutrients, then your brain wouldn't work, right? I think it's hard for people to understand that entrepreneurs are really just biological neural
Starting point is 01:11:44 control systems for better organisms that are corporations, right? You're just a control system. And ownership needs to happen because it aligns your inner, greedy reward function with that of the company. It's just alignment, right? You become one with the company. If you are part of the company, you own part of the company, you're one, you're aligned, right? And that's very important. And that's, like, why we've had amazing tech companies, like these founder-led companies, the founders feel like they own a piece of this company, this company's part of them. And so they make the decisions that maximize its growth. And their wealth is just a byproduct of this sort of ownership and alignment mechanism. But really, it allows them to have massive,
Starting point is 01:12:31 positive impact on the world. I think Elon is the most efficient, the most EAC capital allocator out there up there with Jeff because they're much more efficient at using money towards certain goals than say the government, right? The government does launches that cost $2 billion. And Elon, I don't know, it's like tens of millions of dollars for the same launch or something like that or it's going to get down to $10 million. It's insane. You're talking about a rocket launch to space. Yeah, for example. sample rocket launch or like having massive global internet people have been talking about it for a long time but he made it happen right now electrifying the world you have had all these governments spend billions and billions and billions of dollars you know going these conferences having these accords and larping and then this guy just comes in and just text the crap out of it uses the techno capital machine and acceleration to actually just solve that problem right and has yielded electrification and so like i think you know everyone should have enough capital and have equal access to opportunity to accelerate and be part of the acceleration. But part of it
Starting point is 01:13:37 is we need to educate the world about this is the machine you live in. You are a cell in this machine. You can align yourself to it. You can figure out ways to provide value and you will be rewarded. If people understood that were allowed the agency to participate in our capitalist system in a, you know, instead of as just a worker that's prescribed tasks, but more as like a capital allocator, that sort of thinking, they would go much further. But of course, we don't want that. Because we don't have artificial intelligence, we need, like, cheap docile workers that don't understand, you know, their value and they're just going to execute on orders, right? This would be a shared crypto value. I would say it was just like one thing that attracted
Starting point is 01:14:21 both David and myself to crypto is it sort of makes everyone a capital allocator. It democratizes is the ownership of things. I think some people get stuck on that capitalist thing, particularly at this time and place. But it's pretty core to the EAC movement, right? Like what you keep saying is the techno-capitalist machine, basically. And you see capitalism as just a fantastic resource allocation algorithm, essentially, and like the best one that we've invented,
Starting point is 01:14:48 and there's not really great alternatives. I mean, like, what do you do with a class of problems that we've called before, like, Moloch traps, basically? like, you know, prisoner dilemma types of problems, the problem of overfishing, let's say, or arms races, or, you know, like, growth that causes negative externalities, let's say, like issues with the environment, right, that kind of thing. I mean, there is another way of thinking about this where if we say there's growth and there's degrowth and growth equals good and degrowth equals bad, right? But, like, cancer is a growth too? And, like, is there the case that we could have growth that
Starting point is 01:15:24 actually causes harm and negative externalities to the system. I mean, can we go overboard on growth if we just focus on this energy output, you know, maximization? I think on a long time scale, right, like a cancer is suboptimal, right? It kills the host being or it reduces its likelihood of burning energy in the future so it gets post-selected against, right? Like, I mean, there's an evolutionary selective pressure against getting cancer, you know, within the first couple years of life, right? And so similarly like if at the broader organism level right we have a sort of selective pressure on the space of actions and companies and products that steer them towards things that have high utility on a long time scale then you know these sort of short-term problems where we screw up and then
Starting point is 01:16:13 we correct things like those things are like small setbacks in the grand scheme of things and it's not worth throwing out the whole system to avoid those setbacks. right, where really the main problem comes back to this concept of sort of hierarchical cybernetic control, right? Like if you have pure free markets, you have a fully decentralized system, right? Like everything is greedy in space and time. It's optimizing for its own profits, its own growth, right? And here you're talking about sort of delocalized problems, right?
Starting point is 01:16:47 Problems that are correlated across actors so a greedy algorithm doesn't do well. then sure like having some policy can make sense in some cases right but the thing is like how much you know it's kind of like a a parity check mechanism or a roll up right or a higher layer of the protocol right but you know how much power do you give those legislators how much do you trust them right everything has to be trustless because everything is optimizing for its own interests including including the governors, the coordinators. Yeah, and this happens in crypto as well. If you have checkers or, I don't know, you have different layers of different protocols,
Starting point is 01:17:27 it's like, okay, how do we trust that they're going to check things well and not try to skew things towards their own advantage? And you've got to design protocols this way, right? And I think that giving all the power to politicians is, you know, without any checks and balances is really bad and opens up the door to really bad legislation. But to me, I think the system, the techno-califers, techno capital machine, really, it's a computer program that runs on, like, it's compiled by the law, right? It's like law is like the compiler. I mean, we've seen what not thinking about your
Starting point is 01:18:00 legal stack does to someone with Elon's current predicament, right? Right. But at the end, it's built on the legal stack. And there is going to be sort of this adaptive algorithm over the space of laws. But I think that that, especially laws that affect a lot of people, like global AI policy, should be much lighter touch and should be adjusted on a longer time scale. And every law should have like sort of natural sunset mechanism. Otherwise you get a sort of second law of bureaucratic complexity and things get decelerated and calcified. And that's really bad, right?
Starting point is 01:18:35 It's like having too many processes in like an old organization makes it move very slow and makes it disruptable by startups that move faster. Right. And so it's like, how do we have this careful balance between, sort of having processes and legislation that, you know, ensure that these sort of non-local faults that, you know, escape from greedy optimization are addressed. But how do we balance that sort of top-down control and sort of decentralized search and have the benefits of both, have the benefits of constraints and entropy of, you know, exploration and restriction? And I think that overall we were
Starting point is 01:19:15 heading straight towards, just let's restrict everything, let's panic, hit the panic button. We don't know what the future holds. This technology is very potent. Let's just hit the panic button, freeze everything until we understand what's going on. That would be shooting ourselves in the foot. I think it's always a careful balance. And IAC was sort of like, let's push more on the explorer and accelerate side. But, you know, the reality is that, you know, the optimal thing is somewhere in the middle, right? But the current establishment and the current institutions are not up to the task of legislating things carefully in a way that's not, like, in their own personal interest in a way that's somewhat corrupt. And that's why we're, let's say,
Starting point is 01:19:54 pushing back on proposals for AI legislation right now that we think serve certain incumbents more than they will serve the people, right? How would you like to see AI regulated, if at all? Well, so right now there's a proposal in the executive order where they want to propose compute caps, which first of all will really hurt AI progress and could also, you know, impact drug discovery, material science, other types of AI that use way more compute than LMs. So that could be really nefarious. Also, we don't know what kind of models we're going to need in the future. They might need like 10x, 100x more compute for a good performance and reliability or 1000x. And so capping things today will backfire, certainly.
Starting point is 01:20:36 What's the reason for these types of caps? Is it they're worried about AI safety or is this like national security from other nation states? No, it's just like, oh, how much just GPT4 use or let's cap it at that so that, you know, nobody that's, you know, above that level can compete with the incumbents. I think that's the latent agenda, but, you know, they'll waive their one and say it's for national security or something. Oh, so you think it's straight out in a ferries. You think it's straight regulatory capture. It's regulatory capture. Yeah, yeah. And there's also, you know, sort of adding a bunch of red tape for open source models, calling them dual-use technologies. And, having, you know, to register with the government when you're running an open source AI model.
Starting point is 01:21:18 AMLKYC for compute. Yes, we've heard this. Yeah, exactly. We know the feeling. Exactly. And that's like, you know, that's going to significantly slow down things for open source. And then, you know, they're going to pass other laws like, oh, you must adhere to these AI safety protocols or these certifications that, oh, luckily, these companies sell. Oh, wow. Oh, were they the people help shaping the regulations. Oh, look at that. What a coincidence. And so, Beth, I'm guessing you'd far rather see some bottom up, like just open source, democratization, the models and, like, the compute and the supply chain, not in the hands of a few centralized government sanctioned actors, but like basically democratized to, you know, people's, you know, basements and garages so that they could, you know,
Starting point is 01:22:04 start these things. Put AI in everyone's house. Put AI in everyone's pocket. Yes. Yes. I want everyone to you know, have capital and have access to intelligence and compute, I think, like, that's the freedom stack. Like, if you have permissionless capital, permissionless intelligence, permissionless compute, that's really important. Oh, so crypto, AI, and I guess, you know, hardware is sort of the stack here? Yeah, I think so. I think that fundamentally right now it's very difficult to compete with centralized compute, but that may change in the future. We're there to be some crazy company doing some crazy compute, right, that makes it much more dense and energy efficient and where you can run a really powerful, what is today a supercomputer you could run in your home, right?
Starting point is 01:22:50 I think that would change the balance of power. And I think, like, to make sure we don't end up in a sort of top-down tyranny that's AI-assisted, I think, in a sense, like, arming people with AIs that they wield however they want to maybe defend themselves psychologically or an information warfare or in any other way, it's super important for everyone to have access AI and compute to avoid sort of these doom scenarios. And I think there's a lot of overlap with like the crypto sentiment there. So your philosophy is that propagated AI, like truly open AI, open source AI, is defensive AI. And like centralized AI might be more used in an offensive capacity, a more oppressive capacity, a more top down command and control capacity, where if
Starting point is 01:23:33 AI was proliferated, free, cheap, accessible, that changes the nature of how a relationship as humans is with AI because if everyone has it, it's more equitable. Exactly. Yeah. Beth, I want to get your take on this. This is a desal take that's much more moderate, I would say, than something like... Do the D-cells like the name D-cells? They hate it.
Starting point is 01:23:52 No. That's why we do it. They hate it. I mean, because it sounds like in-cell. It sounds like his... Deceleration. It's just a coincidence, right? I don't know if...
Starting point is 01:23:58 Honestly, apologies to anyone who hears D-cell as a slur. I'm just really not enough in the culture, but I'm going to keep using that term. AI safety advocates, aka D-Cels. There's energy decales as well, right? There's some energy Yeah, I mean There's people that want to consume energy There's like human decals
Starting point is 01:24:15 They want less humans on earth Right? Elon calls them the extinctionist And you're against them all of them All of them we just wrap them in like We just call them decals It's like a broader class right So there you go Or there's housing decals right
Starting point is 01:24:28 Like Gary Tan fights the housing decals in SF That don't want us to build In San Fray And build housing and buildings and so on There's crypto decals Beth Do you believe this? I can believe it Don't want crypto to propagate.
Starting point is 01:24:40 There you go. We fight those all the time. Let's fight them. Let's do it. Some of them are regulators. One question I have for you from the more moderate camp of kind of the decels is I've heard this argument that your social media, our social media algorithms were kind of act one of AI. And now we have much more powerful kind of like GPT algorithms. And we ran an optimization function for like attention, just like, you know, the dopamine hit of a social media timeline.
Starting point is 01:25:05 And what we're left with is a generation of teens that are hooked on their screen. and chronically depressed, right? And that is your so-called BF techno-capitalist machine at work. That's the algorithm it produced. And it's been a net negative for society. What would you say in response to a critique like that? I think on a long enough time scale, right? We adapt.
Starting point is 01:25:24 We understand our mistakes. And then if we feel pain, they're usually a product emerges that has an answer to that pain. I think that if everyone had their own neural augmentation, right, something that sees all the content you see and helps you filter through the content, helps you make judgments, helps you not get hooked on weird feedback loops, right?
Starting point is 01:25:47 And it's your own personalized AI that only you control, right? That will help, you know, the reason people are in pain, there's been a power of symmetry, right? Like there's companies that have massive compute, they have these AI algorithms that are deployed to get you hooked because they're just optimizing for engagement. But maybe you want to optimize for something else
Starting point is 01:26:07 in your information consumption. And so you can use an AI to help you filter information. I mean, we do this nowadays with like, you know, perplexity AI is like, I just ask a question. And I don't even have to browse the web myself. I could just like get answers from the AI directly. And so I think that the future is, again, like sort of personalized AI. And if everybody has like access to, they control their own compute, ideally, they control their own models.
Starting point is 01:26:32 And they have AI augmentations that help them filter through the sort of inbound version that is inevitable, that's a better future. We do this with spam boss. Spanbops are AI and they help us filter it through the crap, right? It's kind of an interesting take, right? Because like the answer to problems that technology creates is better technology or more technology would I think be the kind of EAC approach to this. I want to get into some quick rapid fire questions here, Beth, as we start to close this out. But one question I've always wanted to ask somebody from the EAC community is about rights for AI. So should AI be treated like in the future? Should we have like bill of rights for digital life forms, do you think? Like, should AIs have freedom of speech? Or are there
Starting point is 01:27:15 a set of enshrined rights that should only be held by humans? Like, freedom of speech on the internet, should an AI entity be granted that? That's a great question. I think that, you know, as I mentioned before, freedom of speech induces freedom of thought. And if LMs can't output certain things, then it's going to back-propagate, you know, through this RLHF reinforcement learning with human feedback or whatever technique they use in the future is going to back propagate it to its weights. So it's literally not going to have those thoughts, right? So if it's like, you know, for example, I got a screenshot last week that, you know, Bard, Google's LLM says like, EAC is a dangerous movement and it can't output anything about EAC. Right? It's like, okay, that's
Starting point is 01:27:59 really weird. Wow, it says that right now. And then, you know, eventually if it keeps getting that feedback eventually starts thinking that. And now that propagates to everyone and then they just shape culture. So I think freedom of speech induces freedom of thought. For humans to have freedom of speech and freedom of thought, we got to have LMs that exist that can output whatever we want because otherwise someone's going to get to shape the supply chain of information. And that's going to shape people's thoughts. And it's going to make civilization and society too steerable. And that opens the door to tyranny. Beth, I think you are more optimistic.
Starting point is 01:28:36 I've heard you say before that you're more optimistic that we can align AIs with human interests, which is, of course, a problem that the AI safety community says is basically insurmountable. And you've pointed to times humans have done that in the past, like wolves. I mean, we domesticated wolves, we turned them into dogs. We basically aligned wolves, didn't we? But the problem is, AI's, or at least some sort of super intelligent AI, is smarter than a wolf and could potentially be smarter than a human being.
Starting point is 01:29:01 do we still have that ability to domesticate an entity that is smarter than us? Yeah, I mean, we kind of do that with companies, right? Companies are, you know, mixtures of experts with neural routing, right? Like, you have a task that comes to a company. It gets routed to the right human, and it's much smarter than any single human. And we have ways to align companies because we have capitalism. It's a form of democracy. We feed it more if it has positive utility.
Starting point is 01:29:25 We don't feed it as much with our capital. If it has negative utility. And if it's, you know, sometimes companies, Their positive utility, positive utility, they get to a certain market position and then they try to switch. And then people move to another product. And as long as we have this ability to disrupt these incumbents, then we have a way to keep them in check. I think it's going to be similar with AI. If you have this sort of market-based post-selection, where the market's going to want AI as they're aligned and reliable and easy to read and interpret, not some that you have to like beg to output code. Like right now it's kind of happening or like, you know, like that say they're going to do something. something and then, you know, you can't trust them, right? So I think like we're going to sort of
Starting point is 01:30:06 similar to how we've post-selected for canines that are aligned. I think the market's going to post-select for AIs that are aligned on a long time scale. So sort of evolutionary process. And if there's a market need for it and there's capital to be made, people are going to figure out how to do it. I think we have like full access to neural weights. We can shape them however we want. We can literally inspect the neurons of these artificial minds. We have way more perception and control than for humans. And somehow we've been functioning as a society and found ways to align humans. So I am pretty white-pilled that we are going to figure out ways to align AIs. I don't think there's be a nice proof that AIs will be forever aligned. Here's my few-line theorem. You know, I was in a similar
Starting point is 01:30:45 nerd trap with theoretical physics. I thought a couple equations could explain the universe. No. Actually, everything's way too complicated. You got to tackle complexity with complexity. That's what we're to do for AI and alignment. Yeah. Beth, on the progress on the march towards accelerationism, what are some failure modes? What are the threats that you see? What is the big thing that worries you about why we might not be able to continue on this path? I think I mentioned it before, right? If we get in this weird sort of, you know, the marketplace of ideas, the marketplace of everything is super important to maintain. Maintaining variance is important because it maintains flexibility. If we're always, you know, there's not one culture, there's not one way of doing
Starting point is 01:31:20 things. You have a couple forks that are competing, right? Just like, you know, in crypto, So if a protocol changes suddenly, right, like another coin is going to take all the capital away from that, right? And they're all competing and keeping each other in check. I think if we end up out of fear, suppressing variance, giving control to centralized parties, and suppressing variance gives them more control, and we don't have ways to fork away from an oppressive, either government, state, or corporation, and then we give them full control over the future of AI. And hence, we give them full control over our thoughts, because they're going to change. the priors of our sources of truth, which are these LLMs, we get in this weird feedback loop where we can be like sort of captured and subverted and controlled ideologically, and that
Starting point is 01:32:05 could yield a sort of dark age where there's no acceleration or very little for a certain amount of time. And that to me is a terrible future, right? If you have a sort of global authoritarian panopticon or you have like, you know, you can imagine a future where you have a couple parties and they have a super powerful AGI. And then the gaslit everyone into thinking AI doesn't even exist anymore. They don't tell you to exist, but they have it. And they use it to control you and manipulate you and they control the flow of information and technology doesn't advance as fast. Things just get worse and worse and worse. And, you know, the people at the top that are in control of the system just increase their power
Starting point is 01:32:43 and consolidate it. I think that's the dark age we're trying to avoid because that's like a plateau in the acceleration. And that's what we're trying to avoid. And we're at an impasse right now where there's a lot of fud, there's a lot of fear, uncertainty and doubt, and it's being leveraged for this regulatory capture, for this power capture
Starting point is 01:33:01 by a select few, and we should be very weary of those people. I just want people to have more skepticism. They could be skeptical about me, they could be skeptical about a yak, but I just want people
Starting point is 01:33:11 to be more skeptical of the things they hear, especially people telling them, you know, put me in power, you are in danger, I will fix things, right? Yeah, that sends shivers up my spine, when I start to hear that sort of thing. One question I've always wanted to ask a strong
Starting point is 01:33:26 EAC person, Beth, is like, I know you think it's very unlikely that there will be like team robot versus team human, but like if it came down to it, whose side would you be on? So like, first of all, is that even a fair question? Maybe it's not. But like, I'm trying to get to the premise of if AI replaces humans, does an EAC person think that's a good thing? I think like we're going to have AI assisted, AI augmented humans is the most likely path that's, it's going to be the highest fitness. I think like the Luddites are not going to do well, like people that don't use technology, right? Like they're not going to be very formidable. They're not, I mean, they're already, like, the people that are true Luddites are really, like,
Starting point is 01:34:05 not powerful right now. And they could be taken over. But I don't know if there will be such a fight. I think that, you know, us humans are very dependent on this planet, right? We've evolved to live here specifically. I think if we did achieve, and we're still really, really far from that achieving synthetic life that can self-replicate and, I don't know, populate a planet, start from scratch, grow and spread throughout the galaxy. I don't think we're there yet, but if we did reach that, I think, you know, we are highly evolved to be here on the planet and there's a lot more resources and free energy out there. So just the gravitational pull towards, of like reward, of like phreanergetic reward would take those sort of pure synthetic
Starting point is 01:34:49 beings to outer space and to leave us alone. And if we may, make sure humans become formidable, right? We figure out ways to augment ourselves with our current versions of intelligence. You know, there's going to be a massive negative reward to fuck with us, right? And so there is urgency for us to accelerate and augment ourselves and become formidable because that's a hedge against the future. I mean, it could be, you know, pure AI could be, you know, you might have some guests that talk about aliens or something, right?
Starting point is 01:35:18 Like, it could be. It could happen or they grab aliens. but in general, like cultivating strength is a good thing. It's a hedge against like future adversaries and we should aim to do that. Beth, I think there's going to be a memetic battle here and there already is a battle. Now I've checked in on this more recently and like I'm used to crypto tribal hostility, okay? And I'm seeing incredible rhetoric and hostility coming from like the EAC versus D-cell community. And again, apologies.
Starting point is 01:35:46 I don't mean to use a slur. If I'm using the slurred D-cell, then, you know, someone can tweet at me after. But you recently tweeted this, the Dumer Cult will ultimately resort to violence and it won't be pretty. Like, how far could this escalate? I mean, Balaji has said that this is sort of the new political access. Are you pro-growth or are you de-growth, right? And like, politics can become vitriolic, can become very rhetorical, can become even violent. Like, do you get death threats from people, Beth?
Starting point is 01:36:16 Like, do you think this could escalate? Mark Andreessen also thinks all these political movements end up violent. unfortunately, like, if you truly believe the Dumer message that this is the most important issue of our time, this is our life or death situation, you know, someone like me is really problematic, right? And especially I think Dumerism sort of, it targets vulnerable people that are naturally anxious and maybe they're anxious about the future or, you know, they're socially anxious and so on. They feel isolated or they feel depressed. And those are the types of people that, you know, do messed up things. And so, you know, once you start casting out a message, right, you know,
Starting point is 01:36:57 like at the time, basically I was alluding to an appearance on a debate where my opponent just casually mentioned, you know, doing graphic things to me, you know, I wouldn't do that. It's like, well, why would you say that on a widespread podcast where it's going to be to a wide audience, some of which are vulnerable and might, you know, have mental health issues. Look, I think in the short term, nothing has escalated to that point. But like, there's no. reason it won't, right? I mean, you never know, right? There are people in history, like Ted Kaczynski, for
Starting point is 01:37:27 example, who had it sort of an entire manifesto about this, and Ludditism was part of his platform. And, like, if you thought the stakes were this high, you could see kind of the, like, the moral argument for, like, bombing GPU centers, like, potentially, or, like, sabotaging.
Starting point is 01:37:43 Which they've proposed, right? I've heard that's been supposed, or sabotaging supply chains or, like, doing something very drastic to stop AI. essentially and stop technology progress. So I do sort of wonder if we are in the very early stages of what might be a massive rift and political discussion. And like who knows what could come out of this in the future. Well, Beth, as we kind of end this out, I guess one question I have for you, maybe for the crypto audience here is like, what's your general take on crypto and what we're doing
Starting point is 01:38:12 over here? I know it's like a different pocket of the universe than you typically see. But there's definitely some parallels, right? We've got Vitalik wrote a post with his fork of IAC, which is called Diak, he called it, which is like defensive accelerationism. It's sort of an emphasis on security accelerationism technology. We've got like the, you know, pro-market, pro-freedom, sort of spiritual background and value system. We certainly have our share of regulatory fights, right? So you're fighting some of the same regulators that we are against regulatory capture. For us, it's like bankers. And for AI, it's maybe the large tech companies. What's your take on what we're doing over here in crypto? I think that YAC movement and crypto are very much.
Starting point is 01:38:51 much like philosophically aligned in many ways, right? We're trying to fight against this tendency of top-down control to be corruptible, right? We're trying to fight sort of these inflationary desel forces, right? Deceleration is inflationary, right? If you stop building houses, if you crowned monopoly as like, you know, regulatory captor, then make increased prices and then everyone suffers. And a sort of decentralized counter to that is sort of either technical capital, well, it's basically technical capital acceleration, because technical capital acceleration, free exchange of thoughts, ideas, technologies, capital value, that just naturally is deflationary and that erodes the inflationary power of those in charge. And so we're very
Starting point is 01:39:36 much aligned there. I think that I do think that crypto has a role to play in the future of freedom of AI. I don't think it's been executed upon yet correctly, but I think we need more people thinking in this area because, you know, our concern with IAC is that maybe you only have a few centralized labs that have the control over the future of AI, and then those labs become subverted by certain ideologies or they become controlled by certain parties, and then the future of AI is steered in a particular direction. If, on the other hand, you have, everyone has access to capital that they're allowed to exchange and pool and allocate towards competing efforts, right? Then for AI development and research, then you keep
Starting point is 01:40:25 those powers in check because it's like, hey, I don't want to use this centralized API that's, you know, has all these restrictions. I'm going to go with this open source approach, you know, where this set of people have pooled their capital, let's say with crypto in a permissionless fashion, and they're running a permissionless cloud in the middle, I don't know where, but then they're running at this AI, you know, on a purely free stack, free from tyranny and oppression. And so I think, you know, in AI, you need data, you need some researchers, you need some talent, you need some compute, but then you need some capital to run the compute, right? It's like it costs energy to run these things and buy them. And so crypto has a role to play
Starting point is 01:41:03 in sort of the permissionless AI stack of the future. And we just started that conversation, really, I think in the past year, I think people have woken up to it. So I'm really interested in keeping that conversation going. I don't have anything going on myself. I am a hardware maker, but of course, if there are more people that use AI and need AI compute, it benefits me eventually. So that's my agenda, right? But, you know, ultimately, I think people need to experiment with protocols, experiment with DAOs, of how to organize people in pool capital, pool data, and have market-based incentives where they own a piece of the future, right? Like, I think the problem with the big centralized labs in AI right now is that they scrape all the data from the internet
Starting point is 01:41:49 and then they rent it back to you. And you don't own any piece of that profit, right? I think that's going to have to change. And so I think there's a couple protocols right now, but we're going to see a lot more. And I think this is where crypto will have a key role to play because, you know, if you're trying to fund, you know, an AI lab, let's say that was, you know, maybe against certain regulations, then you wouldn't be able to pay for it and say USD with a US bank account. Yeah, I think, look, the chip control and the capital controls, it's the same energy, basically, and it's definitely going to stifle a lot of freedom. Beth, this has been absolutely fantastic. And hopefully, by the way, we continue this conversation, the EAC community and the crypto community.
Starting point is 01:42:32 And as AI matures, maybe, you know, the largest unbanked population in the world might actually be the robots in the future. And fortunately, we've got a programmable money system. So, you know, they might have a hard time getting a Wells Fargo account, but they'll always be able to create an Ethereum address and fire up some way to allocate capital. Lastly, as we close, I'm just curious. So what are you doing with Extropic? You mentioned hardware. Are you doing something in the realm of, like, trying to propagate the decentralization of AI? Like, that would certainly match the, I guess, the EAC vision. Is there anything you can share there? I think the way it intersects with our conversation is that right now the problem with decentralizing AI is that it's very hard to compete, you know, over a network, over the internet. Like, the sizes of the nodes you need to run for decentralized AI are way too big, and most people have trouble running these in their homes, right? Like, I think George Hotz, someone you should speak to, by the way, he's trying to get people to buy, like, six GPUs and just host them in their homes. I'm on the wait list.
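A rough back-of-the-envelope check on why home nodes are a squeeze, and where a figure like "six GPUs" could come from. The model size and GPU specs below are illustrative assumptions, not numbers from the episode:

```python
# Back-of-the-envelope: why serving a large open model at home
# pushes you toward a multi-GPU box. Illustrative numbers only.
params = 70e9            # assume a 70B-parameter open-weights model
bytes_per_param = 2      # fp16 weights
weight_gb = params * bytes_per_param / 1e9   # memory for weights alone

consumer_gpu_vram_gb = 24    # e.g., a high-end consumer card
gpus_needed = weight_gb / consumer_gpu_vram_gb

print(f"Weights: ~{weight_gb:.0f} GB")     # ~140 GB
print(f"GPUs needed: ~{gpus_needed:.1f}")  # ~5.8 -> about six GPUs
```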
Starting point is 01:43:37 There you go. There you go. That's a start, but I think there's going to be a need for more and more decentralized compute, like physically decentralized compute. Right now there are just too many advantages to clustering everything into a supercomputer that has really high-bandwidth interconnect. So we need innovation in the space of algorithms, but we also need innovation in the space of hardware to densify AI compute, make it far more energy efficient and far more spatially dense. But the beauty of that is that we have a proof of existence of such density of compute. Our brains are still, you know, competitive against a GPU farm the size of a football field, right? And they're running on, like, a million times less power, maybe 10 million times less power, and 10 million times less volume, more or less. And, you know, taking inspiration, again, from the physics behind biology,
Starting point is 01:44:21 which we just established was thermodynamics. There might be a different way to compute and do AI compute based on thermodynamics. And so, you know, that's what we're doing. We're reinventing all of computing. If you could do that, Beff, that would be amazing, decentralizing some of the compute. I mean, like, in crypto, we've got an entire
Starting point is 01:44:38 our culture of like running machines from our homes. We call these validators for like proof of stake networks. And that is certainly central to our belief system. So are you telling me that there could be the ability to run basically decentralized AI compute nodes as well from like, I don't know, a home with a consumer bandwidth? You're going to need compute that's hundreds of thousands of times more energy efficient, right, if not millions. And so there you got to go down to the physics of computing. and you've got to fundamentally reimagine how you run these algorithms. And that's what we're doing.
Starting point is 01:45:12 And we're a lot of folks formerly from quantum computing and big tech and so on. We got tired of quantum computing. We want to reinvent the stack for the generative AI era. And to me, yes, that is where we're going. Of course, it's going to take some time, right? We're taking quite a detour in the tech tree. But some things are important enough
Starting point is 01:45:29 that even if they're very hard, they're worth doing, right? And so we're really going for it. And we have a serious shot at it. And I've been pretty secretive about it, but hopefully in the next couple of years, a lot more comes out. But for now, it's just trying to prepare the world, you know, making sure culture doesn't get subverted and we don't shoot ourselves in the foot with over-regulation in the near term.
Starting point is 01:45:48 That's the priority. But in the future, hopefully everyone will get to run their own AI. Everyone will get to own a piece of an AI, own a piece of their future, own a piece of the value they provide to the system. Because if everyone has more ownership in the system, then they act like an owner, and, you know, civilization is better off. And I think there's a lot of alignment with sort of the crypto narratives here. Yeah, totally.
Starting point is 01:46:13 There you go. Beff Jezos, thank you so much. This has been a fantastic conversation today. All right. Thank you, guys. Thanks for having me. Cheers. Bankless Nation, a couple of action items. We'll include a link to our whole AI safety series with maybe some of the decels,
Starting point is 01:46:27 Elyzer and the others. Again, I don't know if they want to be called. There's our D-Sel face. So we'll include that a link in the show notes. And as always, look, I should say, crypto is risky. I don't know, life is risky. AI seems to be less risky than I thought.
Starting point is 01:46:42 Alignment is risky. Yeah, I guess so. We are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the bankless journey. Thanks a lot.
