Bankless - 167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson

Episode Date: April 17, 2023

In this highly anticipated sequel to our first AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.

------

✨ DEBRIEF | Unpacking the episode: https://www.bankless.com/debrief-robin-hanson

✨ COLLECTIBLES | Collect this episode: https://collectibles.bankless.com/mint

✨ NEW BANKLESS PRODUCT | Token Hub: https://bankless.cc/TokenHubRSS

------

In this episode, we explore:
- Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
- The possibility of a civil war between multiple AIs, and why it's more likely than being dominated by a single superintelligent AI.
- Robin's concerns about the regulation of AI, and why he believes it's a greater threat than AI itself.
- A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
- Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

------

BANKLESS SPONSOR TOOLS:

⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://bankless.cc/kraken
🦄 UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET https://bankless.cc/phantom-waitlist
🦊 METAMASK LEARN | HELPFUL WEB3 RESOURCE https://bankless.cc/MetaMask

------

Topics Covered:
0:00 Intro
8:42 How Robin is Weird
10:00 Are We All Going to Die?
13:50 Eliezer's Assumptions
25:00 Intelligence, Humans, & Evolution
27:31 Eliezer Counterpoint
32:00 Acceleration of Change
33:18 Comparing & Contrasting Eliezer's Argument
35:45 A New Life Form
44:24 AI Improving Itself
47:04 Self-Interested Acting Agent
49:56 Human Displacement?
55:56 Many AIs
1:00:18 Humans vs. Robots
1:04:14 Pause or Continue AI Innovation?
1:10:52 Quiet Civilization
1:14:28 Grabby Aliens
1:19:55 Are Humans Grabby?
1:27:29 Grabby Aliens Explained
1:36:16 Cancer
1:40:00 Robin's Thoughts on Crypto
1:42:20 Closing & Disclaimers

------

Resources:
Robin Hanson: https://twitter.com/robinhanson
Eliezer Yudkowsky on Bankless: https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky
What is the AI FOOM debate? https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate
Age of Em book - Robin Hanson: https://ageofem.com/
Grabby Aliens: https://grabbyaliens.com/
Kurzgesagt video: https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s

------

Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. Disclosure: from time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 One of the most striking features of our world are the mechanisms we use to keep the peace and to coordinate among all these divergent, conflicting things. And one of the moves that AI people often make to spin scenarios is just to assume that AIs have none of that problem. AIs do not need to coordinate. They do not have conflicts between them. They do not have internal conflicts. They do not have any issues in how to organize and how to keep the peace between them.
Starting point is 00:00:25 None of that's a problem for AIs, by assumption. They're just this other thing that has no such problems. And then, of course, that leads to scenarios like that they kill us all. Welcome to Bankless, where we explore the frontier of internet money and internet finance, and also AI. This is how to get started, how to get better, how to front-run the opportunity. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless. Guys, we promised another AI episode after our episode with Eliezer.
Starting point is 00:00:54 Well, here it is. Here's the sequel. The last episode of Eliezer-Eyukowski, we titled, Correctly, We're All Going to Die. that's basically what he said. I left that episode with a lot of misgivings. Existential dread. Yeah, existential dread. It was not good news in that episode, and I was having a difficulty processing it. But David and I talked, and we knew we had to have some follow-up episodes to tell the full story, bankless style, and go on the journey of AI, its intersection with our lives, with the world, and with crypto. So here it is. This is the answer to that. This is Robin Hansen
Starting point is 00:01:28 on the podcast today. Let me go over a few takeaways. is number one, we talk about why Robin thinks Eliezer is wrong. We're not all going to die from artificial intelligence, but we might become their pets. Number two, why we're more likely to have a civil war with AI rather than being eaten by one single artificial intelligence. Number three, why Robin is more worried about regulation of AI than actual AI. Very interesting. Number four, why alien civilization spread like cancer. This is also related to AI and super interesting. Number five, finally we get to you, what in the world does Robin Hansen think about crypto? David, why was this episode significant for you? Robin Hansen is such a great thinker. He's absolutely a polymath and really,
Starting point is 00:02:12 like Eliezer, progresses in his thoughts in the very linear logical fashion. So he's easy to follow along with. And so the first half of this episode, maybe the 45 minutes, 50 minutes, is all about just the AI alignment debate and Eliezer versus Hansen. which is a debate that has actually been going on for many, many years now. Decades. Over a decade, yeah, you're right. This is not the first time that Eliezer has heard about Robin Hansen or Robin Hansen has debated Eliezer.
Starting point is 00:02:40 This is an ongoing saga. And so this is just course material for Robin Hansen. And so we really focus on this AI alignment problem and how these thinkers think that AI will develop and progress here on planet Earth and how they will in friendly or unfriendly ways ultimately collide with humanity. So that's the first half of this episode. The second half of this episode, I think, is when this gets really, really interesting. If you just listen to the first half of this episode, you would just think, like, oh, this is the other half of the conversation to the AI debate, which it is. The second half connects this
Starting point is 00:03:13 to so many more rabbit holes and so many more topics of conversation that are, actually, I would say, deeply ingrained to bankless content themes, the themes of competition versus coercion, the themes of exploring frontiers, the thing of Moloch and the Prisoner's Dilemma, and how things coordinate across species. And so we connect AI alignment to Robin Hans's famous idea that he calls Gravy Aliens. If you haven't heard about Gravy aliens, you're in for a treat. So this goes from what is a simple counterargument to a debate that we've had, to a multifaceted exploration that is just so cursory of many, many deep subjects that I hope to explore first. their own bankless. Yeah, and honestly, David, I'm dying to record the debrief with you because I want to get
Starting point is 00:04:00 your take on this episode that was and contrast it. You can see how giddy I was in the second half of the I know, and I want to contrast it with our ELEASER episode and how these two thinkers think and who do you think has the stronger case. The debrief episode is the episode, David and I record after the episode where we just talk about what just happened, give our raw unfiltered thoughts. So we're about to record that now. If you are a bankless citizen, then you have access to that right now. If you'd like to become a citizen, click the link in the show notes, and you'll get access to our premium RSS feed where you'll have access to that. Also, this episode will become a collectible next Monday, I believe. I'm collecting this episode so hard. Me too. I've got that easier episode in my collections. I'm also
Starting point is 00:04:43 collecting this. We release episode collections for our key episode of the week every Monday. The mint time is 3 p.m. Eastern, and whatever time zone you're in, you have to convert that. that's it. We're going to get right to the episode with Robin Hansen. But before we do, we want to thank the sponsors that made this possible, including our favorite crypto exchange, Krakken, our recommended exchange for 2023, go set up an account. Cracken has been a leader in the crypto industry for the last 12 years. Dedicated to accelerating the global adoption of crypto, Krakken puts an emphasis on security, transparency, and client support, which is why over 9 million clients have come to love Krakken's products. Whether you're a big.
Starting point is 00:05:23 beginner or a pro, the Cracken Ux is simple, intuitive, and frictionless, making the Cracken app a great place for all to get involved and learn about crypto. For those with experience, the redesigned Cracken Pro app and web experience is completely customizable to your trading needs, integrating key trading features into one seamless interface. Cracken has a 24-7-365 client support team that is globally recognized. Cracken support is available wherever, whenever you need them by phone, chat, or email. And for all of you NFTers out there, the brand new Cracken NFT beta platform gives you the best NFT trading experience possible. Rarity rankings, no gas fees, and the ability to buy an NFT straight with cash. Does your crypto exchange prioritize its customers the way that Cracken does?
Starting point is 00:06:05 And if not, sign up with Cracken at crackin.com slash bankless. Hey, Bankless Nation. If you're listening to this, it's because you're on the free Bankless RSS fee. Did you know that there's an ad-free version of Bankless that comes with the Bankless premium subscription? No ads. just straight to the content. But that's just one of many things that a premium subscription gets you. There's also the token report,
Starting point is 00:06:26 a monthly bullish, bearish, neutral report on the hottest tokens of the month. And the regular updates from the token report go into the token Bible. Your first stop shop for every token worth investigating in crypto. Bankless premium also gets you a 30% discount to the permissionless conference, which means it basically just pays for itself.
Starting point is 00:06:43 There's also the Airdrop Guide to make sure you don't miss a drop in 2023. But really, the best part about Bankless Premium is hanging out with me, Ryan, and the rest of the bankless team in the inner circle Discord only for premium members. Want the Alpha? Check out Ben the analyst's DGENPIT, where you can ask him questions about the token report. Got a question? I've got my own Q&A room for any questions that you might have. At Bankless, we have huge things planned for 2023, including a new website with login with your Ethereum address capabilities, and we're super excited to ship what we are calling Bankless 2.0 Soon TM. So if you want extra help exploring the
Starting point is 00:07:18 Frontier, subscribe to bankless premium. It's under 50 cents a day and provides a wealth of knowledge and support on your journey west. I'll see you in the Discord. The Phantom wallet is coming to Ethereum. The number one wallet on Solana is bringing its millions of users and beloved UX to Ethereum and Polygon. If you haven't used Phantom before, you've been missing out. Phantom was one of the first wallets to pioneer Solana staking inside the wallet and will be offering similar staking features for Ethereum and Polygon. But that's just staking. Phantom is also the best home for your NFTs. Phantom has a complete set of features to optimize your NFT experience. Pin your favorites, hide your uglies, burn the spam, and also manage your NFT sale listings from inside the wallet. Phantom is, of course, a multi-chain
Starting point is 00:08:00 wallet, but it makes chain management easy, displaying your transactions in a human-readable format, with automatic warnings for malicious transactions or fishing websites. Phantom has already saved over 20,000 users from getting scammed or hacked. So, get on the Phantom wait list and be one of the first to access the multi-chain beta. There's a link in the show notes, or you can go to phantom.app slash waitlist to get access in late February. Bankless Nation, we are excited to introduce you to Robin Hansen. He is a professor of economics at George Mason University and a research associate at the Future of Humanity Institute at Oxford.
Starting point is 00:08:32 This takes an interdisciplinary research center approach that investigates big picture questions about humanity and its prospects. And I think explaining exactly who Robin is in what he's doing is not a trivial task because he is a polymath, certainly spans many things. he's provided many different mental models across various disciplines, but I would not call him conventional by any means. And I'm sure Bankless listener, you will see what we mean here today. Robin, welcome to bankless. Glad to be here. I think I can try to explain the kind of weird that I am.
Starting point is 00:09:03 Oh, oh, please. Yeah, go ahead. If that's a puzzle, what kind of weird isn't? Because I can't explain the kind of weird I am. Tell us how you're weird. So I think I'm conventional on methods and weird on topics. So I tend to look for neglected important topics where I can find some sort of angle. But I'm usually looking for a pretty conventional angle that is some sort of usual tools that just haven't been applied to an interesting important topic. So I'm not a radical about theories or methods, but I am about topics. So use things like science and math and statistics and all of those normal non-radical things. Right. I've spent a lifetime collecting all these usual tools, all these systems, really, and I'm more of a polymath in that I'm trying to combine them on neglected important topics.
Starting point is 00:09:54 So if you go to a talk where everybody's arguing and you pick aside, I mean, the chances you're right are kind of small in the sense that there's all these other positions. And, you know, maybe you'll be right, but probably you'll be wrong because you're picking one of these many positions, right? If you go pick a topic where nobody's talking about it, and you just say anything sensible, you can probably be right. And we've, I think, recently ran into somebody who follows that path of sorts, somebody who thinks very logically and rationally, but is applying it to more unique frontiers of the place that humanity is. And that is our recent episode with Eliezer, who followed a decently logical path that was relatively easy to follow that unfortunately led us into a dead end for humanity. And so it was something that me and Ryan, as co-hosts of this podcast, but then also many of
Starting point is 00:10:46 the listeners felt trouble with because Eliezer was able to guide us in a very simple and logical path onto the brink. And so we're hoping to continue that conversation with you, Robin, as well as being able to explore some new frontiers. Yeah, Robin, I'm just wondering if we could just wade right into the deep end of the pool here. Because what happened is basically Euler came on our podcast. We thought we were going to talk about AI and safety and alignment. and all of these things. We know he talks about that a lot, and we thought we were going to tie that to crypto.
Starting point is 00:11:14 What it ended up happening midway through that podcast, Robin, is I got an existential crisis. So did David. The rest of the agenda seemed meaningless and unimportant because here is Elyzer telling us, basically, that the AI was imminent. He didn't know whether it would happen in two years, in five years, in 10 years, and 20 years.
Starting point is 00:11:32 But he knew the final destination, which is that AIs would kill all of humanity and that we didn't have a chance. And basically, and I'm not being hyperbolic here, Robin. I know you haven't had a chance to go through that episode, but he basically says, you know, spend time with your loved ones because you do not know how much time you actually have. And so this left like me and I think many bankless listeners
Starting point is 00:11:55 on kind of a cliffhanger of like, oh my God, are we all going to die? And David tried to talk to me after that episode. He's like, Ryan, it's okay. Like, you know, but we knew we also had to like find someone who could give us another interpretation of what is. is going on with AI. And Robin, we have chat GPT4. It looks incredibly sophisticated. It looks like it's advancing at breakneck speed. And we're worried about this scenario. So, when Eliezer-Ey says, we're all going to die, what do you make of that? Do you think we're all going to die?
Starting point is 00:12:27 So, AI inspires a lot of creativity regarding fear. And I think, honestly, most people, as they live their lives, they aren't really thinking about the long-term trajectory of civilization and where it might go. And if you just make them think about that, I think just many people are able to see scenarios they think are pretty scary just based on projection of historical trends toward the future and things changing a lot. So I want to acknowledge there are some scary scenarios if you just think about things that way. And I want to be clear what those are. but I want to distinguish that from the particular extra fear you might have about AI killing us all soon. And I want to describe the particular scenario Ellie Eiser has in mind, as I understand it,
Starting point is 00:13:22 as a very particular scenario where you have to pile on a whole bunch of assumptions together to get to a particular bad end. And I want to say those assumptions seem somewhat unlikely in piling them all together, makes the whole thing seem quite unlikely. But nevertheless, if you just think about the long-term trajectory of civilization, it may well go places that would scare you if you thought about that. And so that'll be the challenge for us to separate those two. So which one would you like to go with first? I would like to start with understanding what you think his assumptions are. All right. Let's do that. And maybe starting there. Okay. So the scenario is you have an AI system, like some coherent.
Starting point is 00:14:05 system. It's got an owner and builder, people who sponsored it, who have some application for it, who are watching it and using it and testing it and, you know, the way we would do for any IAS system, right? There's AN system. And then somewhere along the line, this system decides to try to improve itself. Now, this isn't something most AI systems ever do, and people have tried that, and it usually doesn't work very well. So usually when we improve AAS systems, we do it in another. way. So we train them more and more data, give them more hardware, use a new algorithm. But the hypothesis here is we're going to train. This system is going to be assigned the task, figure out how to improve yourself. And furthermore, it's going to find a wonderful way to do that. And the fact that it found
Starting point is 00:14:53 this wonderful way makes it now special compared to all the other AI systems. This is a world with lots of AI systems. This is just one. It's not the most powerful or the most impressive or interesting, except for this one fact that it has found a way to improve itself. And this way that it can improve itself is really quite remarkable. First of all, it's a big lump. So most innovation, most improvements in all technology is lots of little things. You gradually learn lots of little things and you get better. Once in a while we have bigger lumps.
Starting point is 00:15:23 And the scenario here, there's a really huge lump. And this huge lump means the system can all of a sudden be much better at improving itself. then not only it could before, but in essence, then all the other systems in the world put together. It's really quite an achievement. This lump it finds out a way to improve itself. And in addition, this way to improve itself has two other unusual features about innovations. First, it's a remarkably broad innovation applies across a very wide range of tasks. Most innovations we have on how to improve things are relatively narrow. They'll let it improve in a narrow range of things, but not over everything.
Starting point is 00:16:01 This innovation lets you improve. a really wide range of things. And in addition, most innovations you have let you improve things and then the improvements run out until you'll find some other way to improve things again. But this innovation doesn't run out. It allows this thing to keep improving over many orders of magnitude, you know, maybe 10 orders of magnitude or something. Like, it's just really a huge innovation that just keeps less, just keeps playing out. It just keeps improving. It doesn't run into errors while it improves itself, even then as it discovers, errors, it fixes those.
Starting point is 00:16:34 Or it doesn't, obstacles or things that slow it down and then get stuck for a long time. It just keeps working. Okay. And whatever it does to pursue these innovations, these self-modifications will change it. They probably will change its software configuration, maybe its relative use of resources, the kinds of things it asked for, how it spends its time and money that it has doing things, the kind of communication it has, you know, it's changing itself. and its owners, builders, the ones who are, you know, sponsored it and made it and have uses for it,
Starting point is 00:17:11 they don't notice this at all. It is vastly improving itself and its owners is just oblivious. Now, initially, it's just some random obliviousness. Now, at some point, the system will get so capable, maybe it can figure out how to hide its new status and its new trajectory. And then it might be more plausible that it succeeds at that if it's now, very capable at hiding things. But before that, it was just doing stuff, improving itself, and its owner-managers were just oblivious. Either they saw some changes, they didn't care, they misinterpreted the changes, they had some optimistic interpretation of where that could go,
Starting point is 00:17:50 but basically, they're oblivious. So if they knew it was actually improving enormously, they could be worried, they could like step it, maybe pause it, try variations, try to study it, so they make sure they understand it, but they're not doing that. They are, just oblivious. And then the system reaches the point where it can either hide what it's doing or just rest control of itself from these owners, builders. And in addition, like if it won't rest to control itself, presumably they would notice that. But then, and they might try to retaliate against it or recruit other powers to just, you know, lock it down. But by assumption, it's at this point able to resist that. It is powerful enough to either hide what it's doing or just,
Starting point is 00:18:32 rest control and resist attempts to control it, at which point then it continues to improve, becoming so powerful that it's more powerful than all the other everything in the world, including all the other AIs. And then, soon afterwards, its goals have changed. So during this whole process,
Starting point is 00:18:54 two things have to have happened here. One is it had to become an agent. That is, most AI systems aren't agents. They don't think of themselves, says, I'm this person in the world who has this history and these goals, and this is my plan for my future. You know, they are tools that do particular things. Somewhere along the line, this one became an agent. So this one says, this is what I want and this is who I am and this is how I'm going to do it.
Starting point is 00:19:18 And in order to be an agent, it needs to have some goals. And during this process by which it improved, at some point it became an agent, and then at some point, its goals changed a lot, not just a little. In effect. Now, so any system we can think in terms of its goals, if it takes actions among a set of options, we can interpret those actions as achieving some goals versus others. And for any system, we can assign it some goals, although the range of those goals might be narrow if we only see a range of narrow actions. So we might not be able to interpret goals more generally.
Starting point is 00:19:54 So if we have an AI system that, you know, is a taxi driver, we'll be able to interpret the various routes that takes people on and how carefully it drives in terms of some of, overall goals, respect to how fast it gets people there and how safely it does, but maybe we can't interpret those goals much more widely as what would it do if it were a mountain climber or something, because it's not climbing mountains, right? But still, with respect to a certain range of activities, it had some goals. And then by assumption, basically, in this period process of growing, its goals just become, in effect, radically different. And then by assumption, radically different goals through this random process are just arbitrarily different.
Starting point is 00:20:36 And then the final claim is arbitrarily different goals. When they look at you as a human, you're mostly good for your atoms. You're not actually useful for much anything else at some point. And then you are recruited for your atoms, i.e. destroy it. And that's the end of the scenario here where we all die. So to recall the set of assumptions we've piled on together, we have an AI system that starts out with some sort of owner and builder. It is assigned the task to improve itself.
Starting point is 00:21:08 It finds this fantastic ability to improve itself, very lumpy, very broad, works over many orders of magnitude. It applies this ability. Its owners do not notice this for many orders of magnitude of improvement, presumably, at some point. Or it happens really, really quickly, potentially. Well, that would be presumably the most. most likely way you can imagine the owner's not noticing, perhaps.
Starting point is 00:21:33 But the fundamental thing is the owners don't notice. If it was slow and the owners didn't notice, the scenario still plays out. So the key reason we might postulate fast is just to create the plausibility that the owners don't notice. Because otherwise, why wouldn't they notice? But that's also part of like the size of this innovation, right? We're already improving AI systems at some rate. And so if this new method of,
Starting point is 00:21:58 improvement was only going to improve AI systems at the rate they're already improving, then this AI system won't actually stand out compared to the others. In order for this to stand out, it'll have to have a much faster rate of improvement to be distinguished from the others. And this will then have to be substantially faster, right? Because that would set the time scale there for what it would be to be in the scenario. So it both needs to be faster than the rate of growth of other AI systems at the time substantially and fast enough that the owner builders don't notice this radical change in its agenda, priorities, activities. They're just not noticing that. And then they don't notice it to the point where this thing acquires the ability to
Starting point is 00:22:42 become an agent, have goals, hide itself, or, you know, free itself, and defend itself. And then the last assumption and its goals, radical. change, even that it was friendly and cooperative with humans initially, which presumably it was. Later on, it's nothing like that. It's just a random set of goals, at which point, then, by assumption, now it kills us all. So the question is, how plausible are all those assumptions? And so we could walk through analogies and prior technologies and histories in the last few centuries. And I think Fulma advocates like Eliyzer will say, yeah, this is unusual, to recent history. But they're going to say, recent history is irrelevant for this. This is
Starting point is 00:23:33 nothing like recent history. The only things that are really relevant comparisons here is, you know, the rise of the human brain and maybe the rise of life itself and everything else is irrelevant. So then they will, you know, reject other recent few centuries technology trajectories as not relevant analogies. What did you just call a liaison, Robin, a what advocate, a fume advocate? Fume, Fume. What is Fume? Fume is just another name for this explosion that we've been talking about it. The most common word to describe it. Gotcha. Yeah, the super 10 intelligence explosion. Kurzweil's stuff, like that kind of thing. Singularity, that sort of thing. Well, so singularity is a different concept than FU. Okay. Different concept. In some sense, a fume is a kind of singularity,
Starting point is 00:24:13 but not all singularities are fooms. Robin, thank you for guiding us because we're still learning in this, right? Like, Bankless is, we had never done an AI podcast previously. We covered a lot with crypto and coordination economics, and now we'd do this AI podcast. And I feel like we just got punched in the face. Okay. So we're articulating. and slower. Yeah, we're walking slower. You re-articulating Eliezer's assumptions is, I think, very helpful to me. And so we want to get to like why you think those assumptions are unlikely to be true. But I do think you are right. In the episode with him, he basically sort of painted this fantastical story of these assumptions. And he basically said, yeah, those assumptions, the things that
Starting point is 00:24:50 you're describing, I think, and I don't want to put words in his mouth. So maybe this is what I was hearing him say. Is you're just describing intelligence, Robin. That's what intelligence does. And I'll give you exhibit A. It's called human beings. And I'll give you the algorithm. It's called evolution, gradient descent over millions of years and hundreds of millions of years. And we end up with a super, like an intelligence, but relatives of maybe the animal kingdom,
Starting point is 00:25:14 a super intelligence that exerts its dominance. And its will has changed from just procreating and spreading its genes and memetic material to something that evolution would have never, the evolutionary algorithm would have never envisioned it actually did. And so I think maybe what I was hearing the criticism would be like, we already have an example of this, Robin. It's called intelligence and it's called humans. What do you think about this? So as I said, if we just think about the long run future we're in, we can generate some scenarios of concern independent of this particular set of assumptions, Elias, and set up.
Starting point is 00:25:52 So, you know, the scenario where humans arise and then humans change the world, I guess, guess you could imagine as the scary to evolution, if evolution could be scared, but evolution doesn't really think that way. But certainly you can see that in the long run, you should expect to see a lot of change. And a lot of ways in which your descendants may be quite different from you and have agendas that are different from you and yours. I think that's just a completely reasonable expectation about the long run. So we could talk about about that in general as your fear, I just want to distinguish that from this particular set of assumptions that were piled on as the fume star. Because the fume star was like something that might
Starting point is 00:26:39 happen in the next few years, say, and it would be a very specific event. A particular computer system suffers this particular event, and then a particular thing happens. That's a much more specific thing to be worried about than the general trajectory of our descendants into the long-term future. So again, like which one would you like to talk about? I'm trying to summarize really just the perspective differences here. And I know you've had this debate with Eliezer before. So this is like review for you. I think Eliezer's conclusion is that while the future is unwritten and the paths of our future can be many and multivariate and we can have different possible outcomes, Eliezer is like, well, all roads lead to the superintelligence taking over. And I think just to summarize
Starting point is 00:27:27 your position is like, that is a possible path and it is something to consider, but it is still less likely than the many, many, many other possible paths that are also perhaps in aggregate much more likely. Is that a fair summary of your position? So let's talk about this other more general framing and argument. So we could just say, in history, humanity has changed a lot. Not just a little, a lot. We've not just changed some particular technologies. We've changed our culture in large ways. We've changed the sort of basic values and habits that humans have.
Starting point is 00:28:07 And our ancestors from 10,000 or 100,000 or a million years ago, if they looked at us and saw what we're doing, it's not at all clear they would embrace us as, you know, descendants they are proud of and are happy to have replace them. That's not at all obvious. Even just in the last thousand years, or even shorter, we have changed in ways in which we have repudiated many of our ancestors' most deeply held values. We've rejected their religions. We've rejected their patriotism. We've rejected their sort of family allegiance and family clan sort of allegiances. We have just rejected a lot of what
Starting point is 00:28:48 our ancestors held most dear. And that's happened over and over again through our long-term history. That is, each generation we have tried to train our children to share our culture. That's just a common thing humans do. But our children have drifted away from our cultures and continue to just be different. And, you know, over a million years, we humans ourselves fundamentally changed. And one of the things that happened is we became very culturally plastic. And so culture now is really able to change us a lot because we have become so able to be molded by our culture. And even if our genes haven't changed that much, well, they've changed substantially, say, in the last 10,000 years, our culture has enormously changed us. And if you project the same trend into the future, you should expect that this will happen again and
Starting point is 00:29:42 again, our descendants will change with respect to cultural evolution and their technology and the structure of their society and their priorities. And then, of course, at some point in the not-too-distant future, we will be able to re-engineer what we are or even what our descendants are, and that will allow even more change. That is, once we can make artificial minds, for example, there's a vast space of artificial minds we can choose from, and we will explore a lot of that space, and that allows even bigger possibilities for how our descendants could be different from us. So this story says our descendants will become, yes, superintelligent, and yes, they will be different from us in a great many ways, which presumably also include values.
Starting point is 00:30:34 And if what you meant by alignment was, how can I guarantee that my distant descendants do exactly what I say and believe exactly what I believe and will never disappoint me in what they do because they are fully under my control, I got to go, gee, that looks kind of hard compared to what's happened in history. So now, if that's the fear you have, I got to endorse that. That's not based on any particular scenario of a particular computer system soon and what trajectory of events it will go through. That's just projecting past trends into the future in a very straightforward way. So then I have to ask, like, is that what you're worried about? No, that is not what I'm worried about. That is my base case that, like, we're going to get more
Starting point is 00:31:19 intelligent. Technology is going to change us. Culturally, it's going to change the trajectory of how we interact. Okay, but I got to add one zinger to this. Okay. What if change speeds up a lot? So that this thing you thought was going to happen in a million years happens in 100. Well, I mean, for me personally, I'm more of a techno-optimist. So I would be more on the side of, like, within reason, of course, more embracing of these types of change. I know others aren't quite as embracing. And also, this was not the scenario at all that Eliezer presented. He presented not a scenario of rapid change that you might not like and that could come within your lifetime, but the actual obliteration of humanity, like literally rearranging our atoms for some other artificial intelligence purpose. And while you agree that there will be lots of change as there has been in the past, and perhaps that change will even accelerate as we delve deeper into the kind of technology that is in our future, you do not think that a superintelligent artificial intelligence will simply obliterate humanity and kind of wipe us from creation
Starting point is 00:32:26 entirely. It won't be quite as drastic as that. Let's be careful about noticing exactly what's the difference between the scenario I presented and the scenario he presented. Because they're not as different as you might think. In both scenarios, there are descendants. In both scenarios, the descendants have values that are different from ours. And in both scenarios, there's certainly the possibility of some sort of violence or, you know, disrespect of property rights such that the descendants take things instead of asking for them or trading for them. Because that's always been possible in history, and it can remain possible in the future. Today, most change is peaceful, lawful.
Starting point is 00:33:13 And there are, of course, still big things that happen, but mostly it's via trade and competition. And if the AIs displaced us, it's because they beat us fair and square at the usual contests that we've set up, by which we compete with each other. So these scenarios aren't that different, I'm trying to point out. And then the key difference here is, one is the time scale. How fast does it happen? Another is how spread out is it. Is there a single agent who takes over everything?
Starting point is 00:33:47 Or are there millions of descendants, billions of them, who slowly went out and displaced us? How far do their values differ from ours? Just how much do they become indifferent to us? And then do they respect property rights? Is this a peaceful, lawful transition, or is there a revolution or war? Those are the main distinctions between these two stories we've described. Eliezer's is very fast, there's a single agent, its values change maximally, and it doesn't respect previous property rights.
Starting point is 00:34:21 Whereas the scenario I'm describing is ambiguously fast, hey, it could happen much faster than you think, with millions or billions of descendants, of a perhaps gradual and intermediate level of value difference, but substantial, but primarily I would think in terms of peaceful, lawful change. I think there's a missing component to this conversation that we've been having recently. And I understand that there are things about the evolution of this AI and things about the evolution of humanity that are all basically synonymous, right? There's iteration, there's development, there's progress. And Robin, you gave the account for that when we raise our kids, we try and imbue them with our values and our cultures. And there are
Starting point is 00:35:09 transcription errors in that, in that only so much of our values and cultures get passed along to our kids. And perhaps as technology advances, even less passes along from generation to generation. And our culture changes over time, and this is what we call progress. And when we go back to the AI innovating on itself, there you also presented a scenario of improvement errors as well. Like, we don't know how perfectly it can improve. And so as it develops, it changes and adapts. And these are all similar structures. And so this is what we know. And maybe the timescales throw us off a little bit, but these are similar patterns. There's one component missing that I'd like to highlight and dive into. When we have our generations of
Starting point is 00:35:51 kids and humanity that progresses, even if it changes, it still started from us in the first place, right? There's a logical continuation of parent to kid, parent to kid, parent to kid, and so it at least starts from a place of continuation. I think the problem with this AI alignment and intelligence explosion issue is that in the moment that we create this AI, it actually doesn't upload our value system, because we are creating a completely new life form. And so it is not biological life. It is not DNA that is growing up to an adult to combine with somebody else's DNA to create a kid who then grows up. It's like that isn't being carried forth. So in the moment that we create AI, it has no trail of evolutionary history to imbue it with values and judgment and how to perceive
Starting point is 00:36:43 the world in an aligned fashion. And so in that creation moment, it is completely rogue, and we don't know how to understand it and it doesn't know how to understand us, because it is a completely new form of life with a completely new form of appreciating and understanding values. And I think that's the missing component. Even though there are similarities in how these things progress, the bootloader for values and alignment is missing in this AI. And I think we haven't touched on that yet. So I do some work on aliens. We could talk about that later if you want.
Starting point is 00:37:16 I'm looking forward to that part of the conversation, by the way. But I'm quite confident that compared to all the aliens out there in the universe and all the alien AIs that they would make, the AIs that we will make will be correlated with us compared to them. We aren't making algorithms randomly from the space of all possible algorithms and machines. That's not what we're making. We are making AIs to fit in our world. So, you know, the large language models made recently, the most impressive things, those are far from random algorithms in the space of all possible algorithms. They are modeled after us.
Starting point is 00:38:00 And in the next few decades, as we have more AI applications, most machines will be made by firms trying to make profits from those AIs. And what they'll be trying to do is fit those AIs into the social slots that humans had before. So they'll be trying to make the AIs like humans in the sense that they will have to look and act like humans well enough to sit in those social slots. If you want an AI lawyer, it will have to talk to you somewhat like a human lawyer would. And similarly for an AI housekeeper, et cetera. We will be making AIs that can function and act like humans exactly so that they can be most useful in our world. And we are the ones making them. And so just out of habit, we're making them like us in some abstract sense.
Starting point is 00:38:48 Now, there's a question of how much like us? And then there's the question of, well, how much did you want? And how much is feasible? And how really close are your kids anyway? Or your grandkids? Because just remember how much we humans have changed. I think when you look at historical fiction or something, it doesn't really come across so clearly. We humans have changed a lot.
Starting point is 00:39:12 and are changing a lot, even in the last century. If you just look at the creative change of human culture and attitudes and styles in the last century, project that forward a hundred more centuries. You've got to be imagining our descendants could be quite different from us, even if they started from us. And it's interesting, mostly software changes, would you say, like at the cultural level. I mean, human hardware hasn't really changed that much. Recently, yes, although we have substantially changed the hardware
Starting point is 00:39:42 too, but yes, most lawful. But in the future, we will be able to make hardware changes to our descendants. I have this book called The Age of M, Work, Love, and Life from Robots Rule the Earth, and it's about brain simulations. And so this is where we make very human-like creatures who are artificial, using artificial hardware, but then they can modify themselves and become more alien more easily, because they can more easily modify their hardware and software as they are basically computer simulations of human brains. So if that happens soon, then even that human line of descendants will be able to become quite different in a relatively short time. Ryan, if you thought the AI alignment problem would throw you for a tizzy, I can't wait
Starting point is 00:40:24 until we get into the conversation about synthetic biology separating humans to some be gods and others not be gods, but that'll be a different podcast. Robin, I think in your argument here, you baked in the belief, the assumption that these AIsab, AIs will adopt our values merely by like osmosis from the devs and the engineers who are coding them up, because they will code them up to do certain things and behave in certain ways, using characters on our English or our keyboards, for example, and just merely by being association of being created by us, it's actually impossible to not imbue them with our culture and our values.
Starting point is 00:41:07 Is that what you're saying? Well, there's a big, I'm Elwood. How is it that you think your children are like you? I mean, they are basically growing up in your society. Well, mainly because they're biological cells, not computers. Humans are really quite culturally plastic. Maybe that's another thing people really don't quite get. So anthropology has gone out and looked at a really wide range of human cultures
Starting point is 00:41:28 and found that humans are capable of behaving and thinking very differently, depending on the culture they grew up in. That's the basic result of anthropology. There are some rough human universals, but mostly we're talking variation. The fact that you seem very similar with all the other humans around you is not about sort of the innate human similarity you have. It's because you are in a similar culture to them. So to just re-articulate your position here, I think we are saying that Eliezer is perhaps
Starting point is 00:41:59 fearful that the super intelligent AI and humans are so far apart that they can never come to coexist. And what you're saying is that life. as a whole has similarities no matter how it manifests or how it is expressed. Is that how you would say it? I was trying to tell you that your descendants could be really different from you. I wasn't trying to convince you that there was a bound on just how different your descendants could get. I was trying to show you that in fact, your descendants could get really different, not through this fomstner, just through the simple default way that society could continue to change. If you're going to be
Starting point is 00:42:36 scared about the fumes scenario, maybe you should be scared about that one too. We could start to talk about what we might know in general about intelligent creatures and what might be the common features across them for all alien species through all of space time or something. There probably are some general things they have in common, but they might be fewer than would comfort you. I definitely want us to get there. But really quick, just picking apart the assumptions that you laid out. And I want to see which ones more specifically you might disagree with or state in a different way than LESer. You said, you know, assumption one is that the AI improves itself.
Starting point is 00:43:09 It seems core to what Eliezer thinks. Assumption two, the owners, that is the people who program it, don't take control, don't try to stop it. Assumption three, the AI becomes an agent, an assumption four, the agent's goals change, the AI's goals change, and it ends up destroying humanity. I find some of these harder to, like, believe than others, particularly assumption four. Like, I didn't understand an Eliezer's argument. The reason that suddenly the AI destroys humanity, like that maybe we could talk about, but let's start at the top, actually. Do you have a disagreement with Assumption 1 that an AI will recursively start to improve itself? Well, remember, I tried to break a one into multiple parts to show you that it requires multiple things all to come together there.
Starting point is 00:43:52 So not only does it try to improve itself, it finds this really big, lumpy improvement, which has enormously unusual scale in terms of how far it goes before it runs out, and scope in terms of how many things it allows the improvement of, and magnitude, in that it's just a huge win over previous things. Those are all a priori unlikely things. So it's not the fact that it tries to improve itself; that seems quite likely, sure. Somebody might, well, ask a system to try to improve itself. But that it would find such a powerful method
Starting point is 00:44:25 and then still not be noticed by its owners, that gets pretty striking as an assumption. I understand. And so that's where it's tied into, like, you find it hard to believe that the owners, the creators of this AI, wouldn't be able to stop it from doing something nefarious or devious. That is also a difficult assumption. Well, it's first just noticing that, by assumption, this thing starts out at a modest level of ability, right? By assumption, this thing is comparable to many, many other AIs in the
Starting point is 00:44:56 world. So by assumption, if you notice a problem early on, then you can stop it, because, you know, you can bring together thousands of other AIs against this one to help you stop it if you want to stop it. So at some point later on in this evolution, it may no longer be something you could stop, but, you know, by assumption that's not where this starts. It starts at, you know, being comparable to other AI systems. And then it has this one advantage, it can improve itself better. And then it does. And then this other assumption, what I'd label number three, the AI becomes an agent. So how likely is an AI to become a self-interested acting agent? Is that difficult to foresee? Well, of course, some owners might make it that way, but most won't. So we're narrowing down the set here. So my old friend Eric Drexler, for example, has argued that we can have an
Starting point is 00:45:54 advanced AI economy where most AIs have pretty narrow tasks. They aren't general agents trying to do everything; they drive cars to the airport or whatever, they each do particular kinds of tasks. And that's, in fact, how our economy is structured. You know, our economy is full of industries, made of firms who do particular tasks for us. And so a world where those firms are now much more capable, and even, you know, artificially intelligent, even more than superhumanly capable, can still be a world where each one does a pretty narrow task, and therefore isn't a general agent that would, you know, change things if it became more powerful.
Starting point is 00:46:33 So if you had a system that was really good at route planning, say cars to get from A to B, if it was superhuman at that, it might just be really good at route planning. But if that's all it does, it's not plausibly going to suddenly transition to an agent who sees itself as having history and whole goals for the world
Starting point is 00:46:50 and trying to figure out how to preserve itself and make itself go. That's pretty implausible for a route planning AI. So in a plausible future, most AIs would be relatively narrow and have relativity tasks, but sometimes somebody might make more general AIs that had more general scope and ambitions and purposes. And then those might be the basis of a scenario here. But the people who created those AIs, they would know its unusual feature. They would know this one is an agent.
Starting point is 00:47:22 And they would presumably take that into account in their monitoring and testing of this thing. They're not ignorant of this fact. So the scenario whereby the route planning one just accidentally becomes an agent, I mean, that's logically possible. But now we've got to say, you know, how often do systems designed for purpose A suddenly transform themselves into something that does a different thing B? It happens sometimes, but it's pretty rare. And so let's say it gets through all of these gates, right? We have an AI that improves itself in broad ways and in ways that are, you know, somewhat lumpy.
Starting point is 00:48:00 Maybe the owners have programmed this AI to become an agent, so it's an agent acting and it's free will. This last point then, Eliezer's conclusion is like the point that was most concerning, of course, is that then this AI comes and destroys humanity. And, you know, I think his rationale is basically because why not, it would have other purposes for humanity, would just, you know, step over them. What about this assumption? So imagine instead of one AI, we have a whole world of AIs who are improving themselves and their values are diverging. That's more of a default scenario. If that happens in a world of property rights, then say humans are displaced and no longer at the center of things. We're not in much demand. We basically have to retire. Humans go off to our retirement corner and spend our retirement savings. If that stays a peaceful scenario, then all of the these AIs who, you know, change and have other purposes, they don't have to kill us. They can just ignore us off in the corner, spending our retirement savings. But there's a possibility of a revolution
Starting point is 00:49:04 say, whereby they decide, hey, why let these people sit in the corner? Let's grab their stuff. So, I mean, the possibility of a violent revolution has always been there, and it's there in the future. But in the world we're living in, that's a rare thing. And that's good. And we understand roughly why it's rare. So the thing that's happening different in Eliezer's scenario is, because it's the one AI, you see, it's not in a society where revolutions are threatening. It's just the one power. And then from its point of view, why let these people have their property rights? Why not take it? Now, I would say that the main thing there is not that it has different goals, but that it's singular. And therefore, not in a world
Starting point is 00:49:50 where it needs to keep the peace with everybody else and be lawful for fear of offending others or their retribution, it can just go grab whatever it wants. That's the distinctive feature of the scenario he's describing. In a more decentralized scenario, again, I think there's much more hope that even if AIs displace us, even if their goals become different from ours, they could still keep the peace, because plausibly they could be relying on the same legal institutions to keep the peace with each other as they keep with us. And that's in some sense why we don't kill all the retirees in our world and take their stuff. Today, there's all these people who are retired and, like, what have they done for us lately? We could all go, like, kill the retirees and take their stuff, but we don't.
Starting point is 00:50:33 Why don't we do that? Well, we share these institutions with the retirees. And if we did that, that would threaten these institutions that keep the peace between the rest of us. And we would each have to wonder who's next. And this wouldn't end well. Okay, and that's why we don't kill all the retirees and take their stuff, not because they're collectively powerful and can somehow resist our efforts to kill them. We could actually kill them and take their stuff. That would actually physically work. That's not the problem with that smart. The problem is what happens next after we kill them and takes the stuff. Who do we go for next? And where does it send? So a future of AIs who become different from us and acquire new goals and our
Starting point is 00:51:15 agents threatens us if they have a revolution and kill us and take our stuff. That's the problem there. And so L.E.I's a solution, you see, makes that seem more likely by saying there's just the one agent. It has no internal coordination problems. It has no internal divisions. It's just the singular thing. And honestly, we could add that as another implicit assumption in his scenario. He assumes that as this thing grows, it has no internal conflicts. It becomes, more powerful than the entire rest of the world put together. And yet, there are no internal divisions of note. Nothing to worry about. There's no code forking. Right. It doesn't have different parts of itself that fight each other and that have to keep the peace with each other because that's why we have
Starting point is 00:52:01 law and property rights you see in our world is because we have conflicts and this is how we keep the peace with each other. And he's setting that aside by assuming that it doesn't need to keep the peace internally because it's the singular thing. So we should really really, hope for a pluralistic world of many AIs. And in fact, you think that's a more likely world anyway. Of course, yes. So we're already in a world of great many autonomous parts, right? We have not only billions of humans, but we have millions of organizations and firms and even nations and government agencies. And one of the most striking features of our world is how it's hard to coordinate among all these differing interests and organizations. And one of the most striking features of our
Starting point is 00:52:44 world of the mechanisms we use to keep that piece and to coordinate among all these divergent, conflicting things. And one of the moves that often AI people make to spin scenarios is just to assume that AIs have none of that problem. AIs do not need to coordinate. They do not have conflicts between them. They do not have internal conflicts. They do not have any issues in how to organize and how to keep the peace between them. None of that's a problem for AIs by assumption. They're just these other thing that has no such problems. And then, of course, that leads to scenarios like, then they kill us all. You know Uniswap as the world's largest decks with over $1.4 trillion in trading volume.
Starting point is 00:53:22 But it's so much more. Uniswap Labs builds products that lets you buy, sell, and use your self-custody digital assets in a safe, simple, and secure way. Uniswap can never take control or misuse your funds, the bankless way. With Uniswap, you can go directly to defy and buy crypto with your card or bank account on the Ethereum layer 1 or layer 2's. You can also swap tokens at the best possible prices on Uniswap.org. And you can also find the lowest floor price and trade NFTs across more than seven different
Starting point is 00:53:52 marketplaces with Uniswop's NFT aggregator. And coming soon, you'll be able to self-custody your assets with Uniswop's new mobile wallet. So go bankless with one of the most trusted names in D5 by going to Uniswop.org today to buy, sell, or swap tokens and NFTs. Arbitrum 1 is pioneering. pioneering the world of secure Ethereum scalability and is continuing to accelerate the Web 3 landscape. Hundreds of projects have already deployed on Arbitrum 1, producing flourishing defy and NFT ecosystems. With a recent addition of Arbitrum Nova, gaming and social daps like Reddit are also
Starting point is 00:54:27 now calling Arbitrum home. Both Arbitrum 1 and Nova leverage the security and decentralization of Ethereum and provide a builder experience that's intuitive, familiar, and fully EVM-compatible. On Arbitrum, both builders and users will experience faster transaction speeds with significantly lower gas fees. With Arbitrum's recent migration to Arbitram Nitro, it's also now 10 times faster than before. Visit Arbitrum.io
Starting point is 00:54:50 where you can join the community, dive into the developer docs, bridge your assets and start building your first app. With Arbitrum, experience Web3 development the way it was meant to be. Secure, fast, cheap, and friction-free. How many total airdrops have you gotten? This last bull market had a ton of them.
Starting point is 00:55:06 Did you get them all? Maybe you missed one. So here's what you should do. Go to Earnify and plug in your Ethereum wallet, and Earnify will tell you if you have any unclaimed airdrops that you can get. And it also does POAPs and mintable NFTs. Any kind of money that your wallet can claim, Earnify will tell you about it. And you should probably do it now because some air drops expire.
Starting point is 00:55:24 And if you sign up for Earnify, they'll email you anytime one of your wallets has a new airdrop for it, to make sure that you never lose an airdrop ever again. You can also upgrade to Earnify premium to unlock access to airdrops that are beyond the basics and be able to set reminders for more wallets. And for just under $21 a month, it probably pays for itself with just one airdrop. So plug in your wallets at Earnify and see what you get.
Starting point is 00:55:45 That's E-A-R-N-I.F-I. And make sure you never lose another air drop. Learning about crypto is hard. Until now, introducing Metamask Learn, an open educational platform about crypto, Web3, self-custody, wallet management, and all the other topics needed to onboard people into this crazy world of crypto. Metamask Learn is an interactive platform with each lesson offering a simulation for the task at hand, giving you actual practical experience for navigating Web3. The purpose of Metamask Learn is to
Starting point is 00:56:14 teach people the basics of self-custody and wallet security in a safe environment. And while Metamask Learn always takes the time to define Web3 specific vocabulary, it is still a jargon-free experience for the Crypto-Curious user. Friendly, not scary. Metamask Learn is available in 10 languages with more to be added soon, and it's meant to cater to a global Web3 audience. So, are you tired of having to explain crypto concepts to your friends? Go to Learn. dot metamask.io and add metamask learn to your guides to get onboarded into the world of web three. Right. Like, AIs are a monolith. But I think one of the reasons why I appreciate just your line of reasoning, Robin, and how you think is that you tap into what seems to be fundamental truth of
Starting point is 00:56:55 this universe that you would find here on planet Earth or in a galaxy far, far away. Certain things, I think can be assumed no matter what the environment is. And then I think a lot of your logical conclusions are just like natural extensions of that. I was just going to say, I think a lot of disagreements in the world are often based on people having sort of different sets of abstractions and mental tools and then finding it hard to merge them across topics. So I think when a community has a shared set of abstractions and mental tools, even when they disagree about details, they can use those shared abstractions to come to an agreement. But when you have people with just different sets of abstractions, that's true. So I'm
Starting point is 00:57:34 bringing a lot of economics to this. Other people might be bringing a lot of computer science, but I'm going to play my polymath card and say, I've spent a lifetime learning a lot of different sets of conceptual tools and intellectual systems, including computer science, certainly big chunks of it. And so I'm trying to integrate all those tools into an overall perspective where I can sort of pull in each observation or insight into this sort of overall structure. So is this the economic reason the robots aren't going to come kill us then, maybe? Is that what you're kind of providing? Or just, if they kill us, they would do us in the usual economic ways.
Starting point is 00:58:15 So economics doesn't assure you that nobody will ever kill you, okay? They have to have good reasons. I mean, people have been killed in the world in the past, but, you know, we have an understanding of the main ways that, in the last few centuries, people have been killed. You know, that's been something people have paid attention to. How do people get killed? How does that happen? And so, you know, murder is one kind of way people get killed. War is another way.
Starting point is 00:58:40 Revolution is another way. Or sometimes just displacement, where something outcompetes you. And then you don't have any place to survive. So in some sense, like, horses got outcompeted by cars at some point. And they suffered substantially. And we understand how that works out. So that's the sort of thing that can happen to humans. We could suffer like the way horses did.
Starting point is 00:59:00 That's interesting. So I'm not trying to tell you nothing could go wrong. Did horses suffer, though? I mean, they are... By population standards, they diminished significantly. Did any individual horse suffer and feel suffering as a result of cars? Probably not. Seems like a good life to be on an equestrian farm rather than sort of slaving in a, you know, a cityscape being whipped by a buggy master.
Starting point is 00:59:23 My understanding is horse population is now, you know, as high as it ever was. But of course, you know... This is not a fact that I keep ready to mind. It's not as high as you might have projected had they continued previous growth rates. So there was a substantial decline and then a recovery. But now most horses are pets and not workhorses, but still. I'm not sure if I'm ready to be a pet, but that's a problem for my kids probably, hopefully. Just a quick scenario, Robin. What's more likely: a single monolithic, superintelligent AI does the Eliezer thing, or we humans have a robot-human conflict, a war, and it's more
Starting point is 01:00:03 like kind of maybe in the traditional sense where we have two sides. Which is more likely? So that second one seems far more likely to me, but you should just put it into context. That is, humans at the moment vary by an enormous number of parameters. We vary by gender and age and profession and geographic location and wealth and personality. And in politics, especially, we try to divide ourselves up and form teams and coalitions by which together we will then oppose other coalitions. And this is just an ancient human behavior where we form coalitions and fight each other. And we expect that will continue. So arguably, say, democracy has allowed us to have more peaceful conflicts, where coalitions fight in elections rather than in wars. But even in, say,
Starting point is 01:00:54 firms, you know, there's often political coalitions that are fighting each other. And there's always the question, what is the basis of the dominant coalitions? So there's this wide range of possibilities. You could have a gender-based one, you know, the men fighting the women, you could have an age one, the old people fighting the young, you could have an ethnicity one, you could have a profession one. So in a firm, it might be the engineers versus the marketers. Right. And so humans versus robots, or our robotic descendants, is one possible division on which future conflicts could be based. That's completely believable. And I can't tell you that can't happen.
Starting point is 01:01:33 The main thing I'll just point out is that it will be competing with all these other divisions. So will it be the humans-versus-robots conflict, or will it be the old versus young, or will it be the word cells versus the shape rotators? I mean, there's all these different divisions, and it could well be that there's an alliance of human word cells and AI word cells versus human shape rotators and AI shape rotators. And that becomes the future conflict, you see, because in some fundamental sense, the division of the conflict is indeterminate. That is a fundamental thing we understand about politics: whatever divisions you have, they're unstable to the possibility of some new coalition forming instead.
Starting point is 01:02:15 That's a basic thing we understand about politics. It's hard to keep stable coalitions because they're so easily undermined by new ones. At least with the human-versus-robot coalition, like, looking into past human behavior, we tend to be pretty racist. But I think when we have robots, it would be really easy to forget our internal conflicts when there's a completely different competitor for resources. Like, why do we fight? Why do humans fight? It's usually over resources, like economic resources. And when there is a new species that is subdividing and iterating and growing as humans do, that's also sucking up the resources, and they look like, I don't know if they're going to be metal in the future, but that's my current vision of them: like metal, silicon, Terminator-type robots walking around.
Starting point is 01:03:00 And there's only so many resources on the planet. And so, like, that would be a pretty easy dividing line between humans and robots that I could imagine would make that conflict much more likely. And so regardless of, like, how, maybe it's the Eliezer way in which a super monolithic, superintelligent robot comes, and we have to fight that. Or, like, at some point there's conflict, potentially, and I might even say likely, if there is a different, let's call it a species. Recently, this is kind of an aside, this is going back to, like, the superintelligent stuff, but I think we can now call this just AI conflict. The Future of Life Institute released an open letter calling for the pause of all general
Starting point is 01:03:41 AI experiments. A few people signed it. Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang. It's basically a call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. So this letter says, don't go beyond GPT-4. Beyond there, it gets even scarier. Let's pause. Let's halt.
Starting point is 01:04:06 Let's figure out this AI alignment issue first. I just want to get your reaction to this letter and people signing it, Robin. Like, would you sign this letter or are you against signing this letter? And just, what do you think about the idea of this letter? So first, just notice that we've had a lot of pretty impressive AI for a while now. It's when the AIs are the most human-like, with these large language models, that people are the most scared and concerned. So that suggests that maybe a very advanced AI will look pretty human-like in many ways.
Starting point is 01:04:40 And don't forget that our descendants will start to add metal to themselves and become different over time, just like their brain emulations are metal and quite different. So again, it's not so obvious where the division line would be. But to go to this particular letter: first of all, with respect to the general concerns they have, if we had a six-month pause, at the end of that, we really wouldn't know much more than we know now. The main purpose of the pause would seem to be to allow, say, the time for government to get its act together and institute some more official law that enforces such a pause to continue.
Starting point is 01:05:14 That would be the main purpose for the pause. You'd be wanting to support the pause if you were wanting that further event to happen. It's not like we're going to learn that much in six months. Or how about if maybe you were a competitor and wanted to catch up? So then we go to: first, if we could do the pause, would it be a good idea? And then one of the issues is, like, who would be participating and who not. So the ideal thing is, say, we could get a global pause somehow. Would that be a good idea?
Starting point is 01:05:41 Now we're basically talking about: should we basically shut down this technology for a long time until people feel safer about it. So for that issue, I think the comparison with nuclear energy is quite striking. Basically, around 1970, the world decided to back off of nuclear energy, and we basically instituted regulatory regimes that allowed the safety requirements asked of nuclear power plants to escalate arbitrarily until they started to cut into costs. And that basically guaranteed this would never become lower cost than other ways of doing things. And people were okay with that because they were just scared about nuclear power.
Starting point is 01:06:20 So basically, the generic fear didn't go away. And they just generically said, this just could never be safe enough for us. Whatever extra budget we have, we want it to be safer. And that's the way they put it. And so I would think a similar thing would happen with AI. The kind of reassurances people are asking for are just not going to be feasible for decades, at least. So you'd basically be asking for this to be paused for decades. And, you know, it's even hard to imagine eventually overcoming that.
Starting point is 01:06:48 Because, you know, the fundamental fears, as we've been describing, are just the idea that they might be different. And they might have different agendas and they might outcompete us, and that's just not going to go away. So I would see this as basically, do you vote for substantial technological change or not? And I get why many people might think, look, we're rich enough, we're doing okay. Let's not risk the future by changing stuff. And they voted that way on nuclear power, and they might well vote that way on AI. I would rather we continue to,
Starting point is 01:07:16 I think we have enormously far we can go if we continue to improve our tech and grow, but I can understand why many people think, nope, we got lucky so far, things didn't go too bad, we're in a pretty nice place, why take a chance and change anything? So that's all, if it was possible,
Starting point is 01:07:31 to actually have a global enforcement of such a pause and then a further law. But of course, that just looks really hard. That is, you know, this technology is now pretty widely available. That is, you know, it might be that the best new systems are from the biggest companies that can afford the most hardware to put on it. But the basic software technology here is actually pretty public and pretty widely available. And so, you know, over the next few decades, even if you manage to say no more than a billion-dollar project can do this, you're going to have lots of less-than-billion-dollar projects doing this. And of course it'll be hard to have a global ban, and so the U.S. now has a commanding lead, and
Starting point is 01:08:12 the main effect of a delay, if it's not global, would be to take away the U.S. lead. And it's just, this looks like a hard technology to ban, honestly. You know, you might be able to get Google and OpenAI and Microsoft or something to pause their efforts because, you know, they are big companies with pretty public activities. And Robin, I'm trying to understand. So even if it was enforceable, I understand the reasons you give why it's not enforceable and why it's very difficult to do some sort of global ban.
Starting point is 01:08:43 Let's say it was for a minute. Would you support it? Do you think this is worth pulling the fire alarm over? Again, I think it's comparable to say genetic engineering or nuclear energy or some other large technologies that we've come across in the last few decades where there really is huge potential, but there's also really big things you could be worried about.
Starting point is 01:09:00 And honestly, I think you just have to make a judgment on the overall promise-versus-risk framing. You can't really make a judgment here based on very particular things, because that's not what this is about. We made a judgment on nuclear energy to just back off and not use it that much. That's a judgment humanity made 50 years ago. Within the last few decades, we made a similar judgment on genetic engineering, basically. Nope, we just don't want to go there, for humans at least. And we may be about to make a similar decision about AI. But honestly, this trend looks bad to me, because many people think social media is a mistake,
Starting point is 01:09:39 and maybe we should undo that and go back on that. So the trend of blocking technological progress is bad to you in general, whether it's nuclear or genetic engineering or social media or AI or any of these things. Right. I actually am concerned that this is the future of humanity actually here. So I did this other work on Grabby Aliens, on sort of the distribution of aliens in space-time. And in that framework, the most fundamental distinction between alien civilizations is the one between the quiet ones, who stay in one place and live out their history and go away without making much of a mark on the universe, and loud ones, who expand and then keep expanding until they meet other loud ones. And I can see many forces that would tend to make a civilization want to be quiet. And that's what we're talking about here.
Starting point is 01:10:31 That is, even in the last half century, the world has become a larger integrated community, especially among elites, whereby regulatory policy around the world has converged a lot, even though we have no world government. You certainly saw that in COVID, but you also see it in nuclear energy and medical ethics and many other areas. Basically, the elites around the world in each area talk mainly
Starting point is 01:11:03 in a wide range of areas. And people like that, I think, compared to the old world. Certainly, it's reduced civil wars of various kinds. And people like the idea that instead of nations fighting and competing with each other, that we're all talking together and deciding what to do together and that that sort of talking may deal with global warming. It may deal with inequality. It may deal with overfishing. There's just a bunch of world problems that these people talking together feel like they're solving. And people will like this world we're moving into where we all talk together and agree together about what to do about most big problems. And that new world will just be much more regulated in the sense that they will look at something like nuclear energy. And then everybody
Starting point is 01:11:46 says, nope, we don't want to do that. And let's shame anybody who tries to do that, and slowly, together, limit humanity's future. And that could go on for thousands of years, and then if we ever have a point where it was possible to send out an interstellar colony to some other star, we will know that if we allow that, that's the end of this era. Once you have a colonist go somewhere else,
Starting point is 01:12:08 then they are out of your control, they are no longer part of your governance sphere, they can make choices that disagree with what you've done, they can then have descendants who disagree, they can evolve and become different from the center and come back eventually to contest control over the center. So that becomes a future world of competition and evolution that could go to very strange and stark places.
Starting point is 01:12:33 But if we would all just stay here and not let anyone leave, then we can stay in this world of us. We talk together, we decide things together. We only allow our descendants to become as weird as we want them to be. If we don't want a certain kind of weird descendants, we just shut it down. And that's the quiet civilization that we may become. And that's kind of what's at stake here, I would say, with banning AI.
Starting point is 01:12:57 It's one of many questions like that that we are answering about: do we want to allow change and new large capacities that might threaten strangeness and conflict? So I think this is actually the moment where this podcast episode goes from continuing the conversation that we had about AI with Eliezer and all of those alignment problems in that conversation, and this actually becomes a part of a larger conversation that we've been having on Bankless for a while now. And this has to do with the status quo versus innovation and progress, as well as with what you were just saying, Robin, about grabby aliens. And so I want to try and connect these dots really quick. This idea of AI and AI innovation, along with crypto innovation, and whether or not it should be regulated by the
Starting point is 01:13:47 elites, by the status quo, and whether it should be contained, and are the elites happy with the harmony of the social order? And perhaps we shouldn't have new competition and new exploration into the frontier, because that is how we maintain the social order, because there's nothing new that's happened. What you're saying this does is it keeps us in, it's like an isolationist approach, except from, like, inside of planet Earth. And I think, being the future tech optimists that Ryan and I are, and I think you are as well, you aren't for that. You would like to penetrate that isolationism that comes from, like, the social elite saying, hey, let's not experiment with crypto or AI or longevity
Starting point is 01:14:33 or synthetic biology research. Let's just, like, keep everything harmonized and in control. And we will use our large centralized power to keep the world under control. And then we have this other conversation that we're about to go into, which is grabby aliens, which is: whoever these alien species are that are expanding out into the world, they chose to not do that. They chose to explore the frontier. They chose to innovate under the guise of competition, of capitalistic competition, to innovate and start to expand outwards into space. And I think baked into your argument is that you actually do need competition in order to explore the frontier. And so I'm wondering, A, if that was a good summary, and B, kind of like, do you see that picture of just, like,
Starting point is 01:15:19 how this concern about AI, or concern about progress in general, is also linked to, like, the grabbiness or quietness that you see in aliens? And maybe you can characterize these different kinds of aliens and the choices that they make as a civilization? Yes, I thought that was a reasonable summary. I think when we see people today discuss the possibility of our descendants spreading into the galaxy, they are often wary and a bit horrified by the impact it might have. That is, the sort of people we've become over the last century are people who find that a jarring and even unpleasant scenario.
Starting point is 01:16:01 because it is actually fundamentally jarring and unpleasant. So I am with you in wanting to allow such changes, but I want to be fully honest about the cost that we are asking the world to accept. That is, if you wanted our descendants to just stay pretty much like this indefinitely, that's not what we're talking about here. The cost of allowing our descendants to expand into the universe and explore technologies like AI and nuclear power, etc., is literally alienation. That is, we are now alienated from our ancestors. Our world and lives are different, and we feel, at some level, that we were not built for the world we're in.
Starting point is 01:16:47 This is an alien world that we're in compared to the world we were built for. We feel that deep inside us. And that will continue. It will only get worse. And, you know, the time it'll get better is when we can go change who's inside us to become more compatible with these alien worlds, but that will make those descendants even more different from us. So that's really the cost you have to be weighing.
Starting point is 01:17:12 So this future world of strange new technologies is also a competitive world. And that competitive world includes conflict. It includes some people, some kinds of things, displacing others, some things just being shunted aside and marginalized. And it may even include war, violence. It certainly probably includes radical change to nature. Not just biology on Earth, but our descendants who go out into the universe would likely not just pass by and plant flags.
Starting point is 01:17:44 They will take things apart and rearrange them and radically change them. And sometimes that'll be ugly, and sometimes it'll be violent, and sometimes it'll leave crude, ugly waste and be inefficient where they could have done it better. That will be the course. And this universe we see now, that's pristine and, you know, the way it was from long ago, will just be erased. That's the cost. So I want to explore this idea of grabby aliens. And I'm sure listeners who are being thrown into this odd adjective, grabby, might be a little bit confused.
Starting point is 01:18:24 And so I'm hoping we can explain the nature of grabbiness. But I'm hoping we can actually do it inside of the context of planet Earth and human history. Because I think that naturally extrapolates into the galaxy, because that is the place where grabby aliens play. And first, I think I want to ask you the question: humans, are we grabby? Because if you look back in history, you have some sort of quiet human species, human tribes, that were found by the grabby humans. You can call these the conquistadors or the conquerors, right? The Roman Empire.
Starting point is 01:19:01 Very grabby empire. Any sort of empire that looked outward and expanded, I would, in trying to understand Robin Hanson's work on grabbiness, call any sort of empire that expanded grabby. And then these grabby empires found the quiet, like, tribes that were probably peaceful, and grabbed them and then assimilated them into the grabbiness. And so this is kind of how I would present this inside of a context that we understand, because we understand human history. But I want to ask you this very basic question of just, like, human nature. Are we grabby? So almost all biology
Starting point is 01:19:35 has been grabby, and therefore almost all humans, but it's not so much about our nature. So the fundamental point here is there's just a selection effect. That's the key point. That is, if you have a range of creatures with different cultures or biological tendencies, and some of them go out and expand and others don't, if there is a place they can expand to and they actually, you know, could reproduce there, then there's a selection effect whereby whichever ones do that, they then come to dominate the larger picture. That's just the key selection effect. So there may be many alien species and civilizations in the universe, and maybe most of them
Starting point is 01:20:14 choose not to expand, but the few ones who do allow expansion, they will come to dominate, by space-time volume, the activity of the universe. And that's how evolution has worked in the past. It's not that all animals or all plants are aggressive and violent and hostile. It's that they vary. And some of them have a habit of sticking in one place and hiding, others have a habit of jumping out and going somewhere else when they can, and the net effect of the variation in all their habits is: when there's a new island that pops up, it gets full of life, because some of those things that move land there and grow. A mountain grows higher, and then new life shows up at the top of the mountain, and a new niche of any sort opens up where life is possible there,
Starting point is 01:21:04 and then some life goes there and uses it. That's just the selection effect. So that's what we should expect in the universe. There's the question of which way we will go. And if I focus on humans, I'd say it's a trade-off between what would happen if we don't coordinate and how hard we will try to coordinate. In an uncoordinated humanity, there's certainly enough variation within humanity. Some of us would go be grabby. It might not be most of us, but certainly some of us, given the opportunity, would go grab Mercury or Pluto or whatever else it is and then go out and grab further things. We might choose to prevent that. We might choose to organize and coordinate so as to not allow those things to happen. And we might succeed at that.
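The selection effect Robin keeps returning to is easy to sketch numerically: it doesn't matter what most lineages choose, because the few that expand come to dominate by sheer volume. This is only a toy illustration of that dynamic; all the numbers are arbitrary.

```python
# Toy sketch of the selection effect: a tiny "grabby" lineage that keeps
# claiming new niches ends up dominating a much larger "quiet" lineage
# that stays put. The specific numbers are arbitrary illustrations.

quiet_pop = 1_000_000   # a big lineage confined to its one niche
grabby_pop = 1          # a tiny lineage that expands whenever it can

for generation in range(30):
    grabby_pop *= 2     # expanders double by colonizing new territory
    # the quiet lineage stays bounded by its single niche, so it doesn't grow

share = grabby_pop / (grabby_pop + quiet_pop)
print(f"after 30 generations, expanders are {share:.1%} of the total")
```

The point isn't the numbers; it's that any unbounded expander eventually dominates by space-time volume, which is why, on this argument, the loud civilizations set what the universe ends up looking like.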
Starting point is 01:21:50 We have enough capability perhaps to do that. And so then it becomes a choice. Will we allow it? But basically, whenever you're talking about something that it only takes a small fraction of us to do, and we vary a lot, then the question is, will we allow that variation to make it happen or will we somehow try to lock it down? The Bankless audience is pretty familiar with the idea of Moloch. It's a topic that we've revisited a number of times. Are you familiar with Moloch? I'm familiar with the famous Scott Alexander essay on it. Yeah. Although I think the concept isn't entirely clear in that essay. Sure. Yeah. So Moloch, just being, like, the idea of the prisoner's dilemma: say you have two or
Starting point is 01:22:28 almost any number of human tribes on the earth, and most of them decide to be quiet and peaceful. It really only takes one to be grabby, and that one will come to dominate the earth, because it chose to be grabby and it grabbed everything else. So it's almost this prisoner's dilemma, about how if you choose to not be grabby, you are implicitly making the choice of being grabbed by the larger tribe that has elected to be grabby. And I think this is how we extrapolate this into the future with your grabby aliens thesis, where, sure, there are many civilizations out there. Maybe there are many like us that only exist on one planet. And we have a bunch of elites on the planet that say, hey, let's not investigate AI and let's not investigate longevity or genetic engineering.
Starting point is 01:23:16 Let's just stay put. And we would call these quiet aliens, or, you know, us being the quiet aliens. The choice being made is that grabby aliens are eventually going to arrive on Earth and grab us. And so if you don't become a grabby alien, you are going to be grabbed by somebody else. And so this is why I think this moment in human history, when we have this letter saying, hey, let's pause AI research, is what you are focusing on: like, well, this is a very important decision point for humanity as to whether we choose to be quiet or not quiet. And of course, this isn't the only choice, but this is one of the many choices down a long list of choices that could actually decide culturally what we want to be, at least for the short term. Is this how you see
Starting point is 01:24:00 this fork in the road as we currently are? Well, let's just clarify: say, in a peaceful society like ours, we could think of a thief as grabby. And then we could say, well, if we don't steal, somebody else will steal, so I guess we should steal. And you could imagine a world where that was the equilibrium. But if we coordinate to make law, then we can coordinate to watch for a thief and then repress them sufficiently so as to discourage people from being thieves. So a universe of sufficiently powerful aliens could coordinate to prevent grabbing if they wanted. The claim, which I believe is true, is that in fact the universe hasn't done that.
Starting point is 01:24:42 It might be that within our human society, we have coordinated to enforce particular laws, but out there in the rest of the universe, it's just empty, and there's pretty much nobody doing anything through most of it that we can see. And so it is really just there for the grabbing. No one's going to slap our hands down for grabbing the stuff. We can just keep grabbing until we reach the other grabby aliens, at which point, then we might try to set up some peaceful law to keep the peace between us and them. But we don't have to fight wars with other grabby aliens per se.
Starting point is 01:25:15 But if there's all this empty stuff between here and there, then it seems like you either grab it or somebody else does. I'm wondering if we may have blown past some listeners here who heard us just talking about alien civilizations. They're coming to grab Earth. And they're like, what are you guys talking about? Where's, like, all of these alien civilizations, Robin, David? We don't see them anywhere when we look up at the stars. But that is what your grabby aliens paper is all about. I think the synopsis of the grabby aliens paper packs this punch. If loud aliens explain
Starting point is 01:25:46 human earliness, quiet aliens are also rare. Robin, can you sort of explain what your grabby aliens idea actually is, and why there might be future alien civilizations that are expansionary and coming our way, and why we might want to be a civilization that rises up and expands in our own sphere of influence in order to meet them? So we're going to go through this briefly and quickly. Turns out there's just a Kurzgesagt video that came out yesterday that now has 2.6 million views that's explaining some of the basics of grabby aliens, in case people want to see that. Kurzgesagt, the cute animations that do these very technical things in very nice ways.
Starting point is 01:26:27 Congratulations on that, by the way. So the key idea is we wonder about the distribution of aliens in space-time. And one possible theory you might have is that we're the only ones at all, and in the entire space-time that we can see, there'll never be anybody but us, in which case the universe would just have waited for us to show up whenever we were ready. We can reject that interpretation of the universe because we are crazy early. So our best model of how advanced life like us should appear says that we should be most likely to appear on a longer-lived planet toward the end of its history. And our planet is actually
Starting point is 01:27:09 very short-lived. You know, our planet will last another billion years, for roughly five billion years total of history. The average planet lasts five trillion years. And because life has to go through a number of hard steps to get to where we are, there's actually a power law in terms of when it appears as a function of time, the power being the number of steps. And so, say the steps are six: then the chance that we would appear toward the end of a longer-lived planet, rather than now on this planet, is basically that factor of a thousand in their lifetime raised to the power of six for this power law, i.e., 10 to the 18 times more likely to have appeared later on in the universe. So we're crazy early relative to that standard. And the best explanation for that is there's a deadline soon. The universe is right now filling up with aliens taking over everything they can. Soon, in say a billion years or so, it'll all be full and all taken, at which point you couldn't show up later on and be an advanced civilization. Everything would be used for other stuff. And that's why you need to believe they're out there right now. So now that you've got to believe they're out there right now,
Starting point is 01:28:18 you wonder, what's going on out there? And for that, we have a three-parameter model where each of the parameters is fit to a key data point we have. And this model basically gives you the distribution of aliens in space-time. And, you know, if you like, we can walk through what those parameters are and what the data point we have for each one is. But the end story is civilizations typically expand at a very fast speed, a substantial fraction of the speed of light. They appear roughly once per million galaxies, these grabby alien civilizations.
Starting point is 01:28:52 And if we head out to meet them, we'll meet them in roughly a billion years, expanding near the speed of light. So they are quite rare, that rare, but not so rare as to leave the universe empty. That is, at once per million galaxies, with many trillions of galaxies, that means there are millions of them out there. And right now, the universe is roughly half full of them. So that seems strange.
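The "millions of them" count is simple arithmetic; the two-trillion-galaxy figure is a common estimate for the observable universe, supplied here as an assumption rather than taken from the conversation:

```python
# Rough count of grabby civilizations, using the rate quoted above.
n_galaxies = 2e12            # ~2 trillion observable galaxies (assumed estimate)
civs_per_million_galaxies = 1

n_civilizations = n_galaxies / 1e6 * civs_per_million_galaxies
print(f"expected grabby civilizations: {n_civilizations:.0e}")
# → on the order of millions, even though the sky looks empty
```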
Starting point is 01:29:20 The universe looks empty, but you have to realize there's a selection effect. Everywhere you can see is a place where, if there had been aliens there, they would be here now instead of us. So the reason things look empty is because you can't see a place where they are because they would move so fast from where they are to get to here that here would be taken. The fact that we are not now taken here says that no one could have gotten here and therefore think. If you were able to look out into the stars and see the aliens, that's nonsensical because if that would be possible, they would have already grabbed you by that time. Right, because they move so fast. There's a relatively small volume of the universe where, you could see them, and they haven't quite got here yet.
Starting point is 01:30:05 Most of the places you could see they would be here. And grabbing you doesn't necessarily mean destroying you. It just means possibly expanding to the borders such that you can't expand into their borders. It would be enveloping you and then changing how the world around you looks. So we can be pretty sure we have not now been enveloped by a Arabian civilization, because we look around us and we see pretty native stars and planets, which are not been radically changed. So yes, in the future, we might be involved and other things out there might be involved. Well, we're not now.
Starting point is 01:30:37 We couldn't see this situation we're in if those alien civilizations had come here. See, and this is actually why this intersects with the LEASER AI problem, because the way that you said that the civilizations that are out there would have come and enveloped us and then changed the environment that is around us, to think hopefully leaving us at peace. But like, this is the AI alignment problem. in another form where like another rogue alien civilization is also another paperclip maximizer and they're out just gathering all the resources doing the things that they do according to their
Starting point is 01:31:11 values hopefully their values are that when they do or expand into our civilization they leave us alone because some alignment is still there but it is the same fundamental structure of like there is these goals and alignments with the universe around them and these aliens expand and they change the atoms of the matter that they expand into. And because we haven't seen that yet, because that's the assumption that we have, but because we haven't seen that yet, you are able to, and your grabbing aliens paper actually like kind of place us in the arc of history because of this assumption that gravy, aliens are grabby and that they will attempt to grab things. And to add to that, I mean, they might be artificial intelligences as well.
Starting point is 01:31:52 Sure. Wouldn't they, Robin? Almost surely they are. Yeah. You know, anything, you know, within a thousand years, I expect our descendants to be almost entirely artificial, and certainly within a million years, and these things would be billions of years older than us. So, yes, our artificial descendants will meet their artificial descendants in maybe a billion years, and they won't have saved something like us. Now, I can give you a little more optimism in the sense that if aliens, these rabbi civilizations appear once per million galaxies, if the ratio of quiet to loud ones is even as high as a thousand to one, that would mean that in this expansion that they've been doing, that they will do, they'll only ever meet a thousand of these quiets as they expand through a million galaxies.
Starting point is 01:32:38 And so these rare places where an alien civilization appeared would be pretty special and worth saving and isolating because grabby alien civilizations should be really obsessed with what will happen when they meet the other graby civilizations. They'd be really wanting to know, what are these aliens like? Because they'll have this conflict at the border and they will wonder, are we going to be out tossed somehow? Will they trick us somehow? What's going to happen when we meet the border?
Starting point is 01:33:02 But they might make a national park out of us then. Might turn us into a zoo. So every gravity civilization will be really eager for any data they can get about what are aliens like. And so this small number of quiet civilizations they come across will be key data points. They will really treasure those data points in order to just give us some data about what could aliens be like? And so that would be a reason why if aliens came and enveloped us, they would mainly want to save us as data about the other aliens. Now, you know, that doesn't
Starting point is 01:33:32 mean they don't freeze dry us all and run experiments, etc. I mean, it's not necessarily going to let us just live our lives the way we want, but they wouldn't just erase us all either. Well, Bankless Nation, I elect Robin Hansen to make the case for not freeze drying us and to preserve us to the aliens if they come at some point in time. But this is not necessarily in your term that they're coming, but it's more kind of the rate of spread. One interesting aspect of the model is, would it be accurate to say, Robin, that the model predicts alien civilizations
Starting point is 01:33:59 to spread like cancer? And I mean that maybe mathematically, you know, without the negative connotation that that brings. Well, alien civilizations are created even more like cancer. So in your body, you have an enormous number of cells.
Starting point is 01:34:16 And in order for one of your cells to become cancerous, it needs to undergo roughly six mutations in that one cell during your lifetime. So that's basically the same sort of hard steps process that planets go through. Planets are each, in order to achieve an advanced civilization, they also need to go through roughly six mutations. That is, the mutations are each unusual thing has to happen,
Starting point is 01:34:41 and then the next unusual thing happens, and then the next unusual things happens until all six have happened. And then you get something like us. So the key idea is there's a million galaxies, each of which have millions of planets. And then all of these planets are trying to go down this path of having all these mutations, but almost none of them do successfully by their deadline of life no longer be possible on that planet. And it's a very rare planet like ours for which all six mutations happen by the deadline
Starting point is 01:35:14 of life no longer being possible on the planet. And that's how cancer is in your body. That is, 40% of people have cancer by the time they die, and that means one of their cells went through all six of these mutations, but that was really unlikely. Vast majority of cells only had one or zero mutations. And so life on a planet reaching advanced level that it could expand in the universe is mathematically exactly like cancer. And so it follows the same parallel with time, actually.
Starting point is 01:35:44 So the probability that you get cancer as a function of your life is roughly time, to the power of six because it takes roughly six mutations. That's why you usually get cancer near the very end. And the chance that planet will achieve advanced life is roughly the power of six of time. And that's why, in fact, in the universe, universe is appearing over time faster and faster, according to roughly a power of six because of this exact power law. And so the very early universe had almost nothing. And then recently, we've showed up, but around us, they're all pop, pop, pop, puppy. And the rate at which they're appearing now is much faster than it was in the past because of this parallel. And it shouldn't be lost on listeners that cancer is grabby. Cancer falls in the
Starting point is 01:36:30 grabby category. And so there's a bunch of quiet cells that are just minding their own business, doing their job in harmony with their neighbor cells. And then one cell goes rogue and decides, I'm going to grab everything that I can around me and I'm going to grow to my best ability. And so, like, it's just interesting to see no matter what scales or what mediums we perceive to be, whether it's a biological cell, it is human species as a whole, it is this theoretical AI super intelligent robot, but like these same structures continue to show up. And so, Robin, thank you for helping us navigate all of these different planes of existence and being able to reason about them all at once. Well, we did a brief survey, but happy to come and talk again. sometime if you like. Right. Yeah. There's a number of different rabbit holes that we did not go down in the interest of time. How about this? Because we've got a crypto audience. Robin, do you have any hot takes
Starting point is 01:37:23 on crypto? What do you think of this stuff? How about that? I mean, check that box. I don't have a new take on crypto. My old take has always been, you know, for any new technology, you need both some fundamental algorithms and some fundamental tech. And then you need actual customers and some people to pay attention to those customers and to their particular needs, and you have to adapt the general technology to particular customer needs. Crypto unfortunately moved itself into a regime where most of the credit and celebration and money went from having a white paper and an algorithm, and not so much for actually connecting the customers.
Starting point is 01:38:02 And so unfortunately, there's this huge appetite for tools and platforms under the theory that if we make a tool and platform, other people will do the work to connect that to customers. And unfortunately, there's not so many people who are sitting in that next roles. But them succeeding at that task is the main thing that will make the rest of crypto succeed or not. There's plenty of tools and platforms, not so many people trying to market concrete products to particular customers and holding their hand, working with them when it doesn't work with them, changing it somehow, iterating in order to make a product actually work for concrete customers.
Starting point is 01:38:38 That's how pretty much all business innovation needs to happen in crypto as well as everywhere else. Crypto isn't different by this regard. It's just that crypto sort of fell into this world where you got all the recognition and attention of money by writing white papers and implementing their first version of the algorithm that you had then shipped and then moved over to another company to write another white paper and algorithm, right? Instead of actually staying with the algorithm and trying to get customers to use it. So I wish crypto well. There's lots of interesting possibilities there, but that's in my mind. The major problem with crypto is the neglect of actual customers and the messy details of making customers happy. Everyone in the crypto industry is recently, at least the VC landscape is talking about everyone is trying to sell picks and shovels and no one's bothering to actually sift for gold.
Starting point is 01:39:28 So maybe we used to need some more gold diggers out there. I think so. And this is Robin being a utility. show me the utility. And we certainly understand that take on crypto and we'd certainly have some work to do in that area. But I think we should have you on some time again, Rob. And I know there's so much we could pick your brain about. I know you're a huge advocate for prediction markets as a way to solve for things. And this is once the promise of crypto. Lots of other creative institution ideas that I think crypto people will be more interested the most. Crypto people are pretty open to
Starting point is 01:39:58 creative institutions. Oh, we are. So you got to come back and talk to us about sort of institutions and some of the new creative institutions because, I don't know if you noticed, Robin, but around us, it seems like a lot of our institutions are crumbling or falling to pieces, are losing their trust. I agree. Or just in the worst case, we're just locking down and not allowing much innovation or change in our institutions. And so even if they're only decaying slowly, they're still not innovating and growing. This is a to be continued bankless nation. If you haven't had enough of Robin Hansen, I certainly haven't. I could talk to this man for hours. Then let us know, and we'll see if we can get them back on another time. But you have helped.
Starting point is 01:40:33 me understand a bit more about artificial intelligence. And for that, I certainly thank you. Are you going to sleep tonight? I'm going to sleep much better tonight. Honestly, you know, yes. So thank you. There are some things in there. I think I need to relisten to and think about a little bit more. These descendants being so much unlike me, that might make me concerned. But I'm far less concerned than after the LESA episode. So I appreciate that. It's a natural concern as a dad. Sure. That's right. Action items for your bankless nation. We'll include a link to the L.E. Rhekowski episode. We're all going to die. It was called. That was seriously the title, Robin. And we'll also include a link to the AI Fume debate, which we talked about that term. I just learned what that term was.
Starting point is 01:41:13 The Age of M, a book that I'm adding to my cue from Robin Hansen, of course, talking about, this is artificial minds. Is it not, Robin? Artificial implementations of ordinary human minds. There you go. That sounds fascinating. And of course, Grabby Aliens. There is Kyrgyzat video, as well as the original website, Grabbyalians.com. We'll include all of that. in the show notes. Risk and disclaimers, got to let you know, none of this has been financial advice. It's not even space-faring civilization advice, I don't think. You could definitely lose what you put in, but we are headed west. This is the frontier. It's not for everyone,
Starting point is 01:41:46 but we're glad you're with us on the bankless journey. Thanks a lot.
