Bankless - Bryan Johnson: Don’t Die, Beating Entropy, AI Alignment & The Two-Species Future

Episode Date: November 17, 2025

Bryan Johnson lays out “Don’t Die,” a moral framework that puts existence first and turns longevity into a shared fight against entropy. We pressure-test the hard stuff: selfishness, inequality, stagnation, and the so-called two-species future. Bryan argues these are solvable with smart societal engineering if “don’t die” is for everyone. He breaks down Blueprint as a practical on-ramp with evidence-based health, behavior, and social habits. We finish on crypto, from his Braintree days with Coinbase to how web3 could power a broad “don’t die” commons.
------
📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24 https://bankless.cc/spotify-premium
------
BANKLESS SPONSOR TOOLS:
🪙 FRAXNET | MINT, REDEEM, EARN https://bankless.cc/fraxnet
🦄 UNISWAP | SWAP ON UNICHAIN https://bankless.cc/unichain
🛞 MANTLE | MODULAR L2 NETWORK https://bankless.cc/Mantle
💤 EIGHT SLEEP | IMPROVE YOUR SLEEP https://bankless.cc/eight-sleep
💠 BIT DIGITAL ($BTBT) | ETH TREASURY https://bankless.cc/bit-digital
We’re being compensated by Bit Digital (NASDAQ: BTBT) for this segment promoting their company and BTBT. The compensation is paid in cash as a one-time payment. You can find additional information about Bit Digital and BTBT on their Investor page at https://bit-digital.com/investors
------
TIMESTAMPS
0:00 Don’t Die
8:18 Beating Entropy
18:35 Existence as The Highest Virtue
21:01 Isn’t Don’t Die Selfish?
27:12 AI Alignment
31:36 Collective Don’t Die
33:34 Don’t Die & AI
37:36 Longevity Escape Velocity
40:57 The Two-Species Future
47:07 A Path Without Creative Destruction?
52:40 Bryan’s 500-Year Lifespan
59:19 Blueprint
1:04:52 Crypto
1:06:23 Closing Thoughts
------
RESOURCES
Bryan Johnson: https://x.com/bryan_johnson
Bryan Johnson Links: https://bryanjohnson.komi.io/
------
Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 And that's what Don't Die is about. It's a new moral philosophy that says existence itself is the highest virtue, not profit, not status, not power. Existence itself is the highest virtue. We're here with Bryan Johnson. Bryan, why don't you want to die? I mean, I want to live for tomorrow. So today's Wednesday, tomorrow's Thursday. I've got some cool things going on tomorrow.
Starting point is 00:00:32 And also Friday and then this weekend, I've got some fun plans. So I've got a pretty stacked couple days. This is the thing is when you ask somebody if they want to live forever, they're like, nah, I'll be bored or like, I'll lose my loved ones or like something like that. But really, we're all just living for tomorrow. Those are the same ideas. So just for tomorrow. You believe that you're going to have fun things to do for all of time?
Starting point is 00:00:56 Yeah, the thing is, like somehow tomorrow always has something interesting going on. So, okay, so don't die is your sort of like cult. And we're very familiar with cults in the crypto space. We kind of say that word fondly with love. Maybe you can kind of explain this cult to us, this don't die moral philosophy that you have started to like meme into existence into like the modern zeitgeist of time. It's built upon this idea.
Starting point is 00:01:26 If you look back through Western thought and you try to categorize the big epochs, you could say Plato and Aristotle, Christianity, medieval, Renaissance, Enlightenment, modern-day scientific era. We've had these big eras that span a few hundred years, and they're usually built upon just a few primary beliefs, you know, like the Enlightenment. We are potentially at the end of what has been a couple hundred-year run of how we understand ourselves in society. And when you look at the things that precipitated the change, it's usually where the system no longer serves the purpose. Basically, the system falls apart and is no longer able to serve the purposes it created.
Starting point is 00:02:14 So, for example, capitalism was meant to solve scarcity, which it has, but it's led to compulsion. And democracy was meant to do freedom of choice, but addiction is now so prevalent, we're slaves to that which surrounds us. And so the bargain we've had with capitalism and democracy has turned upon itself, which systems do. And when that happens and then you have a new introduction of a technology like AI,
Starting point is 00:02:45 it creates this opening where new moral philosophies can come in and shake up and say, who are we? Why do we exist? What do we want to do? And that's what Don't Die is about. It's a new moral philosophy that says existence itself is the highest virtue, not profit, not status, not power. Existence itself is the highest virtue. And then there's a bunch of branches to that, but it's trying to get at this single thing that matters as a species in this moment.
Starting point is 00:03:17 So, Bryan, when you say existence is the highest virtue, it's like, whose existence? Are we talking about the individual? Like if I embrace don't die philosophy, am I saying that my existence is the highest virtue? Are we talking about humanity as a collective or is it some combination of both? Yeah, a combination of both.
Starting point is 00:03:36 And so this is, it's not an effort to create immortality. It's an effort to say that the most sobering, I guess the most practical question anybody can ask on planet Earth right now in this part of the galaxy is, what does an intelligent
species do when you give birth to superintelligence? When you give birth to something that is of the breadth and depth and potential of AI, no, we don't know exactly what it is. We don't know how it's going to manifest, but it's meaningful and it's substantial. And as that manifests, we as a species need to say, what do we do? And right now, what we do is we try to make more profit. We try to achieve higher status. We try to achieve greater power.
Starting point is 00:04:21 Like those are the end objectives of our species. And we will pursue those objectives at any cost, our own health. Like you think about like how it manifests as an individual. If you're in Web 3, and I know I have a lot of Web 3 friends, right, a lot of them are not on the health train, right? They're waking up every couple hours to check trades. They're sleep deprived. Like they've just run themselves into the ground.
Starting point is 00:04:50 And so that is a situation where the person is trading their life for the objective of trying to make money, right, or something like that. And so the same thing is true with corporations in their pursuit, that you're willing to take environmental damage in order for this thing to happen. And so I'm not saying that capitalism is bad. I'm not saying that democracy is bad. I'm saying that when you point them at these specific objectives, you make tradeoffs. And whether we see it or not, we are making the tradeoff to say we value power, status, wealth, more than our own existence. And that's the underlying philosophy of our society. That's the moral philosophy that guides us today.
Starting point is 00:05:37 Introducing FRAXUSD, the GENIUS-aligned digital dollar from Frax. It's secure, stable, and fully backed by institutional-grade real-world assets. Custodied by BlackRock, Superstate, and Fidelity. It's always redeemable one-to-one, transparently audited, and built for payments, DeFi, and banking. The best of all worlds. At the core is Fraxnet, an on-chain fintech platform built to align with emerging U.S. regulatory frameworks where you can mint, redeem, and use FRAXUSD with just a few clicks. Deposit USDC, send a bank wire, or tokenized treasuries, and receive programmable digital
Starting point is 00:06:09 dollars straight to your wallet. Fraxnet users benefit from the underlying return of U.S. treasuries and earn just by using the system. Whether you're bridging, minting, or holding, your FRAXUSD works for you. Frax isn't just a protocol. It's a digital nation, powered by the Frax token
Starting point is 00:06:23 and governed by its global community. Join that community and help shape Frax Nation's future by going to frax.com slash R slash bankless. Frax, designed for the future of compliant digital finance. Ethereum's layer two universe
Starting point is 00:06:35 is exploding with choices. But if you're looking for the best place to park and move your tokens, make your next stop Unichain. First, liquidity. Unichain hosts the most liquid Uniswap v4 deployment on any layer two, giving you deeper pools
Starting point is 00:06:46 for flagship pairs like ETH/USDC. More liquidity means better prices, less slippage, and smoother swaps, exactly what traders crave. The numbers back it up. Unichain leads all layer twos in total value locked for Uniswap v4.
Starting point is 00:06:59 And it's not just deep. It's fast and fully transparent. Purpose-built to be the home base for DeFi and cross-chain liquidity. When it comes to costs, Unichain is a no-brainer. Transaction fees come in about 95% cheaper than Ethereum mainnet,
Starting point is 00:07:10 slashing the price of creating or accessing liquidity. Want to stay in the loop on Unichain? Visit unichain.org or follow @unichain on X for all the updates. Bit Digital, ticker BTBT, is a publicly traded ETH treasury company that combines the two biggest metas of our time, Ethereum and AI compute. Bit Digital believes that ETH will power finance and AI compute will power everything. Bit Digital gives you direct exposure to both. Bit Digital holds more than 150,000 ETH with institutional-grade staking and validator operations.
Starting point is 00:07:39 On top of that, the company owns roughly 73% of WhiteFiber, an AI infrastructure business that runs high-performance GPU data centers, which adds a meaningful exposure to the growth of AI compute with over 27 million shares. This is an ETH treasury backed by real operations designed to capture staking yield today while positioning for the future of intelligent computing tomorrow. The ticker is BTBT. This ad is not financial advice. Do your own research.
Starting point is 00:08:02 Learn more about Bit Digital and try their MNAV calculator at bit-digital.com. That's bit-hyphen-digital.com. Bankless is being compensated by Bit Digital for this ad. You can find out more information by clicking the link in the show notes. I think coming to this conversation, I would say David and I are maybe in different camps. David and I had just a pregame on this conversation. And David, I think your comment to me was like, you're all in. Like you're with the transhumanist sort of movement. You're with the don't die idea, at least conceptually. Is that right? Yeah. And my reasoning there is that,
Starting point is 00:08:34 if I understand Bryan's position, is that inevitably, with enough time, human society finds its way into, I mean, Bryan said he doesn't like longevity, but like I'll just use longevity, it's like, we'll eventually get there. It's a scientific problem to solve. And especially with AI, that becomes a very solvable problem. So eventually, people not dying becomes the base case. And that is the default path of where humans go. And so I'm not saying it's like better or worse. I think it is better. But I'm not saying that. It's just like the default mode. And so, why fight it, is kind of my position. Yeah. And I guess my position is, so I think both Dave and myself, we're techno-optimists, so we definitely believe
Starting point is 00:09:17 in a brighter future with technology. I think for me, this is like getting a bit further on the transhumanist kind of bandwagon. And so with the don't-die concept, coming to this conversation, Bryan, I'm sort of like the undecided voter. Like, I'm not really sure. Like, something about it feels a little weird to me,
Starting point is 00:09:38 and I'm not sure what exactly it is. I'll talk maybe about the part that especially resonates with me. So part of crypto, part of Web 3, is this whole movement to bearer assets, being your own bank, going bankless, independence. It's really a movement towards freedom. So, why bankless? For individual sovereignty, for freedom. Something that you said in, I think, one of your posts recently is, right now society is trying to kill us for profit.
Starting point is 00:10:08 You said our society has made us unwell, metabolically, mentally, spiritually, we're addicted, social media, porn, nicotine, junk food, fast food, smartphones, streaming, energy drinks and gambling, each perfectly engineered to keep us in their grip. That is something that I think we all feel very deeply. And when you're living like this, you are not living with freedom. You can be as crypto wealthy as like Michael Saylor. But if you are addicted to a smartphone, like are you actually free?
Starting point is 00:10:44 you're checking prices every single minute and you're worried about what's going to happen next and you're doom-scrolling on the timeline, like, are you actually free? And so the vector that resonates the most with me personally is this freedom vector. But that seems kind of different than don't die, because that's just basically quality of life, me living a free life until kind of the end, and it's time to kind of like, I guess, fade into the night and pass it on to the next generation.
Starting point is 00:11:18 Can you talk about this? What is the difference between the freedom and the don't die type of movement, and why have you interwoven them? Yeah, I think what you're articulating is that freedom is part of our current moral framework. We hold freedom to be sacred in our society, individual choice. But that was not always
Starting point is 00:11:37 a virtue of society. If you go back in time, freedom was not a foundational component of how societies understood themselves. And so if you look at our time and place, we say, like, what is the moral philosophy of 2025? If you really dig deep and you identified it, it's freedom.
Starting point is 00:11:57 And so what I'm suggesting is if you fast forward in time to the year 2500, and you imagine them looking back on us, like with some kind of detached perspective, then from their vantage point, I would say that they would look back to us and say, in the early, you know, in the late 2020s, early 2030s, that's when humans figured out that existence was the highest virtue. Because when you give birth to superintelligence, when you have all this potential at your disposal, the immediate question is, what do you do? And the argument I'm making is that you
Starting point is 00:12:43 focus on entropy. Like the moment you become superintelligent, your only enemy is entropy of the universe. That's your battery life. And then you're simply trying to say, like, now I understand the battery life, how can I prevent things from happening that would eliminate my life? Like, you're trying to not die. And then once you secure that,
Starting point is 00:13:01 you say, now what can I do with my existence? But the primitive that we all share, like, every person shares one thing in common: nobody wants to die right now. That is the universal want. Every single human wants that. So it's not that I don't want to die in 20 years or 50 years or 100 years. This is just, I don't want to die right now.
Starting point is 00:13:25 And so what I'm trying to do with this effort is I'm trying to put my finger on what is the single most sober insight that humans can make in 2025. And I'm suggesting that we just acknowledge we're at this special moment of giving birth to something magnificent and we don't want to die right now. But that's the ethos and the moral philosophy of the environment you want as you give birth to AI. What you don't want is you don't want to be at war. You don't want to be trying to dominate with these technologies. I don't know if that's, like, the right environment for us to say, can we bring in, like, a new species, or to be trying to maximize profit, saying, how do I get Ryan to be so addicted to my things that he loses all his freedom in existence?
Starting point is 00:14:14 So I'm trying to look at the macro of what we are as a species. But Bryan, entropy eventually does win, doesn't it? And so I guess maybe what you're saying is like kind of the part of the moral virtue of don't die is like humanity's purpose is to fight against entropy for as long as possible. That's right. I mean, we do that naturally. Like, when you get a car, you repair your car, you know, like a tire is flat or you change your oil or you replace a carburetor. We do that with our bodies, you know, like we do that kind of maintenance and repair.
Starting point is 00:14:50 It's just we don't have robust abilities to address it. But every time we acquire new powers to address entropy, we do so. So we are really good applicators of technology. And so I'm saying that as AI gets better and better, it will drive a lot of innovation in terms of our ability to address our own entropy, which we call aging today, or in our systems. And so the contemplation is we will just naturally apply these technologies to the things we care about the most.
Starting point is 00:15:20 I mean, when death is inevitable, of course, humans have been shipping product for thousands of years to help humans cope with death. And so there's, like, no shortage of product to cope with death. It's an interesting way of thinking about it. Like, you're going to die anyway, so people are like, you know what, we're going to reincarnate, or there's heaven, or like you can achieve immortality through professional accomplishment or you can have children. But people are selling products to help people cope with the fact that death is seemingly inevitable. Bryan, have you ever read the book The Denial of Death by Ernest Becker? Yeah.
Starting point is 00:15:58 Side quest, side bit of lore. I tried to email the foundation, the Ernest Becker Foundation, to get him on the podcast, not realizing he had died like 12 years ago. The whole premise of that book is that humans can't wake up and contend with their mortality without the weight of that just crushing their spirit every single day. And so what do they do? They push that to the back of their brain and they start working for symbols. The symbol of Christianity, the symbol of the American flag, the symbol of the Ethereum logo,
Starting point is 00:16:28 the Bitcoin logo, sports teams, fraternities, any sort of club. And this has been one of the ways that we've coped with death. And that has, like, arisen to some of the biggest, like, moral frameworks that we've ever developed, that sort of energy, right? Like democracy, Western liberal values. We have all of these moral philosophies that some of that energy goes into establishing. I think one of the reasons why Ryan is uncomfortable here and why this is such a big deal and why this project is a project of, like, the highest order, the don't die project,
Starting point is 00:17:00 is because it starts to get pretty fundamental to physics. We're going and we're talking about reversing entropy. And to some degree, all life is, is the controlling of entropy. Like the atoms are organized in a particular way so this organism can persist for the next day and the next day and the next day. And so it almost seems like this moral framework, the don't die moral framework, is kind of going all the way into the basement of, like, life's source code
Starting point is 00:17:33 and saying, hey, let's flip this bit and tinker with our moral philosophy, but we're all the way in the basement. There's nothing deeper than this, which is why I think this project can be scary to a lot of people, because it's such a big paradigm shift. You nailed it. Yeah, that's why I think it packs a punch.
Starting point is 00:17:51 Yeah, Ashlee Vance made this comment to me. He wrote the Elon Musk biography. He wrote the first article that went viral on Bloomberg about this project. He said to me he's never seen so much energy behind any idea. It does. It just hits at the core of our understanding of our existence.
Starting point is 00:18:10 And it immediately provokes very strong reactions; people are not undecided on this topic. You know, they want to pull out, like, here are my five arguments on existence and why I think this is correct. Actually, I'm still undecided. Like, truthfully, I'm one of the people that's still undecided. Have there been any moral frameworks that we've had in place in the thousands of years of recorded human history that places existence as the highest virtue?
Starting point is 00:18:41 Or is this all new? I get the point, Bryan, that, like, practically, that's kind of what we've been doing. We've always been trying to extend our existence, delay aging. And I'm just like you. Yes. I am planning to be here tomorrow and a week from now and I have things to do and I very much want that. But I can't think of a moral framework anywhere that just places existence as the central thing. Is there one or is this the first time?
Starting point is 00:19:16 We've been stepping toward it. If you read about historical societies, there was a lot of senseless violence and killing, and we don't really tolerate that in our society. We've got universal human rights. We have laws in place about what you can and cannot do with harming someone else. The founding ideas are life, liberty, and the pursuit of happiness in the United States. So it definitely has followed a trend line where there's much more respect for the sanctity of life than there has been in previous civilizations.
Starting point is 00:19:52 So it's on that trend line. Okay, it's on that trend line, but it's not ever been kind of the central thing. Yeah, I mean, actually, you think about it, Ryan, like really, religion does kind of say that, right? They're like, this is not the final frontier. Like, after we die, we go to this other place. Like, humans don't want there to be an end. And therefore, we have all of these, again, products and stories to explain how it's not the end, how there is a continuation, and how your behavior directly maps to what happens to you
Starting point is 00:20:31 in those contexts. So, I mean, in many ways, Don't Die is the oldest idea ever articulated. I mean, it basically has been packaged and delivered to humans in so many different ways. So this is just a new framing to say, like, we actually have the technology to play don't die legit for the first time, without needing to extend ourselves to mythology about what happens after death. Why do you think some people don't feel comfortable with this? I know this is the first time we've met, Bryan, so you don't know me yet. I'm very open-minded on this, but there's something that makes me feel like... Gives you the heebie-jeebies? Something about it.
Starting point is 00:21:05 And it's some strange mix of like, I guess the feeling like, well, is this a greedy thing? It's like, I get, don't die for humanity. I'm very much... I'm there. I don't want humanity to die. I want humanity to continue. I want my kids and my grandkids to be successful and prosperous thousands of years into the future. Like, that's fantastic.
Starting point is 00:21:26 But for me individually to, like, seek don't die, something about that feels selfish or greedy. Like I'm not making space for the next generation. It's kind of like the geriatrics in politics. It's kind of like us saying to the baby boomers, all right, you guys have had your time. You don't have to be 85 years old and still grasping at power in Congress. Make some room for those that are new. Maybe that's part of it.
Starting point is 00:21:55 There's other parts of it too that I don't feel completely comfortable with, but this idea that don't die, at least at the individual level, is selfish. Where's that coming from, do you think? Yeah, my guess would be, in the same way, if I say, Ryan, I really like your shirt, right? Or Ryan, you did a great job in your performance today.
Starting point is 00:22:17 And like, I gave you a compliment. The socially appropriate thing would be for you to acknowledge my compliment and maybe offer something back to me as well, right? Like the concept of reciprocity. Yeah, thank you. And your shirt looks great too, Bryan. Thank you, Ryan. You're doing a great job today.
Starting point is 00:22:32 Thank you. I appreciate that. So, like, reciprocity is part of our current social norms. You know, it's like you want to lob things back and forth. You want to create equality in the conversation. And if you didn't say that, you might feel a similar deficit. If you're just like, great, thanks.
Starting point is 00:22:52 You might feel like that felt uncomfortable because I really wanted to volley something back and repay the kindness. That's right. So I'm imagining that running inside of us is our entire moral and ethical stack, which says, you know, if-then statements:
Starting point is 00:23:08 if complimented, return the compliment. You know, if presented with the situation where you could sacrifice for the help, you know, for the benefit of others, then do so. And so I'm imagining that we all just basically are playing out these scripts that we've been taught our entire lives. And they feel so natural to us that they feel true. But they're not true like in the form of physics or math.
Starting point is 00:23:30 They're just ephemeral constructs that we have in our society that we are all acclimated to because that's all we've ever known. So I'm imagining that's what you feel intuitively. It's just the natural tension of an idea that kind of bends upon itself and inverts everything that you've been trained to think your entire life. Yeah, it's a good human thing, I would say. Part of being a good human is looking out for other humans around me
Starting point is 00:23:57 and not just focusing on myself. But maybe you can talk about how the don't die movement isn't just for individuals. No. Okay, it's for everybody. Like, I feel much better about it if I'm taking everyone else with me, if everyone is
Starting point is 00:24:15 winning from Don't Die. If it's just me, and I'm the guy in Silicon Valley who attaches himself to a blood boy because I'm rich, you know, that kind of thing? That's like what I get in my head when I think about this, is it's sort of self-serving. It's only going to be for me at the expense of others. But maybe that's not what Don't Die is. Is this for everybody? And if so, how could it be for everybody? Yeah, it's entirely for everybody. And what it's observing is, so the principle is don't die individually, don't kill each other, don't destroy the planet, and align AI with don't die. So the contemplation is that we are not individual actors. We're part of a collective on this planet. And that when I behave a certain way, I'm influencing
Starting point is 00:25:04 how you are behaving. Whether you want that to be true or not, we're all influenced by each other, and how I behave affects how my children behave, how we behave in our companies affects how we treat the planet. We're one integrated system. And so it's trying to acknowledge we are a whole and that everything we do affects the other. And it's also the case that if we're thinking about how we build AI, I mean, don't die is entirely, the reason why I'm doing this is because
Starting point is 00:25:34 of AI. What we do with AI is literally the only question that matters at this point on planet Earth. There's nothing else that really matters. Like, of course, we all want to play our games, if we're doing this or that, but it doesn't matter in the context of AI; we need to get this thing right. And so it's trying to say, what is the philosophical framework that is relevant for AI? And what I'm suggesting is it might be a good idea if we ourselves are that philosophy. If we actually practice don't die as examples to AI, like we acknowledge existence as the highest
Starting point is 00:26:16 virtue. We don't want to be killing each other. We don't want to be killing ourselves. Like, a very practical example of this is we've said children can't smoke cigarettes because that's bad for them. But we feed them "die" in the form of school lunch. You know, a piece of pizza, canned vegetables, a BPA-plastic-lined carton of milk. And so we're feeding our kids "die." That's not a good thing. So don't die is like, clean up the food system so that our children are not consuming "die" at school lunch.
Starting point is 00:26:46 So it's very much like, it's a holistic, what-do-we-do-as-a-species kind of contemplation, and it's not really the ultimate sacrifice of you're willing to do this for the success of the species, for all of intelligence. There's that scene from Back to the Future where Marty McFly grabs the guitar and he plays Johnny B. Goode, he plays some rock and roll.
Starting point is 00:27:06 And then at the end of this scene, everyone's watching him and he's like, maybe you guys aren't ready for that yet. I guess you guys aren't ready for that. I think what your answer is, Bryan, is like, society's totally ready for this. And there has been a resurgence of, like, health and fitness culture in the United States.
Starting point is 00:27:23 People are aware of things like microplastics, of, like, red 39 or whatever that dye is. There is, like, a growing crescendo of, like, actually, I'm going to really re-elevate health as a top priority in my life. And then I think not only is there some sort of cultural readiness, but I think you're also expressing there is a technological urgency with AI, because just the relevancy of AI
Starting point is 00:27:50 is like, this is actually not an option. This is actually how we get from wherever humanity is to, like, the higher phase of existence that is somehow with AI. Can you talk about, like, the happy case of, like, say we do align with AI? We have the don't die philosophy. AI absorbs that philosophy because we've been practicing it. What do we get from that? What's the end product? What are the spoils of this success?
Starting point is 00:28:15 Yeah, I like this thought experiment of imagining that we're hanging out with Homo erectus, about a million years ago. They've got an axe in their hand. And you pose the question to them through the rudimentary form of communication. Tell me about the future of our species. Tell me what's exciting about what's coming. They would probably say, well, you're going to forage more and you're going to go to war and you're going to express their mental models of reality.
Starting point is 00:28:39 but they won't say you're going to discover there's a microscopic world, there's atoms and particles. Or they'd say, like, in the air, you can't see it, there's waves that communicate information. Or that, you know, one day you're going to hold a little white thing and it's going to solve, address your infection. And so at that moment, Homo erectus does not have any mental models to say anything intelligent about the future.
Starting point is 00:29:04 So the only thing it could say is, like, I don't know. I don't have the mental models. And so what I like in this case is I, of course, could fill the air with a bunch of words to your question. But I think potentially the more interesting contemplation would be, are we Homo erectus relative to AI? Are our mental models of reality so primitive? Are our ideas just, like, so rudimentary relative to what's coming? We literally have no idea. Which leaves us to say one thing that's intelligent is, I don't want to die in this moment.
Starting point is 00:29:37 And so what I'm saying is really the ultimate expression of intellectual humility, we may not know what's on the horizon. We may not know what's coming. And we may not even want to express any preferences about profit, status, power, or some magical whatever we're going to create. Maybe the most prudent thing for us to say as a species is, this is big, we don't know exactly what it is. We may be limited in our ability to understand it.
Starting point is 00:30:02 Therefore, the most sober and practical thing would be to say, like, let's not die. So let's reframe our moral framework to say: what can we do in this moment? What is in our control? I know you said that because we have this technological urgency of AI, we should not die. But does the argument for why we should adopt the don't die moral philosophy exist in a vacuum outside of AI? Or is AI actually a critical component of why we need to do this? Yeah, I mean, when death is inevitable, you can YOLO your way with any moral framework. And that's what society has done.
Starting point is 00:30:37 You pick your political system, you pick your economic system, you do your thing. And we're now at this moment where this is possibly the first time in history where we may not die. If that's the case, it creates this opening to say, what do we do now? And so that's why this moment is so different. And it's unlike anything we've ever experienced before. And it's why Ryan, I'm sure it causes you some consternation because it really does bump up against everything you've imagined of like, what existence is about, what you are about as a person. You have ideals of who you are, how you want to be seen, you want to be respected,
Starting point is 00:31:13 you want to be part of the tribe, you want to feel good about yourself. You know, one thing, as you guys have been talking about this, that makes me more comfortable is a slight addition or addendum to don't die, which is, like, for whatever reason, I guess with my current moral framework circa 2025, it feels okay to say, I don't want you to die, or I don't want my kids to die, or I don't want those around me to die. That feels less selfish than focusing on, I don't die, at maybe all costs. So I guess when I think of don't die, if it's like you don't die or we don't die together, that feels much better than a focus on I don't die. Is there something to that?
Starting point is 00:32:04 I guess if you shift the moral argument to it's you looking after everyone else, it somehow feels okay to me, whereas if it's just focused on me alone and my needs and my need to continue life forever, regardless of who around me dies, that feels somehow morally not okay, at least with the way my brain works right now. Yeah, I mean, I would yes-and that. It's definitely not a selfish framework. It's really not meant to be selfish.
Starting point is 00:32:38 This is not a pursuit of individual immortality. Really, if anything, it's more that we can't do this individually; it has to be a collective endeavor. No one person can pull this off. And so it's probably the biggest team sport. I mean, capitalism is a team sport, where everybody on the planet
Starting point is 00:33:02 participates in some way, and Don't Die is probably an even more integrated team sport than capitalism is. Talking about Don't Die, there's a subject we've covered: we've had Eliezer Yudkowsky on the podcast. He's written a new book. You know, his existential concern, and he gives us a probability of about 99.9%
Starting point is 00:33:24 that AI is going to be misaligned and kill all of us. So it's definitely not don't die. I mean, I think his moral framework is don't die, but he's saying AI will kill us all. Does don't die help us fuse with AI in some way that doesn't kill us all? And also, related to this: in that fusion, if that's the direction (this is sort of the answer that AI accelerationists always give, that humanity and AI fuse together), how do we fuse together without losing our humanity? And the thing that
Starting point is 00:33:59 I don't know, is that spark of light that makes life worth living? Like, I don't want to be a robot. I'm struggling with this whole transhumanist idea in general, and I'm not sure that I want that future. Yeah. Like, I'm not sure that I shouldn't actually resist that future. Wait, so first, on Eliezer: he's a friend of mine. I really appreciate him.
Starting point is 00:34:24 You know, he's earnest. He is genuinely trying to be useful to the world. You know, whether he's right or wrong, TBD. Like, you know, I don't think anyone knows, but I really do appreciate his earnest attempt at trying to keep the flicker of intelligence on this earth, you know, through humans, alive. It's just very hard, I mean, it's very hard to get your head around this idea.
Starting point is 00:34:49 So I appreciate his earnestness. Number two is, you know, if you go back to the example of Homo erectus, and they're saying, look, if I can't do what I'm doing now, if I can't be Homo erectus, if you're telling me I'm going to evolve to be a Homo sapien, and Homo sapien is this, I don't know if I want to go forward. And so, you know,
Starting point is 00:35:13 we're an evolutionary species. We are today something that is unimaginable to previous versions of our ancestors. And so there's a possibility that the move here is to suspend all of our beliefs on who we think we are, on what we think we want,
Starting point is 00:35:32 on what we aspire to be. I'm personally there. I'm wide open on absolutely anything. And so, if you want to give yourself a real workout on this question, here's a thought experiment for you.
Starting point is 00:35:48 Imagine that your existence is nowhere to go, nothing to do, no one to become. That's your existence. I mean, that's not really, that's not existence, is it? I mean, that doesn't feel like existence. Exactly.
Starting point is 00:36:03 Feels like jail. Exactly. This is why the thought experiment is so good. It's because those three things offend, in the most potent way possible, all of our sensibilities about existence. You can't find anything more offensive to our consciousness than those three ideas. And so the question is, are you open-minded enough to say, I'll give it a shot?
Starting point is 00:36:29 Like, give me a month and let me see how I feel about it. But really, we have to understand our knee-jerk reactions: when things don't square with how we understand the world, we want to immediately bat them away, because they're so uncomfortable. So these topics are very uncomfortable. Like, when you talk about the future, everybody's got an answer. It's like, this is going to happen. This is going to happen.
Starting point is 00:36:50 But, like, nobody knows. Everyone's just spinning up words because they're trying to fill the dead air, because they don't know. Bryan, when we as a species, as humanity, gain control over aging, when we've unlocked that piece of technology on the tech tree and we just control aging, and we achieve what's known as, what is it called, longevity escape velocity,
Starting point is 00:37:15 where science adds more time to our lifespan than the time that passes while adding it. And so theoretically we have longevity. I would imagine that that comes with a bunch of other unlocks too. Like, it doesn't really stop at longevity. At that point, AI will be much more powerful. You know, gene editing. As I understand, you aren't really into the world of gene editing quite yet,
Starting point is 00:37:41 but I think you're curious about it, and it potentially contains possibilities for what we can do here. It seems like there's a little bit of a Pandora's box there. It's like we open up longevity, and then out comes a bunch of consequences, for better or for worse, that we then have to contend with. And the big broad strokes of it all is, like, humans kind of get the ability to upgrade ourselves. Like, once we achieve longevity, we can kind of figure out the science to upgrade
Starting point is 00:38:13 ourselves in any particular way. I'm sure this is scary to Ryan, but does this excite you? Like, how do you feel about this? What about this part of the story arc excites you? What do you want to do with those powers? Yeah, if you think in parallels, you know, humans figured out how to arrange atoms in the physical world. And we built homes and skyscrapers and bridges and boats and airplanes.
Starting point is 00:38:41 We've built all sorts of objects with these atoms. Then we had a new programmable sandbox of software, where now you have zeros and ones, and what can you build with that? We've had that sandbox for a couple decades now, and you look at the world of software, at what we've done in that sandbox. Our biological system is a new sandbox. You've got this genetic code.
Starting point is 00:39:03 You've got this amazing ability to alter it. And humans will do what we've always done: once you acquire those tools and start playing with that sandbox, we're going to have all kinds of creative flourishing. We just don't even know where we can go with it. So yes, I mean, it's happening now. We see it happening in gene editing. We see it happening in all sorts of applications.
Starting point is 00:39:26 So it's the next sandbox. And that's what I'm saying: we've done this with the physical world. We've done this with the digital world. Next is the biological world. And that's what's squarely interesting right now: once that is on the table, we have the ability to alter not just the physical form of the body, but also our conscious existence. You know, you start playing with the things that give rise to us.
Starting point is 00:39:50 So yeah, I mean, it's the next frontier. And this is why I'm saying that it's healthy, even though it's uncomfortable, to suspend belief and just say: I can riff and imagine a few things, but it's also really healthy for me to say, this is probably past my ability to imagine, probably exceeds my capacity to imagine as a human. Which is a nice healthy balance, where, like, not knowing is really good.
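The "longevity escape velocity" idea mentioned a little earlier in this exchange has a simple threshold structure: if research adds more than one year of remaining life expectancy per calendar year, remaining expectancy never runs out. A toy sketch of that threshold, with made-up numbers and a hypothetical `years_survived` helper (nothing here comes from the episode or from real actuarial data):

```python
# Toy sketch of "longevity escape velocity" (LEV). Purely illustrative:
# the starting expectancy and annual gains are invented numbers.
def years_survived(initial_remaining=40, annual_gain=1.2, horizon=500):
    """Each calendar year costs 1 year of remaining life expectancy,
    but research adds `annual_gain` years back. If annual_gain > 1,
    remaining expectancy grows instead of shrinking (the LEV case)."""
    remaining = initial_remaining
    for year in range(horizon):
        if remaining <= 0:
            return year  # died before reaching the horizon
        remaining += annual_gain - 1.0  # net change per calendar year
    return horizon  # survived the whole horizon

# Below escape velocity (gains under 1 year per year), lifespan stays finite:
assert years_survived(annual_gain=0.5) == 80
# At gains above 1 year per year, the full 500-year horizon is reached:
assert years_survived(annual_gain=1.2) == 500
```

The only point of the sketch is the threshold at an average gain of one year per year; below it, the starting expectancy only delays the end rather than removing it.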
Starting point is 00:40:11 I think one of the concerns that people have, that I have, is that, you know, previously, all of the progressions of Homo habilis, Homo erectus, Homo sapiens happened in sequential order. And one ended and then the next started in some, like, fuzzy way. We are now in the era of technological accelerationism,
Starting point is 00:40:41 where things are happening really fast. Like, five years ago was 2020. And now we have AI and robots and self-driving cars. And we have this guy, Bryan Johnson, who's trying to achieve immortality. Shit's getting weird. Shit's getting weird in 2025. And it only seems to be getting faster.
Starting point is 00:40:58 And I kind of now see, I know you're trying to be very humble about, you know, we can't predict the future, but I can kind of see this future: we have a collection of people who achieve longevity escape velocity. They get chips in their brains via brain-computer interfaces that multiple people are building.
Starting point is 00:41:15 These are real startups. And we start to edit our DNA because we have that power and that access. And all of a sudden, we have these hyper-thinking, AI-interwoven, immortal humans walking around next to normal humans. And it seems we split from Homo sapiens, which is what everyone is today, into what Yuval Noah Harari calls Homo Deus. And that's happening at the same time. And we have the Homo Deus humans and we have the Homo sapiens. Crypto Twitter might call
Starting point is 00:41:52 this the permanent underclass. But we have two tiers of humans, and one starts to look a little bit like deities on top of Mount Olympus, and the rest are us. And that is a little bit concerning to me. Have you thought about this? Yeah. I mean, what you're articulating is basically: you took the foundational principles of today's society, which say power, wealth, status, individually, are the ultimate pursuit, and you applied them to these tools. So it's a very natural pattern matching. And so I agree with you that you can pattern match and you can make those observations. Totally legit.
Starting point is 00:42:36 And what I'm saying is, I don't know if it's wise to take our power, status, wealth principles and carry them forward. Not to say that by themselves they're bad. It's that when they're the ultimate prize, and we're willing to pay any price, including my life or your life, it may not be the right environment to do that. And that's why I'm saying that, as we have all of these advanced technologies to create this magical world, but also to destroy ourselves and create dystopian outcomes, it might be a really fresh opportunity to say: what is a conducive moral framework where intelligence
Starting point is 00:43:18 thrives together? And so I'm really trying to get at the core of this problem of how to avoid those kinds of situations, while we still want the powers of progress. We want to push things forward. We want to better ourselves. We want to cure disease. And so that's the dance I'm trying to create: not to say this with certainty, but just to say, if I can say anything intelligent at all, I go back to this basic instinct,
Starting point is 00:43:42 which is: no human wants to die. If you look at planet Earth, you look at all the biology on planet Earth, even look at humans, that's the basic instinct of biology; it doesn't want to die. Not right now. So are you saying that I did the hunter-gatherer thing of, like, yo, when we improve and get better, we're just going to be able to forage so much more? And I'm actually just not really able to contend with the future. And so what you're saying is, like, I can't even do that.
Starting point is 00:44:08 And so what am I doing? I'm just not doing it, but I'm doing the one thing that will buy me some time, which is not dying. Exactly right. Yeah. I mean, what you're saying is very sensible and practical: you take the current technologies, you mix them up in a stew of current philosophical and ethical frameworks, and you have the output. Totally understandable. And that's what I'm saying: I don't know if that's the recipe we want to cook. Imagine a world where traditional finance
Starting point is 00:44:32 meets the power of blockchain seamlessly. That's what Mantle is pioneering with blockchain for banking, a revolutionary new category at the intersection of TradFi and Web3. At the heart is UR, the world's first money app built fully on-chain. It gives you a Swiss IBAN account, blending fiat currencies like the euro, the Swiss franc, the United States dollar, or the renminbi with crypto, all in one place. Enjoy real-world usability and blockchain's trust and programmability. Transactions post directly to the blockchain, compatible with TradFi rails and packed with integrated DeFi features.
Starting point is 00:45:01 UR transforms Mantle Network into the ultimate platform for on-chain financial services, unifying payments, trading, and assets like MI4, the mETH Protocol, and Function's FBTC, backed by developer grants, ecosystem incentives, and top distribution through the UR app, reward stations, and Bybit Launchpool. MNT holders: every economic activity in UR drives value back to you, embodying the entire stack and future growth of this super-app ecosystem. Follow Mantle on X at Mantle underscore official for the latest updates on blockchain for banking. That's x.com slash mantle underscore official. Crypto is risky.
Starting point is 00:45:34 Your sleep shouldn't be. Eight Sleep's mission is simple: better sleep through cutting-edge technology. Their new Pod 5 is a smart mattress cover that fits on the top of your bed. It automatically adjusts the temperature on each side so you and your partner can both sleep the way that you like. It's clinically proven to give you up to one extra hour of quality sleep per night. Eight Sleep's Pod 5 uses AI to learn your sleep patterns, regulate temperature, reduce snoring, and track key health metrics like HRV and breathing.
Starting point is 00:45:59 With a new full-body temperature-regulating blanket and built-in speaker, it is the most complete sleep upgrade yet. Upgrade your sleep and recovery with Eight Sleep. Use code BANKLESS at 8Sleep.com slash bankless to get up to $700 off the Pod 5 Ultra during their holiday sale. That's 8Sleep.com slash bankless. You also get 30 days to try it, risk-free. Link in the show notes for more information.
Starting point is 00:46:19 So, you were talking about, you know, nature and how it's sort of in biology that, you know, all of the natural world, including us, doesn't want to die. Okay. But it also seems like a biological, almost natural principle that the old does die and makes way for the new. Yeah. I mean, this is kind of a question for this don't die world. Let's say we achieve don't die. Okay. Well, this goes back to baby boomers in Congress grasping at power.
Starting point is 00:46:52 This goes back to the idea that science advances one funeral at a time. This goes back to, like, the old has to make way for the new, or else you have a stagnant society. Imagine if we were locked in to the people of the 1600s and their moral framework and their idea set. And they just kept living. And they didn't innovate. They didn't make way for something new. There were no fresh ideas. There was no creative destruction.
Starting point is 00:47:18 Yeah. And so, like, could this be a path towards stagnation for us? Like, we are the don't die generation, and then that was it. And then we found out. We never improved. We just, like, figured it out, and that's it. Is there a worry that you have about that? Or, like, how do you address that?
Starting point is 00:47:39 How do you address the idea that, what if Vladimir Putin didn't die, for instance, and gave no way for Russian
Starting point is 00:48:06 society to evolve? Yeah. I mean, first, this is just a societal engineering problem. This is why we have term limits, you know: you want to cap someone's power at a certain duration of time. And so if you simply apply this to this technology, the same situation, you know, you can solve it. And so it's really one of the easier problems to solve as a species. And, you know, if you take a given ruler who doesn't want to be voted out of office, you know, societies have a way of revolting and overthrowing rulers. Like, that's always been the case. And so if someone's old and stuck in their ways, maybe a rejuvenation technology will help. You know, there's actually technology today where you can take an adult cell and turn it into a pluripotent stem cell. Like, we can do this today. Now, we don't have it as a therapy because it has off-target effects and it can lead to cancer, but that technology will be improved over time.
Starting point is 00:48:42 So maybe the solution is, you take an old person and rejuvenate them: you make them a young person and they're fresh again. Like, they're back at the same open-mindedness. Now, that doesn't mean they stay in office. It can be like they still have term limits. But these kinds of ideas, it's just a societal engineering problem. And, like, on the scale of problems, very easy. Really?
Starting point is 00:49:01 So on the scale of problems, that seems like the hardest one, actually, because the societal engineering problems are, like, the deep-seated coordination problems that we struggle with. And those are the ones that cause wars. Right. And so power, status, wealth, those things, if you've been around for a couple hundred years, I'd imagine you'd be able to accrue a lot of them in your, you know, hundreds-of-years lifespan versus somebody who's new.
Starting point is 00:49:30 And wouldn't you power-status-wealth your way towards consolidation of those things? And then how is the new entrant supposed to disrupt you when you've accrued all of these things? Imagine Warren Buffett across thousands of years compounding his interest. He's already one of the richest guys in the world. I mean, that seems like it would consolidate into a setup that would be very hard to coordinate against and actually disrupt. That's sort of the stasis argument, I think. Yeah. Like, for example, one thing that's happened in the U.S. is there's a death tax: you know, when you
Starting point is 00:50:05 pass, a meaningful portion of your estate is taxed by the government. And that's some attempt at leveling the playing field, at stopping this generational wealth from being out of control. Now, you can argue whether it's been effective or ineffective, or it's too much or too little. But society has tried to acknowledge that as a problem. And currently, in our current system, there's a pretty substantial wealth gap. And I think everyone acknowledges that it's probably not a good idea to have this kind of disparity. And so I'm guessing there's going to be a correction on this. And so you can imagine where, in this situation,
Starting point is 00:50:41 maybe someone's wealth is taxed, you know, every block of time, to prevent that kind of accumulation. But I think that society generally, even though it's been rocky and sometimes it's resulted in wars, does naturally correct itself when it goes to extremes. Now, this is also assuming that, in this situation, humans are still the primary power actors, that humans are still the ones running these systems. Now, maybe AI steps in, and maybe society is run much more by autonomous systems, and maybe humans have less power to control these things, and maybe it's more indirect. So I guess all these things for me are on the table: I'm not totally confident that what exists today in terms of power will be the same things that exist in power five years
Starting point is 00:51:31 from now. They may. But I guess what I'm saying is, the first problem is: how do we not die as a species? If you're not dead, then you have the luxury of solving these other problems, like how do you prevent runaway power, runaway wealth accumulation, etc. So I think I'd rather take on that problem than be dead. Bryan, I'm curious if you could, just for me, map out what it would look like for you to live, say, 500 years, right? So
Starting point is 00:52:05 you're on the Bankless podcast. We very much are tech accelerationists, so we think that a singularity is near. But, like, practically, and I know some of the tech might be sci-fi, but if you were to think, you know, based on what you know right now, about a 500-year lifespan for yourself, what would that trajectory actually look like across the decades and across the centuries?
Starting point is 00:52:28 Yeah, I mean, the only thing I can map this to that has familiarity to me is Homo erectus. Like saying, hey, Homo erectus, let me explain to you what your life is going to be like as you go through your various stages. Like, first, you know, you're going to live like four times longer than your maybe-expected 20 to 30 years of life. And then you're going to go through the following stages of life. And every stage you articulate would be entirely foreign to them. There may be some things like mating, like you're going to, you know, choose a mate, you're going to have offspring. That would be a commonality. But otherwise, I think it'd be pretty novel for them to contemplate.
Starting point is 00:53:03 Like, imagine trying to explain Web3. Like, you know, what is Web3? And then you have to walk down, like, 25 layers of concepts so they understand what Web3 is. But I guess, Bryan, broad strokes: are you talking about these lifespans being purely biological? Are you talking about taking your consciousness and putting it in silicon somewhere and fusing with the machines? Like, yeah, what's on the table?
Starting point is 00:53:24 Yeah, I guess all I'm saying is, I answer this question this way because the majority of people that hang out in this space of trying to speak about the future are overflowing with ideas on what existence will become. And what I find absent is anybody saying, I have no idea. You know, like, literally no clue, and anything I say is probably stupid, to that kind of extreme. I'm trying to provide a relative contrast
Starting point is 00:54:04 that not knowing is equally as intelligent as speculation or pattern matching. It reminds me of that, you know, humorous comic where there's a person under a streetlamp. Someone walks by, like, okay, what's going on? And they say, I'm looking for my keys.
Starting point is 00:54:22 Well, where did you see them? Like, over there. And he's like, why are you looking here? He said, because the light's here. You know, we have this proclivity to look in the light, and we don't look in the dark. It's just a natural bias we have. And so I try to play the role of reminding ourselves that most times throughout history, we've had no idea. And a lot of the time we've been stunningly surprised with what's come up.
Starting point is 00:54:50 And I'm trying to balance out the contemplations with that. There was a reply to my tweet saying, hey, we're interviewing Bryan Johnson on the podcast today. And the reply was like, I find his ideas compelling, but I wouldn't really want to do it unless my loved ones also did it with me.
Starting point is 00:55:07 And my reply to that was like, yeah, I kind of think that's right. Like, this is kind of like society saying to itself, I'll do it if you do it. Like, I'll go attempt to live forever if you also go attempt to live forever. And so the way
Starting point is 00:55:25 most people say no, most people say no. But then all of a sudden it's in people's heads. The meme has been incepted enough. And all of a sudden, just a few people say yes. And all of a sudden, all of society says yes. And then we're like, okay, we're all doing it. How far along on that arc do you think we are? I think here's my, I'll only say one thing about the future.
Starting point is 00:55:48 Okay, so here's what I think could potentially happen. AI progresses and we have some kind of moment with AI. Maybe it's a Chernobyl moment. Maybe it's something more benign, but it's a moment. And it's a moment where the world is like, oh my God, this is real. Like, I had imagined AI as helping me write emails and code faster and looking at my medical images better, and I had all these imaginings,
Starting point is 00:56:22 but, like, this is a really significant situation. And maybe there's one of these occurrences, maybe there are multiple of these occurrences. But it will happen, and it will create this sobriety where we say: this is literally the only thing that matters, like, nothing else matters, because it's such a big deal. And human society typically can only hold two ideas in its mind, right or wrong.
Starting point is 00:56:45 And so if you think back recently to, like, COVID, as nuanced and complicated a situation as that was, the world bifurcated into two opinions. Masks, no masks. Vaccine, no vaccine. Shutdown, no shutdown. It just forks.
Starting point is 00:57:02 We think good or bad, in those two forks. And in that moment, it will create this bifurcation with AI of die, don't die. Where there will be people who will say, it's not worth doing blank
Starting point is 00:57:14 for the pursuit of power, wealth, status. I see the situation here for what it is. I don't want to die. And that's my guess. Don't die right now really hurts to think about, because it challenges everything you understand about existence. But if you remember, in the early,
Starting point is 00:57:32 like, the first month of COVID, do you remember the entire world shut down on a dime? Everybody's plans stopped in a week, then a month. It was unbelievable. That's how badly humanity doesn't want to die. Now, of course, after we were like, okay, we understand it's not the plague or something, then we started, like, going about human things.
Starting point is 00:57:55 We started fighting about shutdowns and vaccines, all that kind of stuff. But, like, never underestimate, never short, the idea of how badly someone doesn't want to die. And so that's what I think is going to happen. And I think people are going to come around very, very hard to don't die, even though right now it's kind of a trickle, where people are starting to try to get acclimated to it. Like, what is this thing? How should I feel about it? Is it selfish? Is it not? Is it good? Is it bad? It's just a natural dance humans do to try to digest a new idea.
Starting point is 00:58:24 I think for me personally, after understanding more of the philosophy behind it, I feel better about it. So there has been a move in that direction for me. And maybe we can talk about the practical implementation of Don't Die. We've been talking a lot about the future and the philosophy and the, like, is it good or bad and why. You've got a whole practical program around this called Blueprint. And I know, Bryan, that you raised $60 million to kind of bring Blueprint to the masses. So for myself, who's kind of totally with you that society has made us unwell
Starting point is 00:59:03 and we have all of these addictions and all of these problems, and they're trying to kill us for profit, how could Blueprint help someone like me? Because I don't want to dedicate my life in the way that you have. Like, I've got other things, you know. You're doing the whole
Starting point is 00:59:19 all-in Bryan Johnson don't die thing. And that's definitely not me. But I share some of the goals that you have. Would Blueprint be for me, or, like, what's the program designed for? What does it do?
Starting point is 00:59:34 Yeah, exactly for you. So the origin of this is, I basically played out the script of early 2000s entrepreneurship culture, where I started as an entrepreneur. I didn't sleep much. I didn't exercise.
Starting point is 00:59:51 I didn't eat well. I was ragged. And, you know, in those environments, like, you brag that you only got three or four hours of sleep, because you want to be seen with high status among your group: that you work very hard, that you don't really need sleep. Like, you're beyond that. Totally.
Starting point is 01:00:05 So I got depressed. I got hopelessly depressed. Now, fortunately, I sold my company, made a bunch of money, and I realized that I was trapped in that cultural system that says, kill yourself, because the money's worth it. Right. It's a kind of weird trade. And then I was owned by all the addictions of society, fast food, junk food, etc. And so I had to dig myself out of that.
Starting point is 01:00:31 So I hired a team of quite a few doctors. I started spending a lot of money, and I basically tried to build the world's most formidable evidence-based protocol. Like, you just look at the scientific evidence, and you take the process where I try to measure the biological age of every organ of my body. I was 42 at the time, but my brain is a certain age. My left ear is age 64.
Starting point is 01:00:52 My heart is 37. My cardiovascular ability. You have different ages across your entire body. And you look at the scientific evidence and say, how do you then slow down the aging or reverse it? I did that for several years, trying to achieve exceptional biomarkers. And so in short, what I was trying to build
Starting point is 01:01:08 is my autonomous self, which means instead of me going out and trying to, like, forage food every day and figure out how to do this thing, I wanted to say: I'm going to measure my body extensively. I'm going to give it to AI, you know, run it through computational processes, compare it with the evidence,
Starting point is 01:01:26 and then bring it back with protocols. And I'm just going to follow the protocol. And so what Blueprint is trying to do is it's trying to say if the person says, I want to be healthy, but I don't want to spend my time researching if seed oils are good for me or bad for me. I don't want to spend my time seeing how much protein I have to consume. I spend my time doing blank.
Starting point is 01:01:43 we're just going to say, do this. And it's going to be based upon your biomarkers, on your genetics, on your situation. We're going to try to just basically automate the entire process. Now, we're not there yet, but the idea is people don't want to spend the time and they don't want to be confused and have to chase this thing down because it's an endless endeavor, as we all know. So yes, like, you would be our perfect customer. What we're trying to solve is, basically, how do we give you exceptional well-being with the least
Starting point is 01:02:13 amount of effort possible. Are the limits of that well-being health, nutrition, exercise, those sorts of things? I mean, there are so many well-being and even, I believe, longevity studies that actually link lifespan to things like the quality of relationships in your life. I mean, exactly. I go there as well. I mean, yeah, talk about that. Can I buy a friend?
Starting point is 01:02:35 Can I buy a girlfriend? So, yes, we are working on that. Yeah, like, we're trying to outperform every health system in the world, every concierge system in the world, because their model is typically to sell you stuff, like to sell you therapies. But the highest-value therapies are having a good relationship and having friends and going to bed on time and not eating fast food. So it's really about a lot of behavioral things. And so there's a lot of ways we're working on helping you with those hooks so that you start incorporating good
Starting point is 01:03:08 habits. And this is why Don't Die is not a selfish endeavor. You are your friends, you are your family, you are your coworkers. Like, you all naturally adopt those practices. So it's basically saying this is a team sport. And so that's why we're going to take this on. I mean, I learned this principle when I was raising my son. I was teaching him how to swim. I was doing some research as a young father.
Starting point is 01:03:29 How do I teach my child how to swim? I saw there were three ways. One, you could push him in the pool and say, good luck. Two, you jump in the pool and say, you know, come to me. Or three, you show him a video of his friend swimming. The third one is the one that works best. When your friends do something, you want to do it as well.
Starting point is 01:03:47 And so, yes, this is entirely, like, how do we actually adopt positive lifestyles, and then therapies where appropriate? Bryan, this was great, and I really appreciate it. You know, because we're normally a crypto podcast, I do have to ask this, though. If you're building Venmo today, sir, would you use cryptocurrency?
Starting point is 01:04:03 Yeah, so I built Braintree. I started Braintree in 2007. I sold it in 2013. I think we were the first company to integrate Coinbase. Wow. So, yeah, so I was bullish on crypto. And had I not sold Braintree, I'm guessing I would have been all in on Web 3 over the past
Starting point is 01:04:26 12 years, with all the things that have developed. So I've looked at how the industry has matured, and I have to say, what an amazing space to build in. There's so many cool things being built, and I really admire people like, you know, Brian Armstrong. At the time, like, he was all in and, you know, he's just plugged away. And so, yeah, I'm very bullish on Web 3. In fact, I've been poking at crypto for the past year, trying to figure out how to find the marriage between Don't Die and Web 3. I don't want to do a token. You know, I don't want to do something
Starting point is 01:05:05 where people are like, this is a money grab. This is, like, how do you build sophisticated infrastructure for Don't Die? So I've been poking at it, and I haven't found it yet. But it's definitely on my radar. I really love what the industry's doing. And I'm excited about the ways it can work together. Well, we appreciate you being patient with finding the right solution in crypto. We know bad things can happen when people are rushed into crypto.
Starting point is 01:05:31 Bryan, this has been fantastic. I've learned a lot. I'm very inspired. Before I got into crypto, I was on my own career pursuit, trying to figure out how to integrate physical therapy, mental health, and nutrition into one private practice. And I kind of think if you add a bunch of science and research, you actually end up with what you're doing. And so it's very refreshing to kind of come back around full circle and touch base with someone who's really pushing the frontier in that world. And so thank you for doing
Starting point is 01:05:58 what you're doing and wish you the best of luck. Thank you, David. I have to ask, where are you both at after this conversation? Ryan, you want to go first? I'm warmer on it. I'm definitely warmer. I'm still skeptical of the details of, like, transhumanism and, like, how that might, how we might merge and what longevity would actually look like. I also very much think that quality of life is kind of important, which is why I like the idea of you kind of expanding into other areas. I think the thing I'm most at peace with after this is it's not just don't die for me.
Starting point is 01:06:35 It's don't die for us. And I think that flips the whole, isn't this whole thing selfish, on its head and fits better with my moral framework of things. So I'm leaving this conversation warmer for sure. Yeah, cool. David, what about you? Yeah. So, like, I once upon a time was kind of pursuing longevity in a very loose sense, way back in the day before I found crypto. I was, like, reading David Sinclair and some of those more old-school books, and kind of also doing a little bit of just, like, self-experimentation, like carnivore diet, keto, like all that kind of stuff, fasting.
Starting point is 01:07:13 And I found it was actually very solitary. So you can't really hang out with friends or go on dates if you're a carnivore. Like, it's not totally compatible. And that was actually the thing that I found to have the most friction. And so the notion that there is a system out there which makes it easy for me, and not just me, but my friends and my local family, and this becomes a social norm, is what excites me about this. And so I was always warm on it to begin with.
Starting point is 01:07:45 Getting into crypto, I was like, well, I've just abandoned all of my healthy habits and now I sit in front of the computer for 16 hours a day. And so now I'm trying to get those back, but I find it highly enjoyable that this is now being pushed to be like a social norm
Starting point is 01:07:59 so we can all kind of, like, revel in it. And so much more warm, yeah. Yeah, cool. Bryan, thanks for coming on today. Yep.
