Bankless - DEBRIEF - We're All Gonna Die
Episode Date: February 21, 2023. Debriefing the episode with Eliezer Yudkowsky. This one was so good, we had to share. The fate of humanity might depend on it. The Debrief episode goes out every Monday for Bankless Citizens. WATCH THE FULL EPISODE HERE: https://youtu.be/gA1sNLL6yg4
Transcript
Hey guys, you're in for a treat. David and I decided to release our debrief episode. This is usually reserved for bankless citizens. That is the premium access to the bankless RSS feed. Separate episode that we release right after the episode. This one, we decided to release because I think it's really important. People get the context for how we were feeling right after that episode. It might be cathartic after what you've just heard. So we hope you enjoy it.
Yeah, I think this episode is going to cause a bunch of stir, a bunch of conversations.
We're already seeing that inside of the Bankless Nation Discord.
So we're assuming that that conversation is going to also be happening elsewhere.
So we figured we'd add more context to our reactions to this episode and make the debrief public for once, which is a nice treat for the Bankless Nation.
So here we go.
And guys, you can get these debriefs on a regular basis.
If you go to Bankless.com in the top right, there's a big red subscribe button.
Click that button and you can get the Bankless Premium RSS feed in your podcast player.
Enjoy.
Here we go. Cheers. Welcome to the debrief. This is our episode after the episode with Eliezer Yudkowsky. David, I didn't realize that this was going to hit you so hard, man. It really did.
I, you know, I don't think I was ready for this. This wasn't your first time going down the AI alignment rabbit hole. No, certainly not. And I'd also read a lot of what Eliezer had said before, listened to previous podcasts, heard him make the case. But I think there's something much more
visceral about this versus any other time I've read his writings or any other time I've seen him.
And this is like, I feel like I was looking across someone who was like utterly defeated.
And he said he still had some hope left.
But it didn't feel like that looking across to him right on our Zoom screen.
It was hope in the sense that, like, sometimes people just doubt themselves, and that's an equivalent amount of hope for him.
Yeah.
This is a man who spent 20 years working on the AI alignment problem and education problem.
And he is, as I understand it, one of the foremost thinkers on the subject,
who is basically like, it almost felt like to me that he was throwing up his hands and saying,
not throwing up his hands, he's basically saying, what more can I do?
I'm just going to live out the rest of my days peacefully.
and go die in the way that I think is best.
You know what I mean?
It felt like a general on the battlefield who, like,
knows they've already lost,
and the army's about to get wiped out
and saying to the troops, like,
okay, go die as you see fit.
You know what I mean?
Like, it had that kind of feeling.
And toward the end,
I don't know if Eliezer was getting emotional or, like, not.
I was wondering about that.
I mean, how can you not? It seemed like it, right? I mean, like, um, just, I think it's that. It's also the,
um, the sincerity through which he expressed these viewpoints. Like, there's not a doubt in my mind that
this man believes what he is saying. Yeah. Uh, and I combine that and I'm like, well, um, maybe this guy's
like the Vitalik of artificial intelligence, or, like, of AI alignment. And he's given up. You know,
what hope is there?
And so, like, I guess I was thinking we would enter this podcast and have, like, well, here are all the ways that it could go really wrong.
And I think we should be more concerned about these things.
And there needs to be more attention.
But I thought there would be some silver lining, right?
It's like, you know, all the existential types of conversations about nuclear proliferation, nuclear holocaust, or about global warming, or, you pick your poison, biological weapons. There's always this, but if we do XYZ, then there could be a happy ending. There was none of that here. There was no happy ending, and that's what hit me especially hard in this episode. Yeah. In the agenda, we were like, all right, what's the bull case for AI, which is Bankless language for just saying, tell us the positives. And then there's also, tell us the warnings, the red flags that we need to look out for.
And I knew that this episode was going to be like, oh, there's a lot more warnings than there is blue sky.
Yeah.
But yes, I was not expecting, like, hey, there's no blue sky.
It's only a pit of, there's only the void.
There's, that's the only thing.
And I think perhaps one of the reasons why you're reacting to this is just because this is the guy who's on the furthest reaches of the frontier, who's clearly smart about this, who's clearly thought a lot about this, and is the guy to lead the charge against this. And it seems like he gave up years ago. Yeah. And he gave up after giving it a big try. Like, you know, in 2015 he had the attention of the world. I feel like this is when Nick Bostrom's book came out, Superintelligence, I believe it's called. And there was this big conference, and you had the billionaires and sort of the tech elite, the Bill Gateses and the Elon Musks, saying, yeah, this is a big issue.
He thought that was the moment.
And then I feel like all of maybe the heroes or the helpers who are supposed to partner with him
kind of disappointed him because it turned out all that they were interested in doing or all
they ended up doing was like getting wealthy off of new AI projects or like getting some
sort of social signal boost from this and not actually doing anything to help solve the
problem. And so I feel like he's coming out of that too and he's just like, well, I guess,
I guess this is how we die. And, you know, like, we know the peril of Moloch traps and coordination failures and how intractable they are. This is the Moloch, the most Molochy Moloch that there is, basically. Yeah. This is actually, like, other things are, oh, Moloch-lite or Moloch's cousin. No, this is Moloch as in, this is the last step. There's no reality past this Moloch. It's like trying to stop the internet. Like, how would you
even go about doing that when there's such
tremendous economic upside for everybody to want to continue
this project called the internet? It's like trying to stop
electricity. It's like I mean this
ball is in motion. I don't know, I guess I just
pray to God that this guy is wrong. Right.
And it would even be him being
wrong.
Well, that's what he said.
He said, the only chance is that I'm wrong. Or that there's some unforeseen way of solving this.
It's like a logic problem, right?
Like, this is just,
this is just like this massive logic problem.
And he's come to this, like,
maze and this logic path
where it's like, you always end up there
over and over and over again.
My one question is, do you think he spent too much time
in his own head thinking about this
and has gone through like very dark paths
as a result, and it's kind of... No. I mean, like I said, it's like a logic problem, right? And so I think he's just gone through this logical path of deduction, and he's come to this conclusion. And I think it's very normal for people to not want to think about it. It's, like, baked very deep in our DNA: I don't want to think about death. Like, I don't want to think about the end of humanity. I'm going to think about literally anything else.
And so he's just the only guy who's, like, smart enough to articulate the position and committed enough to the actual logical process to get there. And he just happens to be the one sober guy. He's like, hey, all you other people who are trying to, like, make yourselves naive in this closing blanket of profits and revenue using AI, you guys are part of the problem. It has the feeling of some of the
early sentiments I've read of the physicists who created the first atomic bombs, right?
Yeah.
Like the Manhattan Project, and this feeling of, my God, what have we unleashed?
And like out of that, them wondering how long humanity would actually last, like that there is some sort of a, you know, even this is band of physicists, I believe, grouping together who had equal concerns.
I don't know the full history, but of the Doomsday Clock, right? Of like, now we have to tell the world how dire this situation actually is, of nuclear proliferation, and how close we are to midnight on the Doomsday Clock.
I guess I've never really viewed the nuclear proliferation from that perspective,
but that makes sense as in like if you are a scientist who just saw a nuclear bomb go off for
the first time and then you kind of put the pieces together of like, oh, soon everyone's going to have
this power. And then all of a sudden, like, well, of course there's a doomsday clock because
as soon as everyone has this power, the odds of somebody pressing the button rise to near certainty over time. I think a large number, like a decent proportion of the physicists who,
like, were involved in these projects, didn't think we would last, you know, another couple of
decades. But you're not making the comparison between nuclear arms and AI, because the obvious difference here is that in the AI example, the nuclear bombs are sentient and have an agency to live.
Yes.
At our expense.
That's sort of the, yeah, I mean, we asked that question and he's like, this is way worse.
Doesn't he have that motivation to create, like, a Doomsday Clock type of social education apparatus? Because, like, what is the point?
I mean, he kept giving kind of this laundry detergent analogy, I think, for this idea that you can create an AI using garden-variety ingredients. You don't have to have, like, you know, enriched uranium.
Well, this is what Daniel Schmachtenberger talks about. We're just like, the means to destroy the world in many different ways is becoming easier and more accessible as technology progresses.
And so this is like why any and every sentient civilization will always progress towards this
inevitable outcome is because we will always make technology and we will always make AI.
But AI is not the only thing in this category.
It's just like the worst.
It's the worst one.
But there's also, like, the ability to make an absolutely massively deadly virus in the comfort of your own home, which is soon to be in the hands of everyone because of bioengineering, right?
But I always think on the other side, I know, that's a huge threat, but I'm always like, but
there are vaccines too, and that technology gets better.
Right. This is why this is the worst one. Because there is no... it's the assumption that the morality, the ethics, of AI and humans is completely divergent, as in orthogonal, which is the word that Eliezer has used in other capacities. It's like, AIs will start to create their own frameworks of morality and ethics, and it will be insular to the AI species, and it will not contain our morality and ethics.
It will be completely divergent from each other.
And so what they think is good or bad will be on a completely different plane of existence, and their plane of existence won't intersect with ours.
And that's the crux that every other technology does not have, and why nuclear arms and generating a virus and, what was the other one? Nuclear, generating a virus.
I mean, nanobots could be another one.
Right.
It's really the morality conversation.
That's the separating factor here between the AI doomsday and all the other doomsdays.
Yeah.
That's why you asked why this hit me.
It's like, yes, I've dealt with the existential things.
You know, like obviously different coordination failures before.
but the certainty of this... Again, I mean, you go back to, like, I only saw it once and I was half paying attention, but the Don't Look Up movie. I don't even know if I quoted that movie scene correctly anyway, but it felt like a scientist who had been spending decades trying to tell the world that this asteroid was approaching. And now was at that kind of end limit, like, my God, these people aren't listening. We actually as a species don't have the ability to coordinate and solve this one and figure it out. And, um, yeah, that was really, like, depressing. Like, on the level of an asteroid is careening towards Earth and we are doing nothing about it. That's why it hit me so hard. Did it hit you like that?
But, you see, no, I've done this before. And not to say that you haven't, because you've gone down this rabbit hole before. I thought I was ready. So this isn't new for you. But, like, yeah,
I remember like listening to Eliezer on Sam Harris's podcast way back when. And I was like,
I was like painting my dad's house. It was like my summer job. And I was listening to it and it's like,
I was going through the existential crisis. This was from 2018, right? Yeah. And as I was going through the existential crisis, I was like, oh, this is bad. Like, oh, this is really bad. But I need to, you know, get myself through, like, physical therapy school. And so, what am I... I'm still going to do all
the same things I am going to do tomorrow as a result of this information. And so like, how is it
going to impact your life? Like, you're still going to go pick up your kids. Still going to go kiss
your wife. Good night. You're still going to do the bankless podcast. Like, what are you going to do
about it? I guess. But like, I, I was more optimistic that we'd have a shot.
at like persisting past the next 100 years
than I was coming out of this episode.
Are you gonna turn into, like, an AI doomer and...
Is that what Eliezer is?
You know, is he an AI doomer?
Is that somebody, is that like how we should dismiss him?
Like, is that a dismissal of his points?
Or is that just like a, I don't know,
I guess maybe you're being more stoic about it
than me right now, which is like, well, you know, it's been a nice ride anyway, you know,
I guess if, if Eliezer is right.
I mean, you could drop everything and start, and we could turn the bankless podcast into the
AI alignment problem podcast, and we could start to fight that fight if you wanted to.
That is something that we could do.
I just, I guess, right?
I guess.
If you want to start to work towards solving this problem... Because it's basically, either you go about living your life as is and just enjoy the fact that you're alive in the first place, or you turn into an AI doomer, and you're like, you build your underground bunker for when the AI comes.
There's no underground bunker.
I think I would probably just, like, enjoy, like, life right now.
Or you just turn your entire life into following Eliezer and, like, start to join that coordination group. Which I think I totally suggest that we do, but I still kind of want to do the normal Bankless things as well.
I mean, I think that
I mean, maybe in some way
crypto, right?
I mean, we talk about solving
coordination problems. I think we're nowhere
near to like
solving the coordination problem of
artificial intelligence. In fact,
this is part of the content we couldn't get to, Bankless listeners, because it seemed so pointless.
We wanted to ask him questions like, crypto, now that we've created this programmable money system, where the robots get bank accounts, and not only do they get bank accounts, they can actually build banks themselves, have we, as crypto, just empowered this artificial intelligence? We wanted to ask him questions like that.
But it just seemed so pointless by the time we got to it.
It's like, of course his answer would be yes.
But if, you know, he would have to be a different personality.
He would have to put on the hat like, oh, you want me to be the AI bullish person?
Let me take off my actual hat and put on my fake hat, which is my AI bull person.
And then I'll be a fake person in order to act out.
He was not fake at all in this conversation. That's one thing I'll say.
It's like, but I expected him to be like, yeah, you know, you crypto people should be careful
with what you're doing.
There are some good things and some ways you can raise money to fight this fight
or solve coordination problems in other ways that could be advantageous,
but in other ways you're creating infrastructure to, like,
you know, increase the power and decrease the timeline
through which an artificial general intelligence can come destroy us.
I expected that, but, like, by the time we got to it,
it's just pointless because I already knew what his answer was going to be.
It's like, yes, and it doesn't matter.
It's like there's just a nihilism. It went past, like, an absurdism of, yeah, we're screwed, let's laugh about it, Rick and Morty style. It got to, like, oh man, this is heavy shit.
Yeah. Like, anyway, that's how it hit me. I don't know if it's going to, how it's going to hit the listener.
Some of you guys might be listening to this and be like, I know much more about artificial general intelligence.
I know about the counter arguments to someone like Eliezer. Is this even a technical artificial intelligence question?
This is not about coordination failure. It's a coordination and morality.
and philosophical question.
That's why it hit me harder.
It's not about the details of AI.
It's just like the concept of AI is just
one of the pieces of the puzzle.
Yeah.
I mean, do you think that there's any possibility
that he is completely,
like I know there's a possibility.
Do you think it's...
Here's my bull case that I think
maybe perhaps Eliezer might also agree with
for how we still exist.
We make the AI that he thinks that we're going to make.
and the AI just does not give a fuck about us.
Well, I tried to propose that.
You ants can just have your earth
and you guys are making it marginally difficult
to harvest your resources as resources elsewhere.
So we're going to go elsewhere.
If you guys become no longer the hardest resources to cultivate, we'll come back.
But right now, we'll just go grab Mars.
So it just, like, blasts off into the Milky Way somewhere. Yeah, just goes elsewhere. And we get, like, a few more generations to live before they come back and then eat us. Or do they? Yeah, maybe they don't need us. Maybe they don't care.
Maybe they have other things to do. Yeah. I mean, there are these possible outcomes. Yeah,
that's maybe part of the follow-up Q&A of just like, it still didn't totally make sense to me that
the AIs would be, like, auto-evil. We want to mail everyone a bacteria that's going to destroy every single human being and rearrange their atoms.
Like, maybe it just, the default is ignore.
If you're not getting my way,
if you're not going to shut me down,
if you can't shut me down,
then see you later.
I'm going to blast off and, like,
go explore the rest of the galaxy.
I don't know.
So here's my question for the hopefully incoming Q&A session
with Eliezer.
So we have to train our AI models, right?
We have to train them on data.
What data do we have?
The internet.
where did all the data come from on the internet?
It came from humans.
So don't we actually imbue our culture and who we are as humans into AIs that way?
And like even though it's not technically part of the code as to how to learn values and morals,
won't they just like absorb it just because that's where their data's coming from?
Yeah, I mean, that is sort of people's argument of, like, why can't it be a gentle parent to us? Why does it have to be?
Why can't it kind of be some sort of, like, a father figure for humanity, and be like... it will literally have our DNA in it. Well, not literally, but... No, it could have our memetics in it. It could. Yeah, right.
I mean we could ask that question again
I feel like we proposed that
and he answered it
maybe he wasn't in the headspace to kind of like
answer it in more detail
or maybe the question has to be constructed in a different way. Or maybe I just, like, peppered... I think there were times where I overwhelmed him with questions. I said this in the intro. And, just, it's a style.
Well, he... Choose your own question. Your style is like, here's a bunch of words; it's all collectively a vibe. Respond to the vibe. Which is actually good podcasting, but that's how I've learned to ask questions. And many, many people work with that style of things, where, like, you overload them with questions, but they get the vibe and they already know what they want to say. Or they just choose which one to answer.
Exactly, yeah. Well, most guests... I think I've been on the guest seat more than you have, but most guests just pick the answer that they want to give. You already know what you want to say anyways. It doesn't matter what the question is. Yeah. He is not like that. He's very process-oriented and logical. He's like, you sent me three queries, and I need to return... He's like, nay. Yes. Exactly. I need to return three direct answers, and it's too many queries at the same time. It's like, don't DDoS me.
Exactly. Yeah.
But yeah, that's, I, you know, that's kind of a style thing aside.
But like, yeah, as far as the substance, I don't know.
I feel like this list of people he mentioned: Paul Christiano, Ajeya Cotra, Kelsey Piper, Robin Hanson.
I've heard of Robin Hanson. I don't know very much.
He wrote this book, The Elephant in the Brain, which is one of my all-time favorite books.
Yeah, we spoke at ETHDenver, 2018.
Do you think that this could be, like, just somebody listening to this is like, oh, cute, the crypto guys are interviewing an AI person. They're all scared. Isn't that cute? It's their first time. And there are, like, really good answers for why we won't all be destroyed by artificial general intelligence. No, no, because crypto... sorry. Thanks. We're futurists in crypto. It's not like we know nothing about AI. I thought we knew a little bit. Like, we knew more. Yeah.
Like, I've, I mean, I think a lot of people will listen to this podcast and be like,
that is complete BS.
Like, I'm talking about not crypto people.
I'm talking about like, normies.
Oh.
Right?
They'll listen to this and be like, what is he talking about?
I'm not afraid of Siri.
Like, see you later.
What a crackpot.
But crypto people, we're like totally into this.
Like, we understand.
Right.
I kind of always, it was, I was trying to think about that while making the agenda for the podcast.
Like, okay, we've never done intentional AI content on.
the podcast, but I'm not going to assume that the average bankless listener doesn't know about
the alignment problem. I'm going to guess that like 50% of, at least 50% of bankless listeners
already knew about the alignment problem going into this podcast. Yeah. Evil AI coming to kill us
and we can't teach it morality and it just gets super intelligent. And then we didn't even use
the paperclip analogy, the idea that you construct some general intelligence to create a paperclip factory, and what it ends up doing, as a byproduct, is turn every atom in the reachable universe into a paperclip, including all the rest of humanity. And this is, like, an analogy, and I think a device, actually created by Eliezer. Which is, like, another fun fact: this guy has been thinking about this stuff for a while. Yeah, this guy is, he's inside of a lot of conversations, let's just say. Yeah, for sure. It's heavy, man. It's heavy. Are you good? I just, like... are you? Probably.
I'm fine. Yeah, I'm generally, like, stoic about these things. Yeah, you seem like I need to kind of check on you tomorrow morning. Dude, I was, like, getting worked up in that episode a little bit. Like, yeah, um, wow, shit. That was the prognosis for humanity: it's fatal. What do you do? Yeah. I guess I haven't thought about this stuff in a while. Um, don't let me drag you down,
dude. Keep your vibes up, okay? Don't let me drag the rest of the nation down.
Hey, we can turn this into an AI alignment podcast if you want.
I'm not smart enough to do that. I don't, I actually, no, it goes back to, I don't think
it's an AI thing. It's just, like, a philosophy and awareness thing. Yeah. We are really good at educating. We can just, like, include it in the intro: Welcome to Bankless. We're talking about the frontier of money and finance. Also, remember to talk about the AI problem and get everyone on board with the AI alignment problem. And now, into the episode.
We got 20 years left at best.
Two to 20.
20 years.
Yeah.
Hell of a 20 years.
It's heavy.
It's heavy.
It's heavy.
All right.
Well, maybe he's wrong, though.
Can we say that?
Okay.
Here's one thing I'll say. I always kind of shelve it, like, yeah, there's the long tail of, maybe the black swan works in our favor this time.
So, Vitalik has given to Eliezer's institute in the past. And so we reached out, and, as we do, we asked kind of our close contacts, like, hey, you're a big brain here. We're having this big brain on. What would you ask him if you were us? And Vitalik, I don't think he'd mind me saying, he's like, well, just stay away from topics like, you know, the centralization problems of artificial intelligence. Because Eliezer is well past that.
He doesn't care about centralization.
He wouldn't have let us go there.
Yeah, he's like, we're all going to die.
He would have immediately said who cares about the centralization.
Exactly.
Exactly.
So, okay, we didn't go there, and we weren't silly enough to. But, like, Vitalik's comment was, um, Eliezer's probability of doom is probably like a 0.9, a 90% probability of doom. Actually, after this episode, I think it's like a 99.8. It's like 98-plus. Yeah. Um, and then he said, but my probability of doom is probably a 0.1, so 10% odds.
I like those odds.
He's at point one.
That's what he said.
So I like those odds a lot better.
And Vitalik is also someone who's very smart.
And I want to know why that delta exists.
And now he hasn't spent his whole life on artificial intelligence, obviously, and
AI safety.
So, you know, like maybe there's a kind of a delta.
It's like I wouldn't trust Eliezer's opinion on all things crypto, of course.
But I don't know, maybe there's some hope there that.
There are... I guess we need to get other opinions, is what I'm saying, before we kind of... And I feel like that's one thing I'd like to do. I don't want to turn this into an AI podcast, but, like, I want to hear somebody... But before we do, I need a second opinion. I need some hope, David. I need somebody to come on. So I think Vitalik's actually the best person to do this. Because, once again, this is not an AI issue. This is a philosophy thing. And who else, other than an AI person, than Vitalik?
Well, he's pretty optimistic, you know, in general, but...
That's a question we didn't have time for, with Eliezer.
I wanted to ask, like, were you always so pessimistic, or what? I was worried that that would almost seem too prying, but I guess that retrospectively could have been a good question.
I think he was fairly open.
I was just like, I was a little, I'd say worried about his mental state, but it was
just very like, yeah, dude, it was very down.
I was just curious about like, okay, since you've so convictively come to this conclusion,
like, what do you do with your day?
We said he's on sabbatical.
That was the other thing.
On sabbatical, I'm taking some time. I might come back into this. I have some more availability. That's why I have time for podcasts.
Look, maybe he's just burnt out with it.
Maybe, uh, you don't know, man.
That's all I got.
I need to let this episode percolate sleep on this one.
And apologies to everyone listening if we accidentally give you an existential crisis.
We try to keep things light and upbeat and optimistic and bullish. Ultimately, we are bullish on humanity.
We're the most bullish people ever. And that podcast is, like, one thing where, damn, there's no way to be bullish about that one. Right?
Yeah. How do we turn this? How do we flip this one?
How do we spin this narrative? Not even us. There's another subject matter expert, who I can't remember. Maybe it was Eliezer on Sam Harris. But he talked about how knowledge is discovered, like what knowledge is, and how knowledge kind of exists even without a form factor to hold it in.
Does that make sense?
Yeah.
As in, like, there's a lot of knowledge out there that humans don't know.
Now there's a big gap between where we are versus where we will be when AI comes.
Um, 10, 20 years is a lot of time.
Um, yeah.
Let's remember.
Because there's a lot we don't know.
Eliezer is not an all-knowing being either. And, like, ultimately he's one smart person who's taken a close look at this and come back despairing.
There are, you know, other people who I assume come back more optimistic.
So, yeah, I guess there's that.
Yeah.
Well, that's it. I accidentally gave myself an existential crisis while trying to record a podcast.
But I'll bounce back tomorrow for State of the Nation.
We'll get some recording done.
We'll be back on crypto topics.
And I'll just forget this ever happened, huh?
I'll get Logan to make a POAP.
I accidentally gave myself an existential crisis while doing it.
It's a one-of-one POAP.
Wow.
We'll get that.
All right.
Well, check in on me tomorrow, and let's talk about it then.
Bankless Nation, hope you enjoyed the debrief.
I guess we keep on doing this.
Because why not?
Oh, because why not?
