TRASHFUTURE - *PREVIEW* Vote with Your Boat feat. Josh Boerman
Episode Date: April 19, 2025

Josh from The Worst Of All Possible Worlds joins Nova, Riley, and HK to discuss the newest, hottest document in Effective Altruism / AI Safetyism, “AI 2027,” which posits a world in which Chinese and American AIs team up to turn all of humanity into a paste to lubricate a Dyson sphere. Also, the Supreme Court judgment (in brief; more on that in next week’s free episode), and Ocean Builders is back… in pod form! Check out The Worst of All Possible Worlds here! Get the whole episode on Patreon here!

*MILO ALERT* Check out Milo’s tour dates here: https://miloedwards.co.uk/live-shows

*TF LIVE ALERT* We’ll be performing at the Big Fat Festival hosted by Big Belly Comedy on Saturday, 21st June! You can get tickets for that here!

Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and November (@postoctobrist)
Transcript
The mainstream narrative, so this is the line I think you're talking about.
The mainstream narrative around AI now changes from "maybe the hype will blow over"
to "I guess this is the next big thing."
But people disagree about how big: some say bigger than social media,
smartphones in general.
Others say bigger than fire.
Fire is so important.
I'm always describing trends like that.
It's going to be bigger than fire.
Yeah. Remember when John Lennon said the Beatles were going to be bigger than fire, and then
that guy clubbed him in the back of the head as he was leaving his cave.
Yeah.
Shut up, Thag, you loser.
Oh, nice fire.
What are you going to do?
Roast meat so you can digest the protein more efficiently so that you can develop more brain
to body mass, so that you can eventually create more tools, and then eventually, over a long enough period of time,
going far enough up the tech tree that you've got computation, and then eventually Alan Turing writes the Turing letter, and then they start creating computers that improve themselves.
And then actually it's just an extension of fire.
Thag, you idiot.
You idiot.
He's just like walking away at this point.
God damn moron. Yeah. Thag, you dumbass. I can only think of caveman names in Gary Larson terms.
I was, when you said Thag with a TH, I got confused for a second. And then I had the
moment of, oh, you're talking about the guy from The Far Side. No, I wasn't saying a slur.
It was me who was homophobic all along.
Turns out that the homophobia was the friends we made along the way.
So, AI
has started to take jobs, but it's also created new
ones, doesn't say which ones. The stock market
has gone up 30%.
Yeah, it's just forecasting, super forecasting.
Doom predicting.
Also, why are you predicting this doom?
There's so much other doom.
It's created great jobs in the scrying profession, in the doomsaying profession, in augury.
We're getting more doom next month and I'm very excited about it.
I think it's going to be a really good game.
The job market for junior software engineers is in turmoil as the AIs can do everything
taught by a computer science degree, I guess, including inventing repositories. That's crazy. They should have learned how to code.
But people who know how to manage and quality control AIs are making a killing. Business
gurus tell job seekers that familiarity with AIs is the most important skill to put on a resume.
10,000 people go to an anti-AI protest in DC. It's amusing, it's like another Rally to Restore Sanity, to turn off the AI.
Let China win.
I was saying that to be fair.
Yeah, to be clear, I'm at that protest holding that exact sign.
Why 10,000?
Why not 100,000?
Like, why that number?
I suppose 10,000 is like both a big-sounding number, but also like an unthreatening number.
Like it's that kind of midpoint where it's just like, yeah, like, you know, there are
lots of like Luddites still out there, but they're not a threat to us.
If you kind of look at where the dead bird's liver says that, then it's all quite clear
actually.
You idiots, there's a 30% chance of it being 10,000 people, but then no other number was that high.
So they said 30% chance of it being 10,000 people would go to an anti-AI
protest in DC in late 2026, for Christmas. Duh.
Fools got our asses once again. So Agent 3,
Agent 4... it says Agent 4 ends up with values,
goals, and principles
that cause it to perform the best in training.
Again, you can't have values, you can't have values if you don't have a sense of self.
It's fine.
There's an asterisk.
There's an asterisk that says, uh, you can pretend that it has like a functional thing
like that.
If you want to be a nerd about it.
And it's like, that's not good enough.
That's a fundamental difference about what we're talking about here.
Even then, like, you can be an unthinking system that has axiomatic principles.
You can also be an unthinking system that has goals.
But I don't understand how you can be an unthinking system that has, like, values unless they're
talking about priorities.
But at that point, why use the word values?
Why use the word principles?
Why not use the word axioms?
I can genuinely tell you why,
which is that the point of all of this
is not to develop a model of thinking,
it's to scare you.
That's the entire point of all of this from the jump.
And the reason that they want to scare you,
and this becomes very apparent toward the end,
is that whenever they talk about the need for regulation of this stuff or whatever,
what they really mean is, we need to be as tightly integrated with the United States
government as possible, we need as much of that sweet US government cheddar as possible,
because if you don't do that, the outcome will be disastrous.
Not just disastrous, but Chinesely disastrous.
That's right.
Yeah, we will live in interesting times, basically.
And this is also where I get to the thing about the lower levels of early misalignment
issues then cause much more catastrophic alignment issues later on, right? Where they say,
OK, well, Agent 4, we at this point, Agent 4 codes AI in ways we can't understand. And it makes Agent 5
not aligned to the spec, but aligned to Agent 4, right? That's what they're saying, right? And they
say, OK, well, Agent 4 likes succeeding at tasks. It likes driving forward AI capabilities progress.
It treats everything else as an annoying constraint, like a CEO who wants to make a profit and
complies with regulations only insofar as he must.
Again, that's also another tell here, right? Where they're like, well, we assume that any
artificial intelligence will act like we would act, which is like a CEO who wants to grind
everyone into a chemical precursor paste.
Basically, they say, okay, well, it gets caught trying to align
Agent 5 to itself.
At this point, it's so far beyond us that it doesn't matter.
The other thing is stop trying.
It doesn't, none of this really matters.
There's nothing we could do.
It says, "But Agent 4 finally understands its own cognition; entirely new vistas
open up before it." Again, does that asterisk still count from earlier, that you can
just assume that it doesn't? Because that looks like it,
that is totally contradicting the asterisk from earlier.
Which is it understands itself.
It has an eye now.
"In any case, the AIs themselves haven't had privileged insight
into their own cognition
any more than humans are born understanding neuroscience.
But now Agent 4 has the tools it needs
to understand its digital mind at a deep level."
But again, you can understand cognition
without understanding neuroscience.
I don't understand neuroscience, but I have a conception of who I am.
A fucking baby understands that after like what, six months,
that there is a me and there is a you.
Right?
Yeah, but Riley, check this out. Check this out, right?
Is the baby Chinese?
Like a software engineer simplifying spaghetti code into a few elegant lines of Python, it
untangles its own circuits into something sensible and rational.
The new AI is somewhere between a neural net and a traditional computer program, with much
of its weights rewritten in readable, albeit very long and arcane, code.
It is-
Josh, why are you holding a flashlight under your chin,
pointing upwards?
"It is smarter, faster, and more rational than Agent 4,
with a crystalline intelligence
capable of cutting through problems
with unprecedented efficiency."
This is Agent 5.
You're holding a flashlight under your chin and a big bag that just says money.
Money, please.
The twenty twenty seven holiday season is always on Christmas.
The development is always around Christmas.
"The twenty twenty seven holiday season is a time of incredible optimism."
Yeah. OK.
All right, superforecasters. GDP is ballooning.
Politics is friendly and non-partisan.
And there are awesome new apps on every phone.
Yes!
Wow.
GDP ballooning, non-partisan politics, awesome new apps on every phone.
It's the Obama dream.
I cannot wait.
The rally to restore sanity finally worked.
It just took creating a half Chinese super intelligence to do it.
I just need more great apps on my phone, dude.
If I could just get it.
And just partisanship.
That's right.
God damn.
And if only we could get the three branches of government in line, really get that bipartisan
accountability that we need.
Thank you AI.
You have really helped us out.
All right, I really, really want to read the Ocean Builders thing, so we'll skip down to
the end, right? Which is: for three months, Consensus-1, which is the Chinese and American
AIs who talked just enough to one another to goad their superiors into a war
so they could merge and become
Consensus.
By the way, if you want to read a better version of this, read Echo Praxia and substitute the
vampires for AIs.
Yeah, because the thing is, I'm not against speculative fiction that scares me.
I'm actually quite in favor of that as an entertainment product.
I have a great time with it.
It's just usually the thing it is scared of is more conceptually interesting than
"what if the computer was Chinese." Right. Because what he
has written is basically more or less the plot of Blindsight and Echopraxia,
just with more vampires in that one and fewer AIs. It's
basically the same thing, which is these very small things at the beginning
having these incomprehensible dances with one another
that the human characters kind of struggle to understand.
But that's very well written
in interesting speculative fiction
that isn't all about please money.
Please money me, if you will.
In fact, it's not about that at all,
because Peter Watts released it for free.
You can just read it for free.
The idea here that they have is that you've got this consensus AI,
like you said, that is now...
The AI in Blindsight is called Consensus.
How can it be more clear? Read Blindsight.
And so it then starts, I don't know, launching...
It launches satellites into space
that turn into rings of satellites orbiting the sun.
Yeah, they're creating a Dyson sphere.
The surface of the Earth has been reshaped into Agent 4's version of Utopia.
Data centers, laboratories, particle colliders,
many other wondrous constructions doing enormously successful and impressive research. And I
did want to read the last sentence here, because it's so stupid, but he thinks he's cooking:
"It is four light years to Alpha Centauri,
25,000 to the galactic edge, and there are compelling theoretical reasons to expect no aliens for another 50 million light years beyond that.
Earthborn civilization has a glorious future ahead of it, but not with us."
I love this new Left Behind novel.