TRASHFUTURE - What If Your Podcaster Was a Robot?
Episode Date: June 6, 2023. Hussein, Alice, and Riley talk about a new startup that aims to replace us all with chatbots based on Socrates, Steve Jobs, and Jordan Peterson reading the news, then we discuss the wild world of AI scare-vertising and its impact on the world's most gullible government: the United Kingdom. If you want access to our Patreon bonus episodes, early releases of free episodes, and powerful Discord server, sign up here: https://www.patreon.com/trashfuture *STREAM ALERT* Check out our Twitch stream, which airs 9-11 pm UK time every Monday and Thursday, at the following link: https://www.twitch.tv/trashfuturepodcast *WEB DESIGN ALERT* Tom Allen is a friend of the show (and the designer behind our website). If you need web design help, reach out to him here: https://www.tomallen.media/ *MILO ALERT* Check out Milo's upcoming live shows here: https://www.miloedwards.co.uk/live-shows and check out a recording of Milo's special PINDOS available on YouTube here! https://www.youtube.com/watch?v=oRI7uwTPJtg *ROME ALERT* Milo and Phoebe have teamed up with friend of the show Patrick Wyman to finally put their classical education to good use and discuss every episode of season 1 of Rome. You can download the 12 episode series from Bandcamp here (1st episode is free): https://romepodcast.bandcamp.com/album/rome-season-1 Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and Alice (@AliceAvizandum)
Transcript
Hello everyone and welcome to this free episode of TF. It is me Riley. I am joined in
studio by Hussein. Uh, hi. Hello. What's up. And, uh, I am joined in, uh, TF, uh, Glasgow Towers by Alice.
It's the free one.
It is.
And, uh, that is who we've got for you today.
You can tell because no one is, uh, yelling over me in a dumb radio voice and doing
the endos.
And it was doing an accent.
Uh, that's right.
We have banished the bit of introducing characters from Family Guy over a period of about 20 minutes,
in a way that incapacitates every single one of us.
Yeah.
I mean, like, I feel like none of us can really do voices.
I was sort of tempted to do a kind of like, Melvin Bragg style.
Like, hello, it's the, it's the free one.
Um, except that's not really Melvin Bragg.
It's not. I guess I could do you, like, an old newsroom, like,
yeah, it's the free one.
It's like, that's a good attempt, man.
That's a good, pretty, that's a good, yeah.
British Movietone, Pathé presents: the free one.
It's the free one.
Milo has been banished to the lagoon.
However, we are longs.
Yeah, he's been forming a more equitable society over there.
Fucker.
But we've got a little bit of news for you, and a startup.
And then of course, we're going to be talking about some more AI.
First piece of news, ladies and gentlemen, we got it.
That's right.
The transgender rights campaigners and advocates for Russia, according to me and my friends in Russia, have finally enacted something we long thought was impossible.
Some measure of consequences for sexual misconduct within the realm of British media.
This being Observer columnist Nick Cohen and his downfall.
So long has this been known,
and I believe actively and institutionally covered up,
by the Guardian and Observer, where he's been working.
I mean, long has it been posted about,
and, not to give fucking Left
Twitter any credit for anything ever.
But this was something that was talked about quite openly on social media and
notably was not responded to with any legal threats because it was true.
So, and of course, what had to happen to bring out this kind of secret in the very sort of
cloistered and chummy world of UK media was, as usual, the
New York Times has to just look at Britain and then see what is obviously fucking going
on here, and it's like, the New York Times, that's the one place where
it is compelled to tell the truth about Britain.
Yes.
Yeah.
It's curious, isn't it?
Yeah.
So, Jane, who wrote the story, and who I know a little bit
from, like, previously working in similar places.
She's a British reporter, and so she works for the London bureau of the New York Times.
But, and it's not, it's not to sort of say that the point is invalid.
But it's more just like, it's very interesting that
even, like, a British journalist who has sort of been around for a long time and been established for a long time,
the only way that they could do this story was through an American outlet.
Because, like, the big part of the story is also that the Financial Times had this scoop about Nick,
and they had the details, and, like, the thing that made it a story was that, like,
the Financial Times, I think, had eight women who had come on record to, like, say, yeah, this guy,
like, has been exhibiting these harassing behaviors and has been doing so for a long time.
And the Financial Times were the ones that were like, yeah, he's not really enough of a business guy for us to do the story.
Which is, like, number one, bullshit, but I think very emblematic of, like, how
cloistered British media is, to the point where, like, you just need a US outlet to actually tell kind of proper stories about what is actually happening in Britain.
And people are still getting mad about it, like, you know, of course. What do these people think?
Like, why is the New York Times, like, erasing our sexual harasser?
Hadley Freeman, the Guardian's own Woody Allen correspondent said that it was an
example of sort of like the pretended cultural superiority of Americans that they were publishing
this, and that it was very... hmm, which, come on. The thing that has confused me about this,
though, right. And I know it's just regular patriarchy. I know it's just regular institutional misogyny. But no one can seem to explain to me why it's worth
circling the wagons, like, to this extent for Nick Cohen. I've read his columns. Who is now a substacker?
Yes. Yeah. Who is he still friends with? What value does he add besides being friends with them?
It makes sense for them to build the 14-foot wall
concrete bunker around him.
I think the truth is he knows secrets.
That's my feeling.
You think he has, like, kompromat, just by virtue of, like, being in?
I don't know if it's kompromat,
but what I do know based on like kind of again sort of knowing
circles that he is associated with is that like in terms of like the British media landscape and
this isn't just like newspapers is also like you know the bar it's like panels on radio shows it is
kind of being on the board of like like certain kind of like quote unquote free speech organizations.
But yeah, like, it's a stature thing, I think, that kind of really sort of sets him
apart, in the sense of not necessarily being, like, a titan of journalism by any means,
but, like, really someone who understands that longevity is about, like, staying,
like, kind of being part of all these other sort of...
Going to the right parties, knowing the right people.
Yeah, I mean, I guess the other thing I would say,
and this sort of ties into another thing that's in the news,
which is a labor MP getting the whip suspended,
and then we don't know what else,
what other sort of like criminal sanctions might follow from this
for like also being a creep towards women.
And it just strikes me that this is,
you know, it's not a surprise to anyone,
but this is absolutely ubiquitous
in British politics, British media,
across a wide spectrum of it.
And just the reflexive urge to cover up
and to excuse.
It's just, it's quite something.
It's absolutely craven.
And it's the kind of thing that you can do these
like gotcha tweets that don't matter at all,
but are very instructive where you look at the people
who are now defending Nick Cohen
and you can show them their old columns
where today it will be, oh, well, he was an alcoholic at the time; six months ago, a column
about how toxic men will use alcoholism to excuse their own actions, and stuff like this.
Well, I think we can also notice, right, before we move on as well, that, as you sort of alluded to earlier, Alice, he now has a brigade-sized element of the UK's columnist bathroom cops, sort
of providing cover around these circled wagons.
The trans angle was interesting too, because like, you know, the New York Times obviously
no friend to trans people, but what they did do was they permitted their journalists to
ask one follow-up question, which is one more than British
media will let you do.
And so when they called Nick Cohen about this and they were like, how do you respond to
these allegations?
He immediately responded with his characteristic bluster of, well, I have many enemies, such
as the trans rights activists and Russians. And they asked him the
one more question, which was, okay, what about the other seven women? And he went, ah,
I just, that, fuck.
Yeah, exactly. The art of the single follow-up question. The art of the interview, you know.
All right, all right. Let's, let's, let's move on. Let's move on.
Elon Musk, of course, is also in the news, or rather his brain-chip firm, Neuralink, having received approval from the US FDA to conduct tests on humans.
He is right, right? He is no longer going to be murdering apes.
He's instead going to be murdering some slightly less intelligent Twitter blue users.
He's climbing his way up the, like, hierarchy of simians,
and now he's gonna kill some humans.
Don't do this, don't trust Elon Musk with your brain.
Do not let, I mean, this is the thing.
He has finally, he's tried to do it with posting for years. He bought Twitter to try and do it and it didn't work.
He has finally, at long last, found a way of living rent free in someone's head. I just
think it's funny that he's like, he hasn't killed enough monkeys. No, he has killed like
1500 monkeys. That's so many monkeys. And then the FDA apparently were like,
that's the last monkey who will ever get killed by this.
After this.
Yeah, there's a guy in the FDA that fucking hates monkeys
and they're like, yeah, this is good actually.
I'm looking forward to like,
people just malfunctioning for like no reason.
In the same way that like Teslas do,
like they'll sort of be,
and, like, the only way to fix them is,
I don't know, like, fucking turn them horizontal, that's the only way to make them function.
So I think people are just going to start randomly bursting into flames, but they're going
to be doing a sort of a dance from like a meme dance from 2017.
Yeah.
They're going to be doing the Fortnite flash dance.
Well, this is also, like, connected, because I feel like what Elon has sort of, like,
realized from his Twitter adventure is that, like, even though he's bought the platform
and rigged it in favor of all the blue checks,
none of them are actually laughing at any of his jokes.
Like, if you look at him when he makes jokes online,
you look at the replies to him.
These are all people who are just kind of, like,
advertising shitcoins at him.
Like, he is being spammed by the people who are paying him.
It's insane.
And so I imagine what he, so I imagine
my feeling is the whole like brain chip thing is so that he can finally create people who he can
like directly beam his posts to and they will laugh in the real world. Like I'll imagine a post.
Well, they'll see the post that he's put out in their brain and they'll go, ha, that's so funny. He'll be able to, he'll be able to force you to picture, like, a Bad
Luck Brian or one of those other 2009-era 9GAG memes.
You know what's, you know what's really funny is the greatest proponents and only adopters
of Neuralink will 100% be the same people who used to post the, like, NPC Wojak meme where it's, like,
slotting a chip into the hand. The one... the other thing is, right? The technology of a brain
computer interface, it's not necessarily like bullshit. This is allowing paralyzed people to walk.
And anything to do with like the biological functions
of the brain is gonna be a little bit body horror anyway
because brains are just like that.
But yeah, the main thing that I don't trust about this
is Elon Musk.
And you should not either.
You should not give Elon Musk a way to make you dream
of new and improved Ron DeSantis Twitter Spaces.
Do not give him no.
I'm saying things.
I hate this.
Or any time you think a like a slightly negative thought like, oh, I'm out of peanut butter,
you get to hear his voice saying concerning.
Looking into it.
Yeah.
Interesting.
Oh, God.
Yeah.
Why do I feel so strangely
about like, Emeralds now?
In announcing Thursday's news on Twitter,
Neuralink talked of an important first step
that will one day allow our technology
to help many people. One day.
And this is, if you recall, also, Musk has been saying
that this was gonna be in people's heads by 2020.
And again, like, he's going to have no shortage of people signing up to it, but the problem is going to be finding where to plug the chip in.
You're just going to keep drilling. It's going to be how much skull is there here?
It just keeps going. It's like, biologically, there has to be a brain in here somewhere.
All right.
I feel this is very cyberpunk in a bad way.
It's because, one thing cyberpunk usually does,
right,
is that the Corpo leaders, you know,
the, like, Arasaka or whatever,
they're usually, they're usually represented almost like a daimyo, right?
Sure.
They are serious and to be taken seriously, they're grim.
And they'll kill without thinking,
without a moment's compunction.
Like they are stony faced killers.
Elon Musk is.
Yes, yeah, that's a good thing.
Yeah, he is not. Good call there. Yeah. Yeah. He is, he
is, like, a kind of, you know, a clown who's danced his way to the
top of the cyberpunk dystopia. He keeps orbiting those people. Like, remember, a few
months ago, when we said that Tim Apple was going to have a bone sword for buying Twitter? Tim Apple is a corporate daimyo. He is, like, 100% that guy in a cyberpunk setting. You would have
to change very little. You put some, like, weird electronic shit on the side of his face and that's it.
Elon Musk, I mean, not so much.
Yeah, so very excited for the slight twist
on the world of Cyberpunk 2077,
that still has 55 year old memes all over the place.
Yeah, the guy who did those oatmeal comics
is, you know, his head is gonna burst into flames
in a public place and we're just gonna kind of note
that and move on, I guess. You're looking into it.
Yeah, you're never gonna...
Yeah, that's the sort of giant blimp that has the laser sky writing on it.
It's just the words looking into it.
Alright, let's talk about a startup.
The startup is called Wondercraft.
Wondercraft.
Wondercraft AI specifically.
Wundercraft AI.
Does it... are we 3D printing little like items here? Little... desk toys?
No, no, no, it doesn't 3D print anything physical.
Does that have something to do with Clip-Up? Not Clip-Up. No. Not okay so
something related but not quite Clip-Up. Give me a visual. Okay. Alright alright.
Blank reinvented. That's not a hint. That's the opposite of a hint. Okay. Art, I guess.
Start your blank today.
You're just thinking woman.
It's like an invention woman to the...
Like an outboarder, like you'd like, grow and fall.
Well, I'm wondering whether it's like a type of, you know, you can...
You know, because a lot of these AI guys are like obsessed with like, AI girlfriends and stuff.
So is this a service that like allows you to make
a bespoke girlfriend in the same way
that you can make a world of warcraft character?
Not a girlfriend, no.
Let me carry on a little bit.
Create your own studio quality.
Could still be a woman.
Could still be a studio quality woman.
Podcast. That's the one. No, fuck, I was joking. It's our job.
We're out of a job. It's coming for our jobs.
It's over. Our fucking jobs are being automated. It's the most over it's ever been. Yeah.
Our precious phony baloney jobs
I was fine laughing when it was all of your phony baloney jobs. Now it's my phony baloney job on the line.
I'm, I am Eliezer Yudkowsky-pilled.
We got to start doing airstrikes on all AI chips.
A large language model should be as illegal as being a member of ISIS.
You can't do this to me.
I need this job.
So, create studio quality podcasts in seconds, powered by AI. And this isn't just some random little tool someone's worked up that can do a few seconds. It's backed by Y Combinator
to the tune of quite a bit of money. And it is already sort of up and running.
They've already automated Joe Rogan. They've been doing it for six months.
You didn't even notice.
Anytime you think you hear him say, wow, that's not Joe Rogan.
That's the large language model.
Joe Rogan is in like the sub-sub basement getting tortured.
Yeah, they're just trying to wring the last dopey question out of him.
So here's what it says.
You can clone your own voice. Just share a 60-second recording of your own voice,
and enjoy seamless podcast creation with your own personal touch.
I do think about the fact that like hundreds, if not thousands of hours of my voice,
are just out there on the internet for anyone to do what they want with.
Don't remind them of that.
You can make any of us say whatever you want, and you can take that information
however you like.
Yeah, this is why it's important: also, don't be a fucking creep, like, just leave it alone.
This is why it's very important that we always have a little bit of overtalk,
so you can't isolate any of our audio cleanly.
Yeah, I think it's very important now that we sort of, like, speak with, like, higher or lower
octaves than usual. Yeah, have a show voice rather than your regular voice.
Yeah, my defensive technique is I'm never going to say any of the phonemes that are used in any slur
So you can't recombine them to make me say a slur
I'm only like only speaking like in consonants from now on. Yeah
So I'm skipping like every third syllable
So, you can also let AI write your script.
So, just write a few bullet points
on what you want your episode to be about, feed it to the AI, and let it take care of the rest.
I mean, who's your last?
I have a nuanced position on this, right?
Which is you can and should do this to me,
but only after I die.
I am thoroughly in favor of the kind of, like,
wringing out of, like, people's, like,
you know, how Hollywood uses, like, dead actors'
faces and stuff.
I love that shit.
That's hysterical to me.
So the future is like, even if you're dead,
like you'll still be able to meet
your sort of podcasting schedule.
Yeah, yeah, yeah.
What I want to happen, if I, if I, you know,
God forbid, if I get hit by a bus tomorrow, right?
I want you all to have to put up with AI Alice
for as long as the podcast lasts.
Yeah, we all just slowly get hit by buses or...
So we get narrowly hit by a bus that yeah,
yeah, gradual.
The office pool of which member of TF is most likely to first get a neuralink?
Yeah, I think it's Milo.
He's definitely going to get a neuralink.
So introducing Wondercraft AI.
Today marks an exciting day for podcast enthusiasts.
There's never been one of those.
And content creators everywhere,
as we introduce WonderCraft AI,
the platform that turns your written content
into high quality podcast episodes. Wait, wait, wait... of production, right? So we can just automate our own jobs. We can just set AI Riley and AI Hussein and AI Alice
talking, fuck off to the pub, put the episode out,
charge as usual for it.
I'm in favor of this now.
Okay, if ever the episodes that come out in the future
seem a little bit odd or different
or somehow in the uncanny valley.
Right, leave. That's a large language model.
I actually can't respond to that.
Isn't the risk also that if you get all the AIs of each of us to talk to each other,
that could be a bit risky.
I feel like you need some humans, right?
To just sort of keep it in check, but it's a really dangerous position to get all of our AIs sort of like.
Yeah, it could go really surreal.
They could end up talking about the founding myth
of Venice for like half an hour.
Yeah, but realistically, like what this actually is, right,
is it is the same like large push by venture capital
now that it's noticed that it can actually
more easily recreate
creative or creative-adjacent industries rather than, say, like, I don't know,
financial research, right? That those things that people thought were going to be very automatable are actually quite tricky to automate. And the things people thought were going to be
difficult to automate are actually easy to automate if you don't care
if it's good basically.
Sure.
You know, this is basically... the effort, as always, is not being aimed at podcasters,
it's being aimed at podcast studios and podcast networks, and the idea being, like, the fantasy
that we see over and over and over again with all of these products is that you can have an artistic project that's just notes, right? That instead of having to write a script, instead of having to have people setting up microphones,
instead of having to have a movie set or whatever, you can just... because what does the executive do? They do cocaine and they give you notes. And it's just, what if the entire process
was just the giving of notes?
That's it.
Well, one of the things where I think you're
actually completely right is that, like, the people
who I imagine would be quite shafted by this
are kind of freelance producers,
like people who sort of make lots of,
because I did a bit of corporate podcasting
production work during the lockdown.
And it was like a very
soul destroying process where again, it was very much like a guy gave me notes and gave
me shitty audio to work with. But at the time, like, you know, the people who were speaking on
this podcast, they really didn't want to do it. They were
very begrudgingly doing it because someone in their sort of, like, PR content department
or whatever was like, hey, we sort of need to do this for some reason because everyone else is.
And this kind of feels like the logical extension of like,
the selling point is like,
well, everyone else is making podcasts,
so you should do the same,
but also, like, it's a way of basically doing it
for the sake of doing it.
So it's like, it's not creating podcasts for,
like, genuine podcast lovers, like, in quotation marks.
It is really just there to kind of produce
a very, very cheap product, very quickly, but, like, no one really cares, because it's producing a
cheap product.
I know.
You're saying that it's not coming for us.
It's coming for the, like, I don't know, Pinkerton Insights sort of thing.
Yeah, it's coming for, like, the fucking, I don't know, like, the internal... the HSBC podcast, an advert for which I saw this morning on the tube,
where one of the advertisements for it was, like, how can you feel like you're a worthwhile employee in your workplace?
Listen to this podcast that will tell you all the answers. And so I listened to that podcast,
because I wanted to find out the answer of how I can be a meaningful contributor to the company that I work for. And in the 15 minutes, there were no mentions of, like, you know,
you need to enjoy your work; there were not even any mentions of, like, teamwork. One of the bits
of advice it did give out was if you come in half an hour early and stay half an hour late, you can have a conversation with your boss's boss
and that can help you in your career. And that's also one reason why it's really good to go back
into the office. It would be very funny to me, just, like,
if, apropos of nothing, one of those sort of, like, very workshopped, very corporate podcasts that I'm really listening to
just sort of adopted the styles of, like, us,
or like the Adam Friedland show.
Like you're listening to like the KPMG
like Workflow Tips podcast,
and they're doing like Slurs Chat.
Yeah, they're doing the extended lagoon riff.
Um, the other thing I see here as well, right, is that it's part of this larger trend of what the AI people imagine other people want to hear.
You're like, yeah, the answer is, of course, JPMorgan Chase presents, like...
The answer is JPMorgan Chase presents, like, Socrates and Einstein giving their dropshipping insights.
I've seen these and it's such dog shit.
Because right, the idea is, and this is more of an, I think this is more of something
that's thought of by AI boosters and AI hucksters rather than like serious AI people.
Which is the the lielandlies of this world.
Of course.
And the idea that you're going to somehow be able to create
just distill from their writing
the essence of Socrates and Einstein,
and have more than just two SparkNotes pages
being read at one another, right?
And anything is going to go beyond
just the barest surface level that will be beyond
some kind of key jingling is again laughable
and goes to show how little any of the people
who are really excited about this stuff
actually understand fucking anyone.
I'm so excited to listen to like
PricewaterhouseCoopers presents Epic Rap Battles of History.
So this is a blog post from WonderCraft. As we ventured into launching WonderCraft just a month ago,
a question loomed over us. What are the ingredients that make a podcast resonate with the audience?
First you just have a racist guy. Is it the charisma of the hosts? Is it the caliber of guests they invite?
I can assure you from personal experience,
neither of those matter at all.
It's bits, you need bits.
You need bits, the depth and relevance of the topics discussed,
or perhaps the regularity with which episodes are released.
Now, based on the facts and the guts of topics discussed.
Yeah, based on what product these guys are selling,
which answer do you think they land on?
I reckon regularity.
That's the one.
Yeah, that's what decides.
It's just, is it out every day at the same time?
Who cares, what's in it?
Is it there?
I know they're entirely wrong about that.
Yeah. I look at, well, Well There's Your Problem, and its, let's say, it'd be charitable to
say, irregular release schedule, and think, I don't know. Sometimes people like stuff that
isn't such a high production value. Now, granted, it is that way because that's the production
value we have in stock. But, like, even so, getting
your sort of like highly machined podcast about like, I don't know, Socrates giving you business
secrets of the pharaohs or whatever, I don't know, I just don't see it.
Well, you know, you pointed out something that I didn't think about, which is that people
are different. And when people listen to podcasts,
they'd listen to it for different reasons,
rather than for one reason.
But like so many things, right?
These guys only know how to make products for one another.
Right?
That's why every time you see,
like, and I always think about Twitter for this,
which is how the people, including Jack Dorsey, right?
how they always like to imagine Twitter being used,
or any of these social networks,
or any of these products, right?
Even like Uber, right?
It's only imagined as being used,
driven by and for basically middle class people at least,
because they cannot imagine
that many sort of social levels away from themselves,
is that they are imagining a podcast, and they created one as a trial, as a proof of concept,
where they just copied and pasted the top stories from Hacker News, which is also
owned by Y Combinator, which is why they could do that.
And then put it into their little box, had it summarize them, put the bullet points
into the podcast, and then AI generated a podcast that summarized
the top stories on hacker news and that's their like big proof of concept, which is that
they can create a thing that can deliver an audio summary so long as someone else has
actually written the thing.
But with so many of these, let's say bits of, you know, let's say on hacker news or other
bits of journalism,
most of those are gonna start getting generated by AIs. And then we're at...
Getting into Jason's thing about Habsburg AI,
where it's like it becomes so incestuous and self-referential
that it like loses the ability to be normal.
Well, I've seen this referred to,
because one of the ways a lot of academics talk about AI
is they talk about it as analogous to a human brain.
And so they'll talk, so AI when it is confidently wrong about something, they'll talk about
it as hallucinating.
And as AI copies too many things that have also been generated by AI, the further it actually
abstracts from any human labor, right, even just to write something,
the more it gets what this paper refers to as dementia.
And that happens very fucking quick, right?
You don't, and also it's like you haven't,
at no point have they actually made anything new,
the actual thing is just recapping something else.
Once again, you have some pretty impressive
audio production technology,
but you cannot escape the fact that without someone
actually doing something, someone else actually doing
something, you have nothing.
All you have is nonsense.
Also, also, social trust, like, that's one reason,
like, we joke about parasociality, right?
But like us being familiar to the audience, sort of like,
there's a connection there that you're not getting with, you know,
AI voice number three.
Yeah, also, even for, like... maybe you would,
maybe you want to be, like, fucking, like, the guy in Her,
and you're like, you want to fuck the AI voice number three.
I don't know.
But even the, even the, like, tech guys who are, who will likely kind of boost
this for some reason, like, even the podcasts that they
like, the reason why they like it is not really because of
the show, it's because like, they follow these guys on
Twitter and, you know, these Twitter guys are now doing a
podcast and that's kind of, you know, a fun thing. Like,
their show is not fun by any means, but it's kind of like,
even the logic behind this doesn't fit with how these guys actually
kind of consume media.
So really, it's just kind of there.
So it exists for the purpose of like,
oh look, we can kind of do a thing.
And it doesn't really matter if like,
that thing is kind of counterintuitive to how,
the industry sort of works or like,
how people who actually work in podcasting, as in, like,
making a living out of it, what they actually do. All that matters is that it exists. And so
even if, like, the Bill Gates talking to Socrates weird podcast is just very jarring and doesn't make sense,
like, even if that is not gonna really be listened to by anyone serious,
its existence is enough to justify it. That is effectively what seems to be
the kind of guiding philosophy behind this, behind this guy.
It's the same kind of impulse that leads you to want to demonstrate how smart you are
by saying you've read the dictionary because it's the book with the most variety of words.
And right, at every point, it is the appearance of depth. Even with this, the example of the Hacker News summary podcast,
it is nothing but the appearance of content. You're still just putting your
veneer over something someone else has done. So what they say is: the success of the Hacker News recap is indicative of a broader potential,
that anyone can step into the world of podcasting. With Wondercraft's technology, professional podcasting expertise, or a captivating voice
is no longer a prerequisite. But I like, the thing that-
That was one of the things that's one of the things I like about podcasting is the barrier
to entry is relatively low. Okay, there's improvements you can make on the top end or whatever. But you can start doing this with a laptop microphone
and cheap headphones.
Well, what they say is all that's needed
is content of value.
And what that story is about once again,
is the fantasy of the executive who no longer needs
to have anybody working for them to produce a piece of media. Right. But even in that fantasy, you still need that: you
can never do away with the person entirely, or your AI very quickly starts buying
a hundred gallons of milk and pissing in the closet.
Yeah. You remember the sort of, like, the WALL-E thing, the fear that, like, humans are going to become
very dependent on AI and, you know, once it's operating on its own, it just outpaces
us? It seems like the opposite happens.
So like once it starts relying on itself, it just it falls down.
It doesn't have the juice, doesn't have the spark, you know?
Well, thus far at least. But to me this also relates to the whole idea that,
oh well, it's just... fucking Calacanis was talking about this:
now with AI, I can see what happens in Succession when Tom runs the
company.
It's like, awesome. Like, there is no rest of the show, there's no rest of the
Mona Lisa;
it ends at the frame, it ends at the end credits.
And it goes back to like...
Yeah, it's sort of like process of curation
and of like deciding where stuff ends if it means anything.
Like, yeah.
And what this kind of impulse of applying this
to any kind of creative activity is,
is it's trying to allow the curator and the creator
to swap places, by making the curator the creator.
Where you can just say, I like Succession,
and I'm such a good curator of it,
I'm gonna curate myself another episode,
because I can describe what I would like.
And that's why I talked about it.
I just made AI imagine the rest of Michelangelo's David,
and it's like a marble cube.
That's why it's like, it's AI generation
is the anti-printing press
because it makes communal communication impossible.
It's also the anti-art,
because it destroys a piece of art
and turns it into just another bit of content,
and it muddles all the human element of it
by making it impossible to discern.
I mean, it's the same impulse that leads Disney
to keep milking the same IP to enshittify everything.
It's the enshittification of all that exists.
Anyway, anyway.
I wanna go on about AI,
but I wanna leave Wondercraft behind for now.
Until it takes all of our jobs.
Yeah, I would like to leave Wondercraft well behind. At the bottom of a large well, preferably.
The positive news is that for situations in which Milo is away, we can simply implement
the AI Milo.
That's true.
So listeners will never have to go without a sex noise of
some kind.
Yeah. We haven't seen some of the old classic characters in a while.
AI Milo. AI Milo. Deploy. Jerk Vanderflaer.
Okay, all right. So the other thing I want to talk about, of course, is the artificial intelligence scare advertising drive is continuing and it now has demonstrably worked on the UK.
So, I'll start with this.
The Center for AI Safety has released another statement, which has now been signed on to by
not just that last group of people who signed on to the previous statement, from
another AI institute,
including Elon Musk and others.
This one includes signatories who are quite serious, such as Sundar Pichai.
Does it include Elon Musk? I don't know, actually. It does specifically include Sam Altman.
It includes the bomb-the-data-centers guy as well.
What's the second one? Yeah, yeah, it includes him, and the thousands of academics. And the statement says, it's very short:
Mitigating the risk of extinction from AI
should be a global priority alongside other societal scale
risks such as pandemics and nuclear war.
And number one, again, this has to be seen, in my view, as these guys advertising their products.
Also saying to-
We're sort of like, alive to the possibility of AI doing bad shit here, I would say.
But what strikes me is it's a problem of frame of reference.
We can only sort of talk about it in terms of like, it's going to do SkyNet, and
not in terms of like, it's going to make the self-driving car kill your nan or whatever.
And so, also, remember that these guys, this group of executives, is
kind of doing the global government tour, or at least the sort of Western government tour.
And they spoke before Congress a few weeks ago; they got all the real US politicians to listen to them.
They agreed it was too important for Greg Steube to do a dog and pony show.
They didn't get like the Make-A-Wish congresspeople to come and see that particular,
um, no, this time we got the guys who know what they're talking about.
Yeah, and the same thing is happening here, where after meeting Sunak and after meeting
several Labour frontbenchers, once again, everybody is roughly singing the same tune.
So Sunak met with Pichai, Altman, as well as a few others, and has basically said that
the AI white paper written by Michelle Donelan, which I think we talked about a few months
ago as well, is now completely out of date, because it talks about things like bias
and the implementation in workplaces, again, the actual very bad shit,
and they're not interested in that. SkyNet now:
when is it gonna launch the missiles?
And if you think that this is not confined to the Tories
at all: Labour for the Long Term,
which is a longtermist Labour pressure group, which we should talk about.
That's a horrible vibe.
You know, like Nick Bostrom and Keir Starmer hanging out and just talking about, yeah,
Christ.
Hey, I'm Milo, your turn.
So this is, they're all trying to enshrine.
Like everybody agrees on enshrining this kind of artificial intelligence driven
wiggism more or less into all forms of government.
So David Davis, another Tory, said: the whole question of responsibility and liability
has to be tightly defined.
Let's say I dismiss you from a job on the basis that the AI recommends I do so. Am I still
liable for that action?
That's a revealing example, I would say.
That's saying the quiet part a little bit louder than necessary, because most of what
we're worried about with AI is not so much SkyNet and more about it automating all of your
phony-baloney jobs, right. And the idea that, like, if I use this to do that, am I on the hook for it
in an employment tribunal or whatever, that's an interesting question to be asking at
this point.
If you get kicked off your corporate podcast because an AI replaces you, do you get severance? Are you obliged to like, if they make an AI out of your voice,
does that count as you training your replacement?
So this is the other way that a lot of these people see AI, which is as a layer,
that's how they want to see AI, as a layer between powerful decision makers and
the consequences of their actions.
Because you can say, oh, it wasn't me.
It was the computer.
The computer said no.
Friend, computer said no.
And here's labor.
Lucy Powell, Labour's spokesperson for Digital, Culture, Media and Sport, said the AI
white paper is a sticking plaster, relying on overstretched regulators to manage multiple
impacts, with huge areas falling through the gaps.
That's probably true.
But again, the areas she's talking about are SkyNet areas. And then Darren Jones from the
business select committee said, we need the UK to promote an international and host
an international AI agency along the lines of the IAEA.
The AI IAEA.
Yeah, the EIEIO.
And the IAEA, is that what we're doing here?
Yeah, that's such a, it's not a global concern
until you have a very unwieldy acronym.
And I mean, I'm not against that.
I'm not against regulation.
I'm definitely not against like world regulation
of these things or indeed much else. It's just, it's pretty clear
where the attempt is here to like, you know, get a lever over this before it gets started, right?
Well, what I'm noticing, what I noticed from the US, what I noticed here, is that
politicians on most sides of the political spectrum across the Atlantic, and also elsewhere,
like Canada, Australia and so on, are pretty much agreed that the main threat is SkyNet,
and that the way to prevent SkyNet, at least the way in the US they're talking about
preventing SkyNet, is onerous licensing for having a certain amount of compute.
And then over here, we're still mooting how it would work.
But I'll talk about that in a big way.
That's awfully convenient, first of all: we have to protect these guys' existing
business.
Once they're in, we're going to seal it off completely around them.
They would never do SkyNet.
Yeah, because otherwise, if we don't do that, if we don't enshrine them as a monopoly now,
SkyNet will happen.
So Altman basically gave his vision of what he thinks this regulatory regime should
be.
He says that governments have to work to smooth the integration of superintelligence with
society, either by setting up a...
It's not a thing yet, not even guaranteed
that it's going to be a thing,
not inevitable, but this is the thing, right?
I think this is an attempt to craft the narrative,
not just in a hype way, right?
I think this is a way that we have decided
what the program is gonna be, right?
And Riley, you and I have spoken about this,
this is gonna be the subject of the next episode,
which is the ideology and the ideological implications of that. But what strikes me about
this is it's an attempt to say super intelligence like general AI, whatever you want to call it,
whatever we actually get, whether it works or not, whether it works as intended, whatever
the effect of it are, we are going to call it that. We are going to understand it within the framework of those things.
Regardless of what it actually does, this is the narrative now.
Yeah, it's that if you are enough of a Whig, right, anything will seem like
a god that we built, eventually.
Yeah.
Yeah.
If you get the microchips small enough, if you add enough compute... and again, there are
lots of quite lively debates in the philosophy of mind and stuff about whether it really is just about
a sufficient amount of computing power.
But that's not-
There are lively debates in the philosophy of mind, sort of like, we don't know the answer to that. I'm not
against regulation in a precautionary sense because of course I'm not. But I think you
have to pay attention to the philosophical implications here of being, we are going
to regulate this ahead of time. And in so doing, we are going to very strictly define anything that happens is AI now.
Well, indeed, and I'll go through more of what they say they want. So Altman and co. have written
that they think, as you say, governments should work to smooth the integration of super
intelligence with society, or collectively agree that the rate of growth in AI capability should
be limited to a certain rate per year. It should never stop, but it should be limited to a rate per year.
But like, we've got a whole new line.
We're doing sort of like stonks on AI.
And that there needs to be something like the International Atomic Energy Agency for
like AI and compute.
So that idea has now just made its way into the regulators. These guys have
just walked into, like, Washington, into Westminster, did a little fun thing with
ChatGPT, like when we made those fake Keir Starmer speeches, and now they're like: just write the law,
just go ahead and write the law, I'm sure you'll do it well.
Keir Starmer saw one of the fake speeches and he was like, oh my fucking god.
Yeah.
We have to stop this shit right now.
He felt in that moment the same way I felt when we found out about the AI podcast.
So they also said what they don't think it should apply to.
These are the Sam Altman and Co.
They've said: today's systems create tremendous value in the world, and while they do have
risks, the level of those risks feels commensurate with other internet technologies, and society's likely approaches seem appropriate.
But if you're talking about like risks posed
by other internet technologies,
everyone is insane and depressed now,
because other internet technologies that we had
just basically wormed their way into people's lives
and gave them all mental illness.
Yeah, but all of that shit, all of the social trust shit,
that's not important because what we've got to be worrying about
is eight-foot-tall metal skeletons
to laser guns, yeah.
The technology we have has made a certain subset
of the population... like, I do think that, like,
fervent Q belief or whatever
is akin to being quite mentally unwell,
and we didn't need AI for that.
You know, that's the result of mass alienation.
That's the result of an incredibly weak society
that causes people to become terrified of one another
and to latch on to increasingly sort of deranged visions of reality.
And so the idea of saying, well, those risks feel commensurate with other internet technologies
that it feels like, well, hang on.
The risks of other internet technologies that we brought in were quite significant,
not because they were inherently risky, but because we are so unable to absorb even a small
shock to our incredibly fucking sick polity.
Yeah, so it says: by contrast,
the systems we are concerned about, they say,
will have power beyond any technology yet created,
and we should be careful not to water down the focus on them
by applying similar standards to technology below this bar.
Oh, I absolutely see where the fix is there.
If anything, I find that a little bit heavy-handed.
That's quite like, do not look behind this curtain, you know?
And also, we talked about, even if you don't want to talk about similar technologies,
similar, you might say, watershed technologies, like smartphones and social
media. If you want to talk about just AI, right?
If you wanna talk about the AI risks,
the AI risk is not super intelligence.
It's the fact that an eating disorder charity
fired all of its call center staff and
replaced them with a chatbot,
and the chatbot encouraged them to kill themselves.
Yeah.
That's a risk.
That's not superintelligence.
It's this intelligence.
It listened to too many AI podcasts.
Yeah, that is a real, actual-
That doesn't matter, that doesn't matter, because that's not an existential threat. On the, like,
long view, right? Me and Keir Starmer are out here fucking the 10,000 year clock, and
while we're doing that, we're thinking: that doesn't matter,
that doesn't matter. What matters is, like, you know, the survival of the species, which could be
jeopardized by this thing that's, like, infinity times smarter, which we're pretty certain is happening
any day now. Yeah, because, like, it doesn't matter if the AI that you're creating makes people
miserable or anything; you sort of want that to happen, because without that miserable population, who's going to listen to
your AI-enhanced podcasts? Well, that's an economic function too,
isn't it: it's going to immiserate people by automating their jobs and making the product of work
worse. Like, that's what it's for. But also, embedded in the ideology behind this
is the idea that the actual AI stuff
that is going to make people more miserable,
I think the eating disorder charity
is a really good example of that.
But even just all these types of AI tools
that you can see that are designed to optimize workflow,
for example, or will be implemented into Amazon warehouses
or in like various sorts
of other like gig economy jobs that are created out of this.
Like they don't see this as the existential threat.
And neither does like this government or the British government in particular who I imagine
are also quite interested in this in part because like they're sort of desperate for any
kind of tech industry to like latch on to, right?
Like, I mean, all I was going to say was, the impression
that I got from, like, Starmer and his team is that
they're interested in this.
I think I think actually one of them did say something about like,
oh, we think that Britain can be like a leader in sort of like AI
technologies. And so we have like an economic investment into this,
as well as like, you know, and part
of our kind of concerns about its risk is because we want to create like a healthy economy
in which, like, good AI can be created. Well, that was a very, very cheap
paraphrase of it, so I'm not sure whether they said exactly that.
But this is the reason that every British politician has to say that, of course, is it goes
even back to what we were talking about at the very beginning is the incredibly insular and self-referential structure of British
media and politics, which is that as soon as something is a big topic that someone says
something about everybody needs an opinion on it.
So we were going to be a world leader in cryptocurrency a couple of years ago if you remember.
Yeah, we were going to have Britcoin.
We were going to have a Royal Mint NFT.
And again, I think that the people who say that AI
is exactly the same as crypto.
Like, yeah, it's got the same kind of hype cycle
and a lot of the same people are into it.
But what I've always said is,
this is actually valuable to capital
because it allows them to discipline labor more, right?
But to the British politician, to the underdeveloped brain
of a British politician, it's all just the next shiny thing that we need to be a world
leader in, because it's happening, and we should be going where the stuff is. And that leads
us, of course, to the EU's regulatory stance on AI, which almost caused Sam Altman
to threaten, what, did cause Sam Altman, in fact, to threaten to pull OpenAI out of Europe, right?
Because they actually called his bluff, and I'm glad the EU did.
Yeah, but, and then the US vision,
which of course the US is never gonna be able
to properly regulate it,
because how do you coordinate that
between states and the federal level?
Is the UK saying, oh, we're gonna be the middle power.
We're gonna, we're going to have,
like, let a thousand chatbots bloom,
let a million data centers.
Where we use that.
Yeah, but every three months, someone with an eating disorder
is torn apart by an AI.
Yeah, but I mean, like, basing the AI IAEA,
or whatever we're fucking calling it, in the UK
would be a genuine geopolitical coup,
because it's gonna be a very influential thing
if it becomes a regulatory body.
On the other hand,
imagine basing it in this country with these politicians
and in a timeline where like all of them
have a Neuralink installed. You know, so this is, it's like, you take the place where we used to have, you know, the
factory, that got torn down, and what got built in its place was a call center, which got torn
down, and what was built in its place was an Amazon warehouse.
And that got emptied out, and now hosts a giant data center.
And you know, that's a little bit, I think, what the fantasy is, right?
Which is, if we just put enough faith in a wiggish view of technology,
we're not going to have to do difficult politics. We are just going to be, we're going to grab onto
the hype cycle at the right time, just like we did with the steam loom. We're going to do exactly
that again. They talk about it in those terms of like a new industrial revolution, right?
And not a more equitable one, by the way. When people say that, they just say 'new industrial revolution', and, like, interesting: what happened in the last ones?
Oh, don't worry about it. What was it powered by? Whose labor was required to create it?
Some kind of, I think, a rhombus trade in the Atlantic? I don't remember exactly what it was.
Anyway, by the way, in case you're interested,
the EU regulations essentially have
transparency requirements,
such as disclosing if something is generated by AI,
but also publishing summaries of copyrighted data used
for training.
Yeah, I really like this,
because like, I think it's a useful weapon against AI.
One of the only useful things you can use copyright for I would say is where did you get this
from?
You know, like chasing around a dog that has something in its mouth, like, where
is this from?
And then there are a number of things that they want to prohibit AI being used for.
It's mostly policing and biometric things, as well as emotion recognition systems, and scraping
to create facial recognition databases.
The things that cause AI companies to say, oh, we don't want to be there, because you
won't let us do anything fun,
and we'll have to say when we stole something.
But to round all this off, actually,
I want to read an article by Iain Martin in The Times,
which is entitled, To Defend the West,
We Must Win the AI Race.
Pausing is not an option.
That's a banger.
Thank you.
I am rubbing my hands with Glee.
Pausing is not an option when allowing autocracies
to gain superiority in these technologies could
spell disaster.
Albert Einstein... this is a very heavy-handed article, by the way.
Albert Einstein came to regret his warning to Roosevelt in '39.
The Germans had split the atom, he wrote, and the Nazis could try to create nuclear weapons.
Roosevelt quickly mobilized America's elite universities and authorized the Manhattan
Project.
The letter triggered a chain reaction in the US government that led to Hiroshima and Nagasaki
and the Cold War arms race.
With Einstein eventually telling Newsweek, had I known that the Germans would not succeed,
I would have done nothing.
Well, perhaps, but staying silent would have risked the Soviet Union or the Nazis getting
there first.
So basically, Iain Martin is saying, look, it turns out, you know, it was better to have the
world on the brink of destruction, and sort of still be on the brink of destruction.
Yes, still is.
Yes.
Yeah.
So think of the Cold War nuclear arms race as a kind of, well, I guess we should do it again,
but on the computer.
Like, if you actually believe that AI is gonna do SkyNet,
and your answer is just, well, I guess it's SkyNet,
I suppose, better our SkyNet than theirs.
If he is leaning into it then,
there is a sort of logical conclusion to this,
which is, if we're doing balance of terror stuff with AI,
the same as we do for nuclear weapons,
you can't have a mutually assured destruction
without the mutual part.
So what you should be advocating for,
and I expect to see this in the coming paragraphs,
is a ruthless program of spying
on the part of the people's Republic of China
to build up indigenous AI capacity
so that like everyone has it
and now we're in sort of like balance.
Because it's most dangerous if like, you know,
we have it and they don't, right, under that logic. So.
Well, the argument of course here being that the most dangerous part of the Cold War was just
after 1945, just before the Soviets had their own bomb. Yeah. When the US was like, we can do
fucking anything. Yeah, absolutely. Yeah. So it says: the bomb, and the debate over whether
its invention was unstoppable,
will be big this summer.
Through their labors, mankind acquired the godlike capacity
to all but eliminate life on earth.
And I can't, I mean, how do you,
how are you an AI booster?
How do you read this as an AI booster
and think what I am doing is good,
unless you have the belief that they all have,
which is the premise of, almost the premise of the Turing letter: that a self-improving
algorithm has taken on a life of its own outside of humanity and, almost like a kind of
world spirit, is going to instantiate itself no matter what.
And so at least we can trust ourselves to do it responsibly.
It is a belief in, and is genuinely a belief in the ghost in the machine that is trying
to exist.
It's all different levels of, like, Elon Musk hitting on Grimes with the Roko's Basilisk
thing, which neither of them really understands and which is stupid anyway.
And I always go back to this idea
of AI as the opposite of the printing press, right?
Because what actually,
let's leave aside the SkyNet thing, right?
If you just say,
if you talk about the risks that Ian Martin is talking about,
he says,
in 20 years our food may be managed by AI.
Decisions about what to grow,
whether to manufacture more or
fewer fish fingers, and where to deliver them, will be made
with little human involvement.
Replicate this for the management of the power grid
and undersea cables, which are all eminently hackable
and all requiring robust national defense.
To which I then have to ask: why then cede them?
Why is it that the working out of the machine spirit
has to then be given undersea cables,
national equipment and fuel?
Well, it's inevitable.
It's inevitable because of how much more efficient
it would make the fish fingers.
And I'm often saying the fish finger distribution
is inefficient, so no choice, but to do SkyNet.
I feel like, see, the impression I'm sort of
getting from this is very... I think you were sort of touching on it,
like, not long ago, where it's about, like,
the race to have the take on it.
So he, he's imagining a type of AI
that is basically self-governing, right?
See, it sort of seems like the fear,
which is very much like the Roko's Basilisk fear,
is just like, oh, this machine,
this system, will just kind of, like, self-govern and have all the sort of, like, existential threats
that come with it, and therefore...
But like, to me, this is really confusing
because it's just sort of like, yeah,
but people do manage the AI and people do create it.
And like, you know, there is sort of like a lineage
that sort of goes into the design of it.
And the AI that you're imagining is not really the AI
that exists and is being advocated for.
If anything, the AI that you were imagining,
like the actual sort of AI boosters who are,
all the AI tech guys that are going to benefit from it,
don't want that either.
So it feels like you're sort of talking about the wrong thing.
Like so many Times writers, what he's doing is
he's reacting to the marketing poster
as opposed to the thing substantially.
And this is why it sort of feels very jarring
because it's sort of like,
but yeah, you're talking about something
that isn't going to happen,
not because it doesn't have the potential to,
or that like the technology isn't in,
because it could very well,
like that stuff could develop.
It's that it's not going to develop,
for lots and lots of reasons that
aren't really about tech and are much more to do with who accumulates capital
in the tech space.
Like in any natsec thing, the angle here, I think, is: well, obviously we wouldn't make SkyNet. But what if China made SkyNet?
Because I don't know, they just, they just felt like it.
If the West and its allies fail to emerge as the winners of the AI race, yes, of course,
because they're racing to make SkyNet because we're racing to make SkyNet. So we have
to race faster to make SkyNet. Of course.
SkyNet's getting made. Well, first of all, let's just, let's just listen.
Listen, there's going to be a SkyNet.
And when there is, we'd better have an American flag on it.
If the West and its allies fail to emerge as the winners of the AI race,
we'll be at the mercy of dictators who can swarm us with 20,000 drones
communicating with each other, rather than, you know... hold on, hold on.
We're already at the mercy of dictators, because, like, as we have seen,
there are dictatorships with nuclear weapons.
Yeah, but what if, what if worse?
You know, what if it was world-destroying? That'd be worse.
Uh,
Guy net.
Big scary skeleton terminator.
I mean, it's okay.
There already is the balance of terror.
How do you, as you say,
yeah, how do you make it more?
Like, vast computer power can relentlessly search for weaknesses through which to launch cyber attacks or shut down our financial
systems, ruin our way of life. I will say this to undercut myself a bit:
we already have the balance of terror at home, because we already have nuclear weapons,
most of which are controlled by like,
a guy with a switch under a bunker in Montana or whatever.
Like, you can't say,
oh, well, you know, if China invents SkyNet,
they're gonna like entirely fuck us over
and we'll be powerless because what if we say,
okay, well, if you do SkyNet, who else will nuke you?
What if that?
Well, that actually, again... that already is the policy.
Like, it's already pretty clear that,
we've already made very clear that that would be an act of war,
sure, of course.
So I don't really...
So what's new?
Nothing other than just like we have to do Skynet
because it would make me feel bad if we didn't.
This is an example of someone upon whom the marketing
of, like, the scare-vertising has worked entirely.
And of course it has, because this is a British media creature.
Like this is a form of life upon which advertising fucking works.
Yeah.
Much of the debate has centered so far on the question of whether the machines will
come to be cleverer than their masters and develop feelings and desires.
My concern is not so much the questions of machines becoming sentient.
Um, uh... human sentiment and feeling will still be powerful in art, culture, creativity,
the search for the sublime, etc.
There are enormous ethical dilemmas confronting our leaders.
We hear a lot from them about climate change and from the global industry dedicated to stopping it.
But this, the need to beat autocracies in the AI arms race and to work out, in the process,
how we might retain human control, is bigger than that, and the key challenge. It's a dangerous race.
And we have to win.
We are going to break 1.5 degrees of warming like two months from now.
How is this the bigger problem? Also, who says we're staying democracies, given the sort of, like,
incipient rise of the far right, more or less everywhere? Like, what if it just
becomes a contest between like two different forms of autocracy? I won't care
because I'll be getting bulldozed into a mass grave at that point, but like,
just as an academic question, what if that?
American flag on it. Britain's is gonna have a Union Jack on it.
On our British SkyNet.
Well, the thing is I always figured that like if America
went like explicitly fascist or Britain went explicitly
fascist, we would do the sort of Hearts of Iron thing
and change the flag to look more evil.
Yeah, we're gonna have a big... no, we're gonna have
a big computer chip on it instead of, like, the combination
of crosses, because that's gonna be
actually more dangerous.
Damn, crazy.
Anyway, anyway, I just sort of have a last thought.
Okay, great.
Which I can pitch to the Times as a column, which is: when are we going to have one of those
international hazard symbols, like the radiation triangles, but for, like, computer cognition? When are we going to have a sticker that
you can, like, slap on the side of a computer case to indicate, hey, this thing's got feelings.
All right, all right. Well, let's just... yeah, let's make one of those and sell it as
stickers. This computer has mental health. Yeah. A triangle with, like, "this computer has mental health"
printed. Patent pending.
Check on your, check on your chip.
So I've already know.
Check on your chip.
Perfect.
Okay.
Can't do it.
Anyone making a shirt of that,
we're going to fucking sue you.
Yeah.
We will take that as an act of war.
Yeah.
Yeah.
We will we will nuke you.
Anyway. Yeah. Just to round all this out, though:
it's something that, once you see it, you will notice everywhere in these discussions.
Every time the dangers of AI are talked about in government, the discussion is steered away from identifiable impacts on identifiable people and onto theoretical impacts on theoretical people, where the theoretical impacts are so extreme that
they outweigh the identifiable impacts on identifiable people.
And it is nothing new under the sun.
It's something that our various governments have done more or less forever, right?
But, you know, you're not going to stop seeing it.
Anyway, I think that's all there is for today.
So I want to thank you all for listening.
Thank you, my wonderful co-hosts for being here with me.
And remind you that there is a Patreon.
It is for $5.
You can get an extra episode every week.
For $10.
You can get an additional two on top of that. An extra Britonology and an extra writtenology.
We should add a sort of like a very high tier, like a sort of million dollar mark where
you just gain access to the AI that like replicates us.
You can just generate as many episodes as you want to listen to.
You can't release them though.
I mean, the future is also just, like, getting your favorite hosts of one podcast onto an
episode of one of your other favorite shows, right?
Oh, yeah.
You could get us to do one of those.
You can get...
Well There's Your Problem.
Yeah, a Well There's Your Problem crossover episode.
Fuck.
Vasek, Vasek, that's going to happen as well.
Shit.
Yeah.
If you don't, I'll just... we'll just go on each other's podcasts.
You don't need to generate it with AI.
Anyway, anyway, anyway.
The Pod Save Jons are gonna do Trash Future.
And what do you say A.I. Liam?
Yeah.
Ah, anyway, um, other things.
Yeah, other things.
Soy face noise.
Ah.
Other things.
There is a stream Thursdays and Mondays on Twitch.
You can watch it.
Watch it by plugging slop.delivery into your browser.
That's right. Milo is probably doing shows. You can go on his website. I'm sure it will be linked.
Anything else? Anything else? I think we're good. All right. We will see you on the bonus episode
in a couple of days. Bye everyone. Bye.
I'm gonna start taking up.