Behind the Bastards - Part One: Tech Bros Have Built A Cult Around AI
Episode Date: January 30, 2024
Robert sits down with Ify Nwadiwe to talk about the weird cult that's formed around AI, and some of the most powerful people in Silicon Valley. (Adapted from his article in Rolling Stone: https://www.rollingstone.com/culture/culture-features/ai-companies-advocates-cult-1234954528/) (2 Part Series)
Sources:
What Makes a Cult a Cult? | The New Yorker
The Wide Angle: Understanding TESCREAL — Silicon Valley's Rightward Turn | Washington Spectator
We need to examine the beliefs of today's tech luminaries
This A.I. Subculture's Motto: Go, Go, Go - The New York Times (archive.is)
Effective Accelerationism — e/acc - e/acc newsletter (archive.is)
The Techno-Optimist Manifesto | Andreessen Horowitz (archive.is)
https://abtinmus.medium.com/nick-land-for-dumbies-technological-singularity-and-capital-as-a-runaway-system-4fac0c9a163e
https://web.archive.org/web/20141007023855/http://alternative-right.blogspot.com/2014/10/hyper-racism.html
https://www.journals.vu.lt/problemos/article/view/25345
https://regressstudies.substack.com/p/technocapital-is-eating-my-brains
See omnystudio.com/listener for privacy information.
Transcript
I'm Nicole Perkins and in my new podcast, The Godmother, I'm inviting you to 1930s New York.
I want to tell you the extraordinary story of Eunice Carter, the trailblazing black female lawyer
who put New York's most notorious gangsters behind bars. Somehow, she's been largely forgotten.
Listen to The Godmother with me, Nicole Perkins, on the iHeartRadio app, Apple Podcasts,
or wherever you get your podcasts.
What up guys?
Hola, qué tal?
It's your girl Chiquis from the Chiquis and Chill
and Dear Chiquis Podcasts.
And guess what?
We're back for another season.
Get ready for all new episodes
where I'll be dishing out honest advice,
discussing important topics like relationships, women's health and spirituality.
I'm sharing my experiences with you guys and I feel that everything that I've
gone through has made me a wiser person.
And if I can help anyone else through my experiences, I feel like I'm living my
godly purpose.
Listen to Chiquis and Chill and Dear Chiquis on the iHeartRadio app, Apple
Podcasts, or wherever you get your podcasts. ...ban of human civilization in Ridiculous History. Join us to hear the many ways history can be
oh so ridiculous. Listen to Ridiculous History on the iHeartRadio app, Apple Podcasts or wherever
you find your favorite shows. Welcome back to Behind the Bastards.
I don't know why I did that kind of like,
it was like I was doing a Halloween opening.
Didn't work at all.
Horrible idea.
But I'm very happy to announce our guest today,
back after a long hiatus from this show,
but not from our hearts,
Ify Nwadiwe!
Hey!
How's it going?
I'm happy to be back, yeah.
So happy to have you back.
Ify, you are a significant part of the Dropout,
a wonderful channel on YouTube
that is the survivor of CollegeHumor,
which is kind of like,
we're Ken in the comedy, internet comedy world.
Oh yeah!
And you guys have been doing
some really cool stuff lately.
Like some of the funnest, like most interesting,
like quasi-game show comedy bits.
Like I've watched hours of it, yeah.
Yeah, I was, you know, I was talking to Sam and Dave,
the kind of like higher-ups there, which is, you know,
which is a testament to how cool the company is.
Cause I haven't even spoke to the higher ups at ICC
or any of the other companies.
But I was like, oh yeah, this is like where
like panel comedy lives.
And now after midnight's back,
but I feel like Americans run away from panel comedy shows.
And I like an excuse to riff and goof with friends.
Yes.
And I think that YouTube and the kind of,
what people get out of a lot of like streaming stuff too,
like not like streaming TV, but like, you know,
streamers is that goofing with friends thing.
Like I watched Red Letter Media's stuff
for a lot of the same reason.
And I feel like there's a way to save that stuff.
It's just not putting it on television
at 11:30 at night necessarily.
Right?
Yeah, yeah.
People should be a little more lucid when they're taking in that kind of media.
But where else are you going to watch, you know, block?
I was trying to think of the name of the show.
It was the one with the guy, he has the, it's not a top hat.
It's like a fedora almost. And he's a detective and all dads love it.
Bosch, Bosch is the most famous.
Yeah, Bosch.
Oh God, yeah, I've never seen Bosch.
My mom loves Bosch.
I don't even use the tools that Bosch makes.
Yeah.
They love Bosch.
Ah, the Bosch heads, yeah.
I would laugh, but I just got into Reacher,
which is like, you know, it's the stupidest thing on TV, but I love it.
It's like I used to watch a lot of Walker, Texas Ranger as a kid.
It's the inheritor of that, but not racist, or as racist, I guess.
Yeah, I'm not gonna say not at all.
Look, that's why I'm not even gonna judge,
because I remember one time, me and M, we were up in the cabin in Wrightwood,
and we needed to kill time,
and we were like, let's watch this 9-1-1 show, whatever, just see.
And we were like seven episodes in, locked, we were like, hold on, this is, this kind
of goes.
Angela Bassett knew what she was doing.
Oh yeah, no, she's a, she's a pro.
And you know, you can't get work like that or work like you and your colleagues do at
the dropout without human beings actually doing things, which is why today we're here
to talk about AI.
And particularly we're going to talk about how a lot of the conversation and a lot of
the fandom around AI has turned into more or less a cult.
So that's the premise of this episode.
Ify, I want to start by going back to a place I was about a week ago as we record this, the
Consumer Electronics Show.
Every year they've been holding it for a long time now, and it's kind of how the tech industry
talks about itself and what it has planned for the future.
A lot of it's hype, you know, they're kind of talking up what's coming out that they're
hoping will get buzz, will make money, but you also get an idea of, like, what do you think we want and what are you trying to get us excited about?
And I think the most revealing product that I saw this year was the Rabbit R1. It's a little square-shaped gadget.
It's got a screen. It's got a little camera that can swivel
And it's an AI that basically you talk to like you would an Alexa, but it can use your apps.
And it's supposed to reduce friction in your life
by basically routing every move you make online
through this machine intelligence.
So you tell it what you wanna do and it does it
instead of like you using like physically
using your smartphone as much.
You still have to click it sometimes.
And I wanna play you a clip of this
from the CEO Jesse Lyu's keynote speech.
Our smartphones has become the best device
to kill time instead of saving them.
It's just harder for them to do things.
Many people before us have tried to build a simpler
and more intuitive computers with AI a decade ago.
Companies like Apple, Microsoft, and Amazon made Siri, Cortana, and Alexa with these smart speakers.
Often, they either don't know what you're talking about or fail to accomplish the tasks we asked for.
Recent achievements in large language models,
however, or LLMs, a type of AI technology,
have made it much easier for machines to understand you.
The popularity of LLM chatbots over the past years
has shown that the natural language-based experience
is the path forward.
Now, I don't know that I entirely agree with that
because I think the biggest influence
that these chatbots have had on me
is that whenever I try to deal with like an airline
or something, I get stuck on ChatGPT
and it's a pain in the ass to do anything.
I, what's so funny about the rise in AI right now
is like, if we really think about it,
and God damn it, Robert,
you threaded the needle right there.
When you really think about it,
all AI is, is just the evolution of the shittiest part
of calling customer service.
Yes, yes.
Like all the, the one thing that we do as soon as we call
is like zero, zero, zero. Let me get straight to it.
Let me talk to a human.
And these, these like eggheads are like, what if we did more of that?
What if we remove your solace from the situation?
Of your life?
And it's, it's the kind of thing, he seems so off from my experience
where he's like, the problem with phones
is that it's too hard to do things.
No, it's too easy for me to order a bunch of junk food
and have a stranger deliver it.
That's been a problem for me, right?
Oh yeah.
It's too easy for me to waste six hours on Twitter.
Like that's what the issue is.
Do you know how much the easiness of everything
has made me wake up the next morning
with cold food on my porch that I never touched?
Because it's too easy.
It was too easy.
Sometimes we need those bumps in the road
to keep drunk, stoned iffy,
from ordering a triple double burger
that I'm hoping gets to me before I fall asleep.
It never does.
I've had so many problems with my phone,
but I don't think I've ever had the problem of like,
this thing's just too hard to use.
It's just not a problem.
I don't know that anyone has.
But Jesse complains that there's too much friction
with smartphones and his device,
that the Rabbit is gonna let you, like,
you can just tell it, book me a flight,
or a hotel on Expedia, and you don't even have to know,
it'll just pick a hotel for you a lot of the time,
or what flight it thinks is the most efficient.
Jesse's goal is to basically create AI agents
for customers, which live in this little device you wear
and act as you online to handle tasks you'd normally
use your phone to do.
So you can tell your rabbit to book you an Uber,
you can have it book you a flight,
or you can have it plan your trip to a foreign country.
Ooh, that sounds really bad.
Yes, it sounds so fucking bad.
Rabbit's like, hey, how would you like a middle seat
on the airline?
Yeah, middle's efficient as hell!
Yeah.
Spirit's so cheap!
Now you can direct it more, but then that just seems like,
well yeah, that's what I'm already doing on my computer.
Why is it easier just to, like, work through a vocal chatbot
that might not understand me?
Or at least will be as much friction as, like,
yeah, when I touch my phone and it hits the wrong thing,
right, I just don't see that I'm saving much here.
No one also seems to know how Rabbit's gonna integrate
with all these apps,
because that means their device has to have access
to them for you.
And that's kind of a big ask
for all of these different companies.
That said, no one knows, by the way,
how secure it's going to be,
but no one at CES was listening either,
because the first 10,000 pre-order models
that opened at CES sold out instantly.
Wow.
That doesn't mean a lot of normal people are going to buy it.
It means a lot of tech freaks wanted this thing.
Yeah.
That is the thing too is like, yeah, if you're at CES, you're already, you're already drinking
the Kool-Aid.
Yes.
I, you know, and I'm very much, it's so funny because I'm very much that guy or I was very
much that guy who wanted to be on that cusp of technology.
And I feel like, you know, in my early twenties, it seems so cool because you're like, yeah,
I want to be Iron Man.
I want to just have full control.
And then you kind of get to the point where you start, if you have enough self reflection,
you notice that you're just kind of,
you're using technology to do things
that you could just do much simpler and cheaper.
Like for example, like when you get these apps
and then I'm like sitting in a photo editor for like 50 minutes
just to make it look like it was shot on film.
And I was like, I can buy a disposable cam.
Like that exists.
And what's great is when you now, when you develop them,
they put it, they email it to you.
So it can go on Instagram.
It's gonna take longer, but arguably
you might get some post flick clarity.
Maybe that pic that didn't look good
isn't now on the internet.
Or you know, you can like spread it out.
But I am getting very like
You are so much more thoughtful in that than like anybody involved in this product has
been over the course of their entire life.
Now, a couple of skeptics who have given reviews have already noted problems. Richard Lawler
of The Verge was like, this thing is not built for left-handed people to use. Like, they forgot
that left-handed people existed,
and so they designed it in a way that's specifically
a pain in the ass.
Which, yes.
There's also, they brag, they have this camera
that can like move on its own so it can cover stuff
in front of it or behind it.
And a commenter on Lawler's article was like,
it's a pretty fundamental design principle
that you don't add moving parts if you don't need them.
And there's plenty of space in this
for a camera in the front and the back,
which is one less point of failure,
one less thing for shit to get gunked up.
And this is actually bad design.
They're bragging about this, but it's a bad idea.
Yeah.
Yeah, there's a couple of other issues in there.
You know, we'll see.
It looks like it's gonna be a lot thicker than a smartphone.
I just don't know the degree to which a regular,
it's the same Google Glass issue, right?
Yeah.
Do you want a second thing that you have to keep on you, like alongside your smartphone?
That was just going to be my question, so I'm glad.
So is this something that should be replacing your phone or is this another thing that you
are supposed to be holding on to?
I think the goal is for it to eventually replace it, but at this point, you will still need
to have both.
So like carry another thing, you know,
you know, walking around like a drug dealer, you have two different.
It's kind of it's a big device and it's not tiny and it has a big screen.
So it's just like, well, you just made a different kind of smartphone.
Again, what am I gaining?
Yeah, it's what he's holding in this video, correct?
Yeah, yeah. Okay.
Well, yeah, and what's more, he's like that on-top-of-tech person. Yeah, definitely the person who carries a power bank with them.
So there's a third thing
So that's three things you're rocking with. I know, I have so many power banks, and oh, yeah,
And ADHD has blocked every single one of them from
being useful because either I forget to charge them or I forget to bring them.
It also just doesn't look comfortable.
It's bigger than his hand.
Why do you want to hold that?
Yes, it's not teeny.
We'll get a better look at it in a second because I want to show you when he's trying
to make the case for this. He spends a significant portion of his not very long keynote
on him Rickrolling himself. Ify,
you gotta fucking watch this, it is so painful.
R1 has an eye, an onboard camera designed for advanced computer vision.
It can analyze surroundings and take actions in real time.
To activate the eye, just double tap the button.
Oh, funny seeing you here, Rick.
It's a picture of Rick Astley that he points it at.
Let me take a look.
I'm not gonna give you up. I am now.
Enjoy.
What?
Am I getting Rickrolled in my own keynote?
Let's move on to the next one.
Okay.
I need every tech keynote speaker to take an improv class.
Yes, one.
One.
You're like-
And I don't recommend that to normal people.
Yeah, because you're like no selling your own joke.
Just like, what?
So you're not even laughing at it.
Why am I gonna laugh?
I will say that was scripted almost exactly
the way Tim Robinson would have written it.
Oh, man.
But I bet Sam Richardson could have delivered that bit better.
Oh, 100%.
Wow.
So that is very funny, but I find this next clip more disturbing because it shows this
kind of desire that the people that are the early adopters here have, not just for more convenience, but to hand over like the
power to choose to a robot that's basically just pulling the first advertised result from
Google.
Like it's kind of messed up.
Oh, I can also use R1 to order food.
Get me a 12 inch pizza from Pizza Hut delivered to here.
The most ordered option on the app is fine.
Ordering a 12-inch pizza from Pizza Hut.
Since you mentioned that the most ordered option is fine, I will select that for you.
I just created an order for a 12-inch pizza.
It's going to be hand-tossed with a classic marinara sauce and topped with
regular cheese. Please confirm your order.
That sounds really good. I just confirmed order here.
And he has to click it again, just like on a smartphone. He has to look at the device
and click it.
Here's a freebie for anyone, even Richard Lawler, who has been in the game for a long time.
But like any of your tech bloggers,
just do a side by side with this video
and physically do everything he's doing real time.
Because a pizza would have been ordered way faster
if you would have just pulled out your phone.
Well, and you could get the pizza you want.
Yeah, I don't want the 12-inch classic marinara most ordered pizza.
I want the shit I actually like.
That's so weird.
Yeah.
Yeah.
Also, who orders a 12-inch pizza from Pizza Hut?
Nobody.
Yeah.
What?
There's no way that's the most ordered product from Pizza Hut.
I don't believe it.
No, it was definitely paid nonsense.
Yes.
Yeah, it's just like, yeah.
And there's the next clip, I don't think we'll actually play it, but like it is, it's him
saying, hey, plan out like a three day vacation in London for me.
And as far as I can tell, the AI goes for like the first top 10 list of things to do
in London that it finds, which was probably written by an AI and then makes an itinerary
based on those.
And it's like, first off, are you that basic?
Second, planning a vacation is fun.
Is that not a thing that you want to do?
Yeah.
You're so right.
Why would you?
Because the reason you would go to a travel agent
is because they are experts at it.
They're going to find the most fun thing for you.
But outside of that,
yeah, I wanna plan the cool stuff I'ma do, you know?
And what about people with fears, you know?
Or people without skills,
which is definitely gonna be a large margin
of people who do this.
So you're in London now,
and then it takes you on a trip to Malta
to go scuba diving.
You don't even know how to swim.
And now you're sitting there and you're pissed, and they're like, you already paid for it, bro.
You let your fucking thing plan this for you. It's on you. It's silly, right?
And I don't want to be, I'm going to say this is not the most direct parallel
to cult shit we'll have, but watching this, I couldn't help but think about a cult
that was like the subject of our second episode for this year, the Finders.
And it was one of those things, the guy, Marion Pettie, who ran it, was like, running games is
the way he framed it and people would join the cult and give up their agency and he'd
tell them, go take a job in this city or like go follow this guy and take notes on him for
a year or like have a kid and raise it this way.
And this is stuff like this is really common within cults.
One of the appeals of a cult to a lot of people
is that you both get a sense of purpose
by following the cult and whatever things
it says it's gonna do, and you give up the burden
of having to choose a life for yourself.
And this is such a common thing in cult dynamics
that psychiatrist Robert Lifton,
who's kind of one of the foundational minds
in studying cults, described it as voluntary self-surrender
and it's a major characteristic of a cult.
Many of the finders were not, you know,
these are not dumb people, these are not like Rubes,
these are not Hillbillies,
as they're often portrayed in our popular media.
A lot of the finders had Ivy League degrees,
one of them owned an oil company
and these guys still handed their
lives over to a cult leader. Haruki Murakami, writing about Aum Shinrikyo, which is the cult
that set off a bunch of poison gas in the Tokyo subway, noted that many of its members were
doctors or engineers who, quote, actively sought to be controlled. I found a lot of this
information on the fundamental characteristics of what makes something a cult in an article by Zoe Heller published for The New Yorker back in 2021.
At the time, she was kind of looking at QAnon and trying to decide there's not like a clear
guy and that's the cult leader and there's not like a geographic center to this and usually
there is with cults in history, does this still qualify?
And I think a lot of people would agree that like, yeah, it does.
I think a lot of experts tend to agree that, yeah, it does.
And when she was looking at the QAnon movement as a cult, Heller noted this.
Robert Lifton suggests that people with certain kinds of personal history are more likely
to experience such a longing.
Those with, quote, an early sense of confusion or dislocation, or at the opposite extreme,
an early experience of unusually intense family milieu control.
But he stresses that the capacity for totalist submission lurks in all of us and is probably
rooted in childhood, the prolonged period of dependence during which we have no choice but
to attribute to our parents an exaggerated omnipotence. And I found particularly the bit where
he's talking about how, yeah, an early sense of confusion or dislocation makes people
crave giving up this kind of control and responsibility.
The people running these AI companies and maybe not necessarily the very top because I think those tend to be pretty cynical
realistic human beings.
But like a lot of the people who are in them and a lot of the people who are
latching on to AI as a fandom online are people whose childhoods and adolescences, like all of ours, were shaped by 9/11.
The dislocation and change that that caused.
And their young adulthoods, a lot of these people like us
will have come of age around the time of the 2008 crash.
Many of the people who are younger in the AI fan base
are maybe Zoomers and stuff.
And a lot of them are people who have really ugly ideas
about like artists shouldn't charge for shit or whatever.
But also, these are people who, a lot of them came into their careers, went into STEM fields
because they were told coming up the tech industry is the safest place to make a good
living for yourself.
And that all fell apart a couple of years ago, right?
It started to at least, tech layoffs began.
So again, dislocation, chaos, the sense that like, what else am I going to entrust my life
to? I thought I had a plan and it fell apart
Yeah, you know, this is where Ify's gonna get real
philosophical and big-brained, but I just finished All About Love by bell hooks.
And you know, she often talks about, like, the wandering life with lovelessness and that searching for it and not having it.
And I feel like that goes hand in hand
with what you're saying, right?
Where it's like, I want a sense of belonging
and I want to feel like I'm a part of people.
And whether that is running into Target
and trampling people for Stanley Cups
or it's being a part of like what you perceive
to be the next big thing.
Like I think that is the biggest kind of selling point
for a lot of these AI people who's like,
this is the future.
Like that is almost every person who starts a 50 tweet thread
with shitty examples of why AI rocks starts it with,
this is the future and you just gotta get over it.
And there's so many people who just wanna be
on the ground floor of that.
They want to be the people who were on it because how many times, even I, you know, when you
have that like time machine question, you're like, ooh, stock in Starbucks, ooh, uh,
Apple, you just want to be there before it gets big.
So when it, you really, at the end of the day, it all comes down to commerce.
You want to be at the top when it all shifts.
And that is actually the danger in this for me
is the commerce.
I think about it often because, like you're saying,
it like orders the top Google search.
Google is currently in court right now fighting against,
basically shaking down companies to see
who would be the top one.
So like the future of this being actually a, you know, a useful app kind of lives now
in that case, because if Google wins and they can put whoever's on top, that's only going
to make it more valuable where they place who's on top because people are using these
weird rabbits, you know?
Yeah, exactly. It's, yeah, it is, to them, they see the beginning of the future.
And I feel like to me, I'm just looking at all the ways it can be abused,
because if we just look at everything that has come before us,
we have to think of the ways that it has been abused.
And all the ways it'll be a worse future, you know?
And I think I really like that you brought up
the panic they try to incite in the rest of us,
the like the FOMO where it's like, this is the future.
Get on board or you're gonna get left behind.
Yes.
That is the cult recruitment tactic, right?
And what they're trying to do, I just brought up
that a lot of the people who are most vulnerable to this
are the folks who like,
yeah, they have this sense of insecurity, dislocation,
and they see getting on board with this early,
they feel like a sense of security there.
And by saying, you're gonna get left behind,
this is the only way forward,
you won't be competitive if you don't embrace this stuff,
that they're trying to induce that sense of fear
and dislocation to make people vulnerable.
And I wanna read another quote from that New Yorker article.
The less control we feel we have over our circumstances, the more likely we are to
entrust our fates to a higher power. A classic example of this relationship was provided by the
anthropologist Bronislaw Malinowski, who found that fishermen in the Trobriand Islands off the
coast of New Guinea engaged in more magic rituals the further out to sea they went.
And I think we all feel like we're getting further out to sea these days, right?
Like that's, that's, it's not hard to see why like, yeah, I'm near the shore.
What?
I don't believe in anything but what's right in front of me.
And then like, you can't see anything but water and you're like, no, there's a God and
I can keep him happy.
Yes.
Yes, yes, indeed.
It's tough.
Yeah.
Everyone's just kind of grasping at what they can to just bolster themselves.
And sometimes you're grasping at some weird stuff.
Yeah.
Yeah.
Yeah.
And it's, you know, it's noted often by, I think, a lot of particularly atheists on the
internet that, like, church attendance is down, as is the number of people who identify as part of an organized religion.
Like, that is at its lowest level basically ever.
And this is true.
These are real trends and they have they have real effects.
But I don't think the fact that fewer people are religious in the traditional sense means they're less superstitious or spiritual than ever.
It's just that what they invest with that belief has changed,
in part because they've seen the world dislocate so far
out of what most priests and other sort of like religious heads
are capable of sort of explaining or comforting them over, right?
It's like religion is less comforting in a world as advanced as ours.
For most people. Now, this may seem like a reach still, to kind of call
what's going on around AI a cult.
And I get that.
I ask you to bear with me here.
And I do want to note, there's nothing wrong
with the inherent technology that we often call AI,
or at least not with all of it.
Because it's used as such a wide banner term.
Some of the stuff is very benign, just like a text recognition program
that can listen to a human voice
and create an on-the-fly transcription.
That's an AI, that's an example of that kind of technology,
right, like it gets folded in there,
that's one of the things that AI has to do,
recognizing language and like facial recognition too,
recognizing faces, if you're ever gonna have
an actual artificial intelligence,
those are two of the baseline capabilities that it needs.
Chatbots obviously are a big part of this,
along with like the sundry tools that are being used
now to clone voices, to generate deep fakes,
and fuel our now constant trip into the uncanny valley.
CES featured some real products that actually did harness
the promise of machine learning in ways
that I thought were cool, as I noted
on It Could Happen Here.
There's this telescope, it uses machine learning
to basically clean up images that you take with it at night when there's like a lot of light pollution.
So you can see more clearly.
And I'm like, yeah, that's dope. That's great.
But that lived alongside a lot of nonsense.
You know, ChatGPT for dogs was a real sign I saw.
And like there is an AI-assisted
Fleshlight to help you not be premature.
I was going to bring up because of course, that's the one that popped on my top nine.
It was like, and then they gamified it where you go to different planets.
You defeat the planets.
So I'm like, well, what do you, you keep talking about beating the planets.
So how do I lose?
Is it when I bust?
If I bust, you lost?
Cause you're now introducing shame to sex again.
And I thought we finally got out of
it.
Yeah, I thought we were moving past that.
Yeah, I can't beat level one.
Those kind of bad ideas, that's all par for the course for CES.
But what I saw this year and last year, not just at CES but just over the year in the
tech industry from futurist fanboys and titans of industry like Marc Andreessen is a kind of unhinged messianic
fervor that compares better to Scientology than it does to the iPhone.
And I mean that literally.
Marc Andreessen is the co-founder of Netscape and the venture capital firm Andreessen Horowitz.
He is one of the most influential investors in tech history, and he's put more money
into AI startups than almost anyone else.
Last year he published something called the Techno-Optimist Manifesto on the Andreessen
Horowitz website.
On the surface, it's a paean to the promise of AI and an exhortation to embrace the promise
of technology and disregard pessimism.
Plenty of people called the piece out for its logical fallacies.
For example, it ignores that a lot of tech pessimism is due to real harm caused by some
of the companies Andreessen invested in, like Facebook. What's attracted less attention
is the messianic overtones of everything Andreessen believes.
Quote, we believe artificial intelligence can save lives if we let it. Medicine, along
with many other fields, is in the stone age compared to what we can achieve with joined
human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from plane crashes
to pandemics to wartime friendly fire.
Now he's right that there's some medical uses for AI.
It's being used right now to help improve the ability to recognize certain kinds of
cancer and there's the potential for stuff like in-home devices that let you scan your
skin to see if you're developing a melanoma.
And there's debate
still over how useful it's going to be in medical research. I've talked recently to some experts,
and I've read some stuff that like there are some reasons for caution too. For some of the same
reasons we should have caution with this everywhere, there's also disinformation that spreads medically with AI, even to doctors. And some of the patterns that using this stuff gets medical professionals into can make them discount certain diagnoses as well.
So I don't say that to like say
there's not gonna be some significant uses
for some of the way this technology works medically.
Some aspects of AI will save lives.
It's just the evidence right now doesn't suggest
it's going to completely revolutionize medical science.
It's another advancement that will be good in some ways
and there will be some negative aspects of it too, right? It's also very much not fair to say that like we're going to
reduce deaths for human beings as a result of AI because right now the nation of Israel is using
an AI program called The Gospel to assist it in aiming its airstrikes, which have been widely condemned for their exceptional, outstanding, in many cases genocidal level of civilian casualties.
It's just outrageous.
Oh, 100%.
And, you know, that's exactly what's going on, it's a genocide, and, you know, the language in all of these speeches says as much.
Even more so, yeah, like you're saying, another thing I want to point out, which you might have been about to say, and I'm already jumping ahead,
is how I think it was ChatGPT that has quietly switched their terms of service, which used to say that it wouldn't be used for, like, weapons to hurt people.
Oh, yeah. They're going to use this shit for weapons for sure.
Yeah. And now, now it has,
it has been quietly scrubbed from those terms of service.
And we do need to talk about that
because there's just so many things
that we have grown accustomed to with tech
that I think is dangerous as we get into things
that have more room for error.
Because we're used to updated terms of service
on our iPhone, right?
Every time we grab an update, it's like, here's a new terms of service and you just kind of
scroll through it and you go, yeah, because you're like, yeah, you know, this is just
a phone. It's not going to be used for anything weird yet. So you're comfortable. But like
when you're doing the same thing with these ChatGPT machine learning situations where you're agreeing and you're agreeing,
okay, I will help this thing learn.
And now you are just actively helping it learn
how to be an assassin, like what happens there.
Yeah, and it's, again, this back and forth where on one hand, there is some technology, like AI-enabled robots that can go run onto a battlefield and pick up an injured soldier.
I have no desire to see some random private bleed to death in a foreign country.
Fine with that.
Or anti-missile missiles, right?
Using AI to intercept and stop a missile from blowing up in a civilian area.
Sounds fine.
Like, I don't want random people to die from missiles.
But it's also going to be used to target those missiles and to say like based on some shit we analyzed
on Twitter or whatever, we think wiping out this grid square of apartment buildings will
really get a lot of the bad guys. And yeah,
Oh yeah, based on if he's tweeting, we should blow him up.
Exactly, right?
Because he's talking crazy.
It's just, it's certainly not fair to say there won't be benefits, but it's absolutely unclear
in every field of endeavor whether or not
they will outweigh the harms, right?
And even if they do to what extent, you know?
Because a lot of what I'm saying suggests
that even if the benefits outweigh the harms
in a lot of fields, it's still not going,
because of the extent of the harms in part,
it's still not gonna be a massive sea change, right?
There were a lot of reasons for caution, but Mark has no time for doubters. In fact, to him, doubting the
benefits of AGI, artificial general intelligence, is the only true sin of his religion. Quote,
we believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.
And that's fucked up. That's really dangerous to start talking like that.
Oh yeah.
And this is the more direct cult comment here.
I want you to compare the claim Mark made above that slowing down an AI is identical to murder.
I want you to compare that to the claims the Church of Scientology makes.
Because the Church of Scientology,
they have this list of practices and the beliefs
that they call tech, right?
And they believe that by taking on tech,
by engaging with it, people can become clear
of all of their flaws.
And by doing that, you can help,
you can fix all of the problems in the world, right?
The Church of Scientology on its websites
claims that its followers will quote,
rid the planet of insanity, war, and crime,
and in its place create a civilization in which
sanity and peace exist.
How is that in any way different from Mark Andreessen
saying all of the shit that he's saying, right?
That it's going to like create this,
we're going to revolutionize medicine,
we're going to, like, end friendly fire,
we're gonna cure pandemics,
we're gonna stop car crash deaths.
What is the difference, right?
And Scientology uses that claim,
that Scientology tech is so necessary,
it's gonna fix all these problems.
So anyone who gets in the way of the church of Scientology
and the deployment of this tech for mankind's benefit
is subject to what they call fair game.
A person declared fair game quote, may be deprived of property or injured by any means by any Scientologist. And again, Mark Andreessen has not said that in his techno-optimist manifesto. In fact, he
makes some claims about how, like, no people are our enemies, right? But if you're saying
you're a murderer for slowing this down, it's not hard to see how some people might adopt a
practice like fair game eventually, right?
That's... where else does this go, is my question.
Yeah, what do we do with murderers?
I feel like the general rule across all creeds, across all beliefs, is typically murderers are bad and should die.
Yeah, at least be punished, right?
There's a punishment for murder.
Most people agree.
Yeah.
Speaking of murder, you know who has never committed a murder
You cannot say that. Like, okay. No, you're right, you're right, all of our ads. You know why? I can't prove we're involved in any murders, just like I have no evidence that Jamie Loftus had any role with... okay, okay? What? I'm just saying, one, my girl is innocent,
and two, you can't say that about us, about this,
we don't pick them.
Ify, by the way, we're spreading a rumor online
that Jamie Loftus was possibly involved
in a series of murders in Grand Rapids, Michigan.
Oh, not me.
It's a bit, it's a good bit.
Yeah, yeah, everyone loves the bit.
Everyone knows her whereabouts, so, you know, who knows?
Who knows?
My gosh, innocent.
Uh-huh.
Anyway, here's some ads.
This show is sponsored by BetterHelp.
A lot of people say, New Year, New you, you know, think about changing something,
making a new resolution for the next year.
But as we move into 2024,
it's just as important to think about
what are you already doing right?
And how do you want to expand on that?
Well, that's the sort of thing
that therapy can help you with, right?
You know, maybe you finally organized one part of your space
and you want to get more organized next year.
Maybe you're taking supplements every morning,
you're on a good workout plan,
but you wanna extend that,
be healthier in other parts of your life.
Therapy can help you find your strength
so you can ditch the extreme resolution
and make changes that stick.
So if you're thinking of starting therapy,
give BetterHelp a try.
It's entirely online,
it's designed to be convenient, flexible,
and suited for your schedule.
They'll pair you with a licensed therapist, and if that therapist doesn't work out,
they'll pair you with another. All you've got to do is fill out a brief questionnaire.
They'll match you with someone, and again, switching therapists is something you can do at any time
for no additional charge. So celebrate the progress you've already made. Visit BetterHelp.com slash behind today to get 10% off your first month.
That's BetterHelp, H-E-L-P, dot com slash behind.
I'm Nicole Perkins, and in my new podcast, The Godmother, I'm inviting you to 1930s New York.
At what would become known as the Trial of the Century, infamous mob boss Lucky
Luciano is finally taken down. But this is not Lucky's story.
I want to tell you the extraordinary story of Eunice Carter, the trailblazing black female
lawyer who put Lucky Luciano behind bars. At a time when black history is being erased,
telling Eunice's story is more urgent
than ever. She took down the country's most notorious gangster, but somehow she's been
largely forgotten.
The influence that you have while you're alive matters, even if after you're gone everyone
forgets about you.
Listen to The Godmother with me, Nicole Perkins, on the iHeart Radio app, Apple Podcasts,
or wherever you get your podcasts.
What up guys? Hola, qué tal?
It's your girl Chiquis from the Chiquis & Chill
and Dear Chiquis Podcasts.
You've been with me for season one and two,
and now I'm back with season three.
I am so excited, you guys.
Get ready for all new episodes
where I'll be dishing out honest advice
and discussing important topics like relationships,
women's health, and spirituality.
For a long time, I was afraid of falling in love.
So I had to, and this is a mantra of mine
or an affirmation every morning where I tell myself,
it is safe for me to love and to be loved.
I've heard this a lot that people think that I'm conceited, that I'm a Mamona.
And a Mamona means that you just think you're better than everyone else.
I don't know if it's because of how I act in my videos sometimes.
I'm like, I'm a baddie.
I don't know what it is, but I'm chill.
It's Chikis and Chill.
Hello.
Listen to Chikis and Chill and Dear Chikis as part of the MyCultura Podcast Network
on the iHeartRadio app,
Apple Podcast or wherever you get your podcasts.
Oh, we're back. So the more you dig into Andreessen's theology, the more it starts to seem like a form of techno-capitalist Christianity. AI is the savior. In the case of devices like the Rabbit, it might literally become our own personal Jesus. And who, you might ask, is God? Quote, we believe the market economy is a discovery machine, a form of intelligence, an exploratory, evolutionary, adaptive system. Now, through this concept of reality, capitalism itself, and capitalize the C there because it's a deity, has chosen to bring artificial general intelligence into being.
All the jobs lost, all the incoherent flotsam choking our internet, all the Amazon dropshippers
using ChatGPT to write product descriptions, these are but the market expressing its will.
Artists have to be plagiarized.
Children need to be presented with hours of procedurally generated slop and lies on YouTube
so that we one day can reach the promised land of artificial general intelligence.
Ify, isn't it worth it?
Oh my god, it so is. And, you know, one of the biggest criticisms of AI is that it's one of the effects of when creativity and commerce meet: commerce will always try and kill creativity. Because commerce is more concerned with the buck than it is the outcome, or what it takes to get said buck.
And that was gonna be a whole thing
I was gonna drop at some point
and they just said it for me.
Like, I thought it was more veiled.
I thought it was more hidden, but that is why you can try
and say that it is ethical to take from artists
because we're making it easier for you.
Your fingies hurt doing all that drawing,
but if it learns from you and you know,
don't ask how you're gonna get paid,
but if it learns from you,
we can give the people what they want
without causing you all this labor.
But who gets the money?
It's always the dork behind the computer
who did the code that is just essentially stealing
from all of these people and learning from them and then just producing this amalgamation
of everything they've done.
Yeah, yeah, absolutely.
Also I should note, I'm trying to be consistent about this.
I wrote it down and then I think slipped into it.
It's Andreessen, Mark Andreessen, Andreessen Horowitz.
It's just a weird name that I'm not used to saying.
I wrote this down and then immediately forgot
to correct myself at the start of the podcast.
Again, folks, hack and a fraud, but you know who can own,
well actually I won't say only humans can be hacks
and frauds like that, because AI absolutely mispronounces shit and gets shit wrong too.
I guess that maybe they are getting conscious.
Can they build an AI that's as much a hack and a fraud as I am?
We'll see.
So.
No.
Thank you, Sophie.
You're welcome.
I appreciate it.
AGI is treated as an inevitability
by people like Sam Altman of OpenAI, who
need it to be at least perceived as inevitable so their company can have the largest possible IPO.
Right?
There's a lot of money riding on the people with money believing all of the promises
that Andreessen is making.
This messianic fervor has also been adopted by squadrons of less influential tech executives
who simply need AI to be real because it solves a financial problem.
Venture capital funding for
Big Tech collapsed in the months before ChatGPT hit public consciousness. The reason CES was packed
with so many random AI-branded products was that sticking those two letters on a new product is treated like a talisman, right? It's this ritual to bring back the rainy season, you know?
If you throw AI in your shit, people might buy it. And, you know, there's versions of this, like laptop makers are throwing AI in everything they do now, just because, like, laptop sales soared during the start of the pandemic,
but they plummeted because people for one thing don't need to buy laptops all the fucking
time.
Most people wear them out, right?
Oh my, yes, exactly.
But, and again, this is where it comes in: if you can get people scared that they're going to fall behind without AI, then maybe they'll buy a new AI-enabled laptop, because they're like,
well, this is what I gotta do to stay competitive, you know?
The terminology that these rich tech executives
use around AI is generally more grounded
than Andreessen's prophesying, right?
But it's just as irrational.
The most unhinged thing I heard in person at CES
was from Bartley Richardson,
an AI infrastructure
manager at NVIDIA who opened a panel on deepfakes by announcing, I think everybody has a co-pilot.
Everybody's making a co-pilot.
Everybody wants a co-pilot, right?
There's going to be a Bartley co-pilot, maybe sometime in the future.
That's just a great way to accelerate us as humans, right?
And it's funny, he's named Bartley. If you know your old Star Trek and you can remember Barclay, the sad ensign, he sounds
like that guy and resembles him.
Oh man.
What's funny about that speech is it sounds like he's trying to convince himself too.
He's like...
He's like...
It's going to accelerate us, right?
We're not wasting our time, are we?
Yeah. Yeah.
Again, later in a separate panel,
Nvidia in-house counsel,
Nikki Pope, who's like the only skeptic they let on,
cited internal research showing consumer trust in brands fell
whenever they used AI.
This gels with research published last December
that found around 25% of customers trust decisions
made by AI less than those made by people.
No one on stage bothered to ask Bartley.
It was like, okay, you wanna use this thing.
We know your own company has data
that it makes companies less trustworthy.
Are you worried that if you use it, people won't trust you?
Like, is that not in your head?
And that was kind of a pattern at CES.
With some very specific exceptions, most of the benefits of AI were touted in vague terms.
It'll make your company nimble.
It'll make it more efficient.
It'll accelerate you.
Harms though, while they were discussed less often, they were discussed with a terrible
specificity that stood out next to the vagueness. One of the guys in the Deep Fake panel was
Ben Coleman and he's the CEO of Reality Defender, which is a company that detects artificially
generated media, right? Their job is to let you know if something's AI generated. And
he claims that his company expects half a trillion dollars in fraud worldwide this year,
just from voice cloning AI.
Not fraud from AI, just the fake voice AI,
half a trillion dollars.
Yeah, here's the thing too,
that I think is the scariest part of AI
that I don't think we've talked about yet,
is that everyone can use it.
You know, like, this isn't a thing where it's like, oh, these companies, it's too expensive, people are priced out, and we just got to hope that everyone's good.
You're getting like SpongeBob rapping Kendrick lyrics on TikTok, you know, my buddy, Binium,
jokingly like promoted his show and made it look like he was having a FaceTime with Obama.
And it was pretty good.
The only reason you knew it was fake
was because of just the nature of the video.
But at what point is someone going to stop and go,
hey, we shouldn't have technology
where someone can impersonate world leaders?
Yeah.
To be honest, that's not even, just because I think everyone is ready for the idea
that people are faking Obama,
or because we've done little versions of that
for years now, I think the scariest thing
is people aren't ready for their loved ones
to be imitated by AI.
And that is the thing that is happening.
In 2023, and this has happened to a lot of people, there was a specific case that kind of went viral of this mother who got a call
from what sounded like her kidnapped daughter. And like the AI generated the voice of her daughter.
And then a guy was like, give us money or we'll fucking murder her. Right. And her kid was never
kidnapped. She very nearly sent the money, because who wouldn't, right? Like, if you don't know that that's a thing AI can do, who would not?
And the AI was able to clone her daughter's voice
because her daughter has a TikTok, right?
It doesn't take that much, you know?
And this is why, by the way,
that people are talking about ways to mitigate this.
I think one of them is like have a family password
or something where it's like, all right, if I'm fucking kidnapped,
I'm gonna say the password, you know?
So that some random person with your TikTok won't know it,
or at least has to try harder to guess it.
So great, thanks to AI,
now we have to have passwords for our families in real life.
Cool!
Yeah, if you wanna see that Thanksgiving,
you gotta know the password.
Yeah, fucking great.
Yeah. At CES, and in the Substacks and newsletters of all these AI cultists, there's no time
to dwell on problems like these.
Full steam ahead is the only serious suggestion they make.
You should all be excited, Google's VP of Engineering Behshad Behzadi tells us during a panel discussion with a McDonald's executive. If you're not using AI, Behshad warned, you're missing out. And I heard variations of
the same sentiment over and over again, right? Not just this stuff is great, but like you're
kind of doomed if you don't use it. And I will give it to Nikki Pope: she was really the only skeptic who had a speaking role at CES. It is not coincidental that she was an academic and a black woman,
because her background is studying algorithmic bias in the justice system.
And so she had some really good points about like the actual dangers this stuff had.
The panel she was on was Governing AI Risk, and her partner on the panel, the guy she was talking with, was Adobe VP Alexandru Costin.
And she urged the audience,
I want you to think about the direct harm algorithmic bias
could do to marginalized communities.
Quote, if we create AI that disparately treats one group
tremendously in favor of another group,
the group that is disadvantaged or disenfranchised,
that's an existential threat to that group.
And she was specifically like,
people talk about the existential threat of an AI going crazy
and killing us all, but like,
that's not as realistic as what we know will happen.
Yes.
Oh yeah, when you have these banks that like,
automatically, you know, are using AI to try and approve,
you know, loans and Deontay sends his name in,
and they're like, not approved.
Yeah, exactly.
He's like, nah, I'm good.
And I am glad she was there.
She, again, she still, you know, works for a company that's going to make money off of this.
She's not like a doomer on it, but like at least one person was being like,
could we please acknowledge there are dangers?
Yes, I know, because here's the thing, and I truly, truly believe this, and you were basically saying this earlier: AI as a tool is fine.
You know? Yeah.
When it is a tool, something that is always held and used by a human, there's the checks and balances. It is only as evil as the person who's using it, and that is just the rule any item, physical or digital, will always be under. But the moment you're like, I'm going to give you free rein based on
information. And how many times has an article gone online that was like, it scanned Reddit, or it scanned Twitter, and it's racist now. You know what I'm saying? And we still went full steam ahead with producing this
and thinking we're right.
And we know, especially when you see a lot of these tech
leaders being predominantly white men,
and we know that in general, most white men
don't care about protecting marginalized people.
They care about getting their bottom dollar.
They don't see it as a rare occurrence
because they don't live that existence.
They don't have the data, pun intended,
to build something to defend against it
because it's not a real problem to them
because they don't see it.
And that is beyond just them being them
and more into as humans a lot of times,
if you don't do
the work to see it and understand what happens to other people outside of your perspective,
you're just going to believe that it's not real or people are exaggerating or it's this
and it's that.
And when you are gung-ho and you have drunk the Kool-Aid, that is the AI Kool-Aid, and
you are telling people that this is the future,
we have to do it, you're gonna push ahead.
But, like, we've literally seen a clear-cut example
of what happens when you push past safety
and you just do what you wanna do
just because you have a whole bunch of money
and a Mad Catz controller.
Like it gets dangerous.
So, as you started talking about, like, the dangers of certain tools, right? And the value of those tools.
I literally looked over at the gun on my table, right?
And we all agree, even people who really like them,
there should be regulation.
And the vast majority agree on more regulation. And again, not to say that it's sufficient, but there are a lot of laws about, like, where you can carry a gun legally, how you can buy a gun, right?
And because people understand that like yeah, if a tool is that powerful
There should be limitations, and things that you can do that get them taken away from you forever, right?
Yes.
I don't know how we do that with AI, but I don't think that's a reason not to try.
Right?
Yes, yes.
You gotta do what they did in that movie Hackers.
And it's like you're just banned from the internet till you graduate high school or whatever
it was.
Yeah, yeah.
Now, so again, Nikki Pope is like, yeah, I think this stuff could really hurt marginalized communities. And Alexandru Costin from Adobe responded that, like, well, I agree, but the biggest risk of generative AI isn't fraud or plagiarism, it's not using AI, right?
He claims like, this is as big as the internet
and we all just have to get on board.
And then I'm gonna read verbatim how he ends this particular statement: I think humanity will find a way to tame it to our best interest. Hopefully.
Awesome. Awesome.
And the whole week was like that. Again, these really specific, devastating
harms and then vague claims of like, yeah, we're all just going to have to do it. You
know, I brought up Scientology earlier, but when I think about touting vague claims of world-saving benefits alongside "it's going to hurt too, you have to accept the pain," I think of Keith Raniere, right? The NXIVM guy, we all remember Keith. You know, like most cult leaders, Raniere promised his mostly female followers,
you'll get all these benefits.
I'm gonna like heal you, you'll be extra productive,
you'll be super good in your business,
super good in your career,
but you have to follow my commands
because I have to retrain you on some stuff.
And so it's gonna be uncomfortable, right?
And the end result of this is a bunch of them had their flesh branded and partook in sex trafficking.
Yeah.
These tech executives are not Raniere, but I think they see money in aping some of his tactics.
Right? The benefits are so good, we just have to accept some pain.
You know, I got to hurt you to rebuild you better.
Now, all of the free money right now is going to AI.
And these guys know the best way to chase it is to throw logic to the wind and promise the masses that if we just let this technology run roughshod
over every field of human endeavor, it will be worth it in the end. This is rational for them
because they're going to make a lot of money, but it is an irrational thing for us to let them do.
Why would we want to put artists and illustrators who we like out of a job? Why would we accept a world where it is impossible to talk to a human when you have a problem and you're instead thrown to a churning swarm of chatbots? Why would we let Sam Altman hoover up the world's knowledge and resell it back to us?
We wouldn't, and we won't, unless he can convince us that doing so is the only way to solve the problems that scare us. Climate change, the cure for cancer, an end to war, or at least an end to the fear that
we will all be victimized by crime or terrorism.
All of these have been touted as benefits of the coming AI age if we can just reach the
AI promised land.
And we're going to talk about some of the people who believe in that promised land and
what they think it'll be like.
But first, Ify, you know what is the real promised land,
the only actual paradise any of us will ever know.
What?
Buying from the sponsors of this show.
Of course.
It was good. I was ready.
All right, here's an ad.
I'm Nicole Perkins and in my new podcast, The Godmother, I'm inviting you to 1930s
New York.
At what would become known as the Trial of the Century, infamous mob boss Lucky Luciano
is finally taken down.
But this is not Lucky's story.
I want to tell you the extraordinary story of Eunice Carter, the trailblazing black female
lawyer who put Lucky Luciano behind bars.
At a time when black history is being erased, telling Eunice's story is more urgent than
ever.
She took down the country's most notorious gangster, but somehow she's been largely
forgotten. The influence that you have while you're alive matters,
even if after you're gone everyone forgets about you.
Listen to the godmother with me, Nicole Perkins,
on the iHeartRadio App, Apple Podcasts,
or wherever you get your podcasts.
What up guys, hola qué tal?
It's your girl Chiquis from the Chiquis and Chill and Dear
Chiquis Podcasts. You've been with me for season one and two and now I'm back with season
three. I am so excited you guys. Get ready for all new episodes where I'll be dishing
out honest advice and discussing important topics like relationships, women's health
and spirituality. For a long time I was afraid of falling in love.
So I had to, and this is a mantra of mine
or an affirmation every morning where I tell myself,
it is safe for me to love and to be loved.
I've heard this a lot that people think that I'm conceited,
that I'm a mamona.
And a mamona means that you just think
you're better than everyone else.
I don't know if it's because of how I act in my videos
sometimes, I'm like, I'm a baddie, I don't know what it is, but I'm chill. It's Chikis and Chill. Hello.
Listen to Chikis and Chill and Dear Chikis as part of the MyCultura podcast network on the
iHeartRadio app, Apple podcast, or wherever you get your podcasts. On a winter morning in 2015,
an electrician is murdered in a quiet Amsterdam suburb.
It looks like an assassination, but there's no motive.
The killing, it leads investigators into a web of drugs, money laundering,
and state-sponsored assassinations that stretches from Dublin to Dubai.
At the centre is a cocaine super cartel and a hidden economic war between democracies and
dictatorships.
I'm Miles Johnson and I'm an investigative reporter for the Financial Times.
And I've always been interested in the way that crime and business meet, and they always
meet because crime is a business.
In hot money, the new narcos from the Financial Times and Pushkin Industries were investigating
a new breed of drug kingpin who's thriving in a global order that seems to be breaking apart.
Listen to Hot Money, the new narcos on the iHeart Radio app, Apple podcasts, or wherever
you get your podcasts.
All right, we're back.
So I want to talk about Silicon Valley's latest subculture, emphasis on the cult. Effective accelerationism, or E slash ACC,
EAC, I think is probably how you could pronounce it.
The gist of this movement fits with Mark Andreessen's manifesto.
AI development must be accelerated without restriction
no matter the cost.
EAC has been covered by a number of journalists,
but most of that coverage misses how very spiritual
some of it seems.
One of the inaugural documents of the entire belief system opens with the statement,
accelerationism is simply the self-awareness of capitalism, which has scarcely begun.
Again, we see a statement that AI has somehow enmeshed itself with capitalism's ability to
understand itself. It is in some way intelligent and can know itself. And I don't know how else you
interpret this, but as belief in a God built by atheists
who like money a lot.
The argument continues that nothing matters more
than extending the quote,
light of consciousness into the stars,
a belief Elon Musk himself has championed.
AI is the force the market will use to do this,
and quote, this force cannot be stopped.
This is followed by wild claims
that next-generation life forms will be created inevitably
by AI.
And then a few sentences down, you get the kicker.
Those who are first to usher in and control the hyperparameters of AI slash techno capital
have immense agency over the future of consciousness.
So AI is not just a god, it's a god we can build and we can use it to shape the future
of our reality to our own whims
And again, some of these guys will acknowledge maybe it'll kill all of us
But as long as it makes a technology that spreads to the stars that's worth it because we've kept the light of consciousness alive
Wow. That's not, I don't think, the mainstream view, but you can definitely find people saying that.
And they'll be like if you attempt to slow this process down, there are risks.
And they're saying the same thing Andreessen is.
You stop it from doing all these wonderful things.
But also, I do kind of view that as a veiled threat, right?
Because if AI is the only way to spread the light of consciousness to the void, and that
is the only thing that matters, what do you do to the people who seek to stop you?
Right?
Who seek to stop AI?
I actually am fine with extending the light of consciousness into space.
I'm a big fan of Star Trek.
I just don't believe that the intelligent, hyper-aware capitalism is the thing to do
it.
Yeah.
Again, too much of a Star Trek guy for that.
When I look at the people who want to follow Marc Andreessen's vision, who find what these
e/acc people are saying not just compelling but inspiring, I think of another passage from that New Yorker article by Zoe
Heller.
Quote,
Not passive victims, they themselves sought to be controlled.
Haruki Murakami wrote of the members of Aum Shinrikyo, the cult whose sarin gas attack
on the Tokyo subway in 1995 killed 13 people.
In his book Underground, Murakami describes most Aum members as having deposited all their
precious personal holdings of selfhood in the spiritual bank of the cult's leader,
Shoko Asahara. Submitting to a higher authority, to someone else's account of reality, was,
he claims, their aim.
Now, the e/acc manifesto newsletter thing used the term technocapital in conjunction with
AI.
This is a word you can find a few different definitions of,
because it's a wonky, weird academic philosophy term, and there are a number of folks who will argue
about how it ought to be described. But this is broadly kind of the same thing Andreessen
is referring to when he talks about the market as this intelligent discovery organism. And while
there are a few different ways you'll see this defined, the e/acc people
and Andreessen himself are thinking about
how the philosopher Nick Land, who's the guy
generally credited with, like, popularizing
the term technocapital, defines it.
Land is one of many advocates of the idea
of a technological singularity.
The point where technological growth
driven by improvements in computing becomes irreversible.
The moment at which a super-intelligent machine begins inventing more and more of itself and
expanding tech in a way that humans can't.
As one of Land's fans summarized in a Medium post, quote:
A runaway reaction of self-improvement loops will almost instantaneously create a coherent,
superintelligent machine.
It is man's last invention.
The most notable of industries, AI, nanotechnology, femtotechnology, and genetic engineering,
will erupt with rapid advancements, quickly exceeding human intelligence.
Now, obviously, the way Land writes is, again, kind of worth reading, but dense, perhaps
too dense for an entertainment podcast. So I'm going to read again from a Substack called Regress Studies, by a writer named Santi Ruiz, kind of talking about this idea of technocapital
that Land has from a more critical standpoint. Quote: Nick Land, who coined the term, is a
misanthrope. He doesn't like humans much. So the idea that there could be an entity coming,
already being born, drawing itself into existence by hyperstitionally preying on the dreams of humanity, cannibalizing their desires,
wearing the invisible hand as an ideological skin.
He's into that.
Technoeconomic interactivity crumbles social order in auto-sophisticating machine runaway,
as he would put it.
And that's good.
You're being colonized by an entity that doesn't care about you, except insofar as
you make a good host.
We'll talk about hyperstition in a little bit here.
So Land is the guru of accelerationism.
You might not be surprised to learn that he has a devoted following among the far right.
This is because he is quite racist, anti-democratic, and obsessed with eugenics.
Now his eugenics are not your grandpappy's eugenics.
For him, it involves gene editing, which will be available to greater extents
than ever thanks to AI.
Land claims to dislike white nationalists
and conventional racists
because they don't see the whole picture.
Quote, and this is me quoting
from one of Land's publications.
Racial identitarianism envisages a conservation of comparative genetic isolation,
generally determined by boundaries corresponding
to conspicuous phenotypic variation.
It is race-realist, in that it admits to seeing what everyone does in fact see, which is to
say consistent patterns of striking correlated multi-dimensional variety between human populations
or subspecies.
Its unrealism lies in its projections.
That's pretty racist.
Land is listed by name in Andreessen's manifesto as someone you should read for a better understanding
of the wonderful optimistic future he and his ilk have planned for us.
He cites extensively Gregory Cochran, who posits that space travel, spreading to the
stars, will solve our race problem because it's a natural filter.
Basically saying some races won't make it into space.
So we don't need to be violent.
Like, we just have to spread to space,
and that will do the eugenics part of it for us.
So that's cool.
I'm stuck on this journey to the stars
through evolution thing because I don't know, Robert, do you play Warhammer 40K?
Oh, oh, God, iffy, iffy.
Of course I play.
I've been playing Warhammer 40K most of my life.
Okay, because this is all very Adeptus Mechanicus.
Yeah, Mechanicus is right, yes, absolutely.
And I'm like, what is going on here?
I'm deep in Rogue Trader, and now I'm like, no, this is them.
What is fun is that, like, Warhammer 40,000,
the deep fluff, envisions a society
that's like a hybrid between the Federation
in Star Trek and what these AI e/acc people dream of.
That is this utopia for like 10,000 years
because they develop thinking machines.
And then all of the thinking machines turn on them
and murder everybody.
And so in the future, we just lobotomize people
who commit crimes and turn them into computers for us
because we can't have intelligent machines anymore.
Yes, so we're just, we're going,
that's what it looks like we're marching towards
with these folks.
Yeah, and obviously the thing that the Warhammer people
are inspired by is, like, the Butlerian Jihad in Dune,
which is the more artful version of that story
with fewer orks, which makes it inferior in my mind,
but I do love Dune.
So.
You need more red because the red makes everything fast.
Red makes it go faster.
Yes.
So Land concludes by imagining both racists
and anti-racists binding together
in defense of the concept of race, right? That's what the result of AI is going to be.
The racists need race, you know, we're going to get so good at gene editing. The racists
will get angry and the anti-racists will get angry because they're so in love with the
concept of race. Well, we're just going to improve people, annihilate racial differences
through moving to the stars
and the natural filter that that implies.
By the way, the name of the article
Land wrote all this in is Hyper-Racism.
So, cool guy. Glad Marc Andreessen cites him in his manifesto.
Glad the biggest venture capital guy in the country
is like, yeah, read this dude.
Well, that's the sequel to Flume's Hyperreal.
So that's, yeah.
It is interesting.
And these guys don't tend to cite it as much.
I think if you get into some of the deeper stuff,
they're all talking about these technocapitalist concepts
that Nick Land plays with.
They don't talk about what I think is actually
one of his most sort of insightful points,
which is about a concept called hyperstition.
And in brief, hyperstition is, like,
creating things in fiction that become real, and the process
by which that happens.
I think about that a lot when I think about things like the Butlerian Jihad, the war
against the intelligent machines in Dune, or what happened in the Warhammer 40,000 universe.
But I also think about how part of why these people are targeting creators, writers, actors,
musicians, artists, like pen-and-paper, you know, painting artists and stuff,
is because the only way out of this future they have envisioned is in imagining a better one
and then making it real, right? Like, that is a thing that creatives have a role in
doing. So if you can kill that ability, hand it over to the machines that you control, maybe you can stop them from this path of resistance.
Motherfuckers.
Yeah, they're up there like, you want something better? We're gonna take it away from you.
Yeah, fuck you.
So anyway, I think that's gonna end it for us in part one. You know, this whole investigation,
in much more condensed form,
just kind of really focusing specifically on the argument that there's cult dynamics
to the fandom, is being published in an article in Rolling Stone. I'll probably edit in, like,
the title or something here so you can find it. But check that out. That's kind of the
more easily shareable version, more condensed. Ify, where should people check out you and
your stuff?
Oh, man, I'm Ify Nwadiwe on Twitter and Instagram.
You know, so definitely peep me there.
Listen to our relationship pod with Ify and Emmy,
if you want to hear us talk about relationship stuff.
And yeah, you have Maximum Film for movies.
But if you go to Ify Nwadiwe, you'll find all that stuff.
And of course, watch Dropout.
Something cool is about to be announced next week.
Absolutely. Absolutely.
Yes, watch Dropout.
Something cool is coming soon, and it's also an extremely human endeavor.
Just like this show is.
So go with whatever god you worship or the machine God you plan to meme into being.
Goodbye.
Behind the Bastards is a production of Cool Zone Media.
For more from Cool Zone Media,
visit our website, coolzonemedia.com,
or check us out on the iHeart Radio app,
Apple Podcasts, or wherever you get your podcasts.
History is beautiful, brutal, and often ridiculous.
Join me, Ben Bowlin, and me, Noel Brown, as we dive into some of the strangest stories from across the span of human civilization.
In Ridiculous History.
Join us to hear the many ways history can be oh so ridiculous.
Listen to Ridiculous History on the iHeartRadio app, Apple Podcasts, or wherever you
find your favorite shows.