It Could Happen Here - The AI Con
Episode Date: June 15, 2023. Robert sits down with Garrison and James to talk about the hype surrounding AI, and how popular AI news stories are nearly all lies. See omnystudio.com/listener for privacy information.
Transcript
Hey guys, I'm Kate Max. You might know me from my popular online series, The Running Interview Show,
where I run with celebrities, athletes, entrepreneurs, and more.
After those runs, the conversations keep going.
That's what my podcast, Post Run High, is all about.
It's a chance to sit down with my guests and dive even deeper into their stories,
their journeys, and the thoughts that
arise once we've hit the pavement together. Listen to Post Run High on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts. You should probably keep your lights on for
Nocturnal Tales from the Shadows. Join me, Danny Trejo, and step into the flames of fright.
An anthology podcast of modern day horror stories inspired by the most terrifying legends and lore
of Latin America. Listen to Nocturnal on the iHeartRadio app, Apple Podcasts,
or wherever you get your podcasts. and culture in the new iHeart podcast, Sniffy's Cruising Confessions. Sniffy's Cruising Confessions will broaden minds and help you pursue your true goals.
You can listen to Sniffy's Cruising Confessions, sponsored by Gilead,
now on the iHeartRadio app or wherever you get your podcasts. New episodes every Thursday.
Ah, welcome back to It Could Happen Here, a podcast. It's a podcast. I'm Robert Evans, and with me today are Garrison Davis and James Stout. Hello. A Canadian, a Britishman, and a Texan
walk into a podcast. Yeah, walk into a podcast. This is the least toxic possible.
Only two of them can drink in a bar.
That's not true.
In Canada, we can all drink in a bar.
Now, Garrison, a moment ago,
you were holding your hand above a lit candle
in a way that reminded me of G. Gordon Liddy,
the Nazi who masterminded the Watergate break-in,
and in order to convince people that he was a hard man,
would regularly burn the palm of his hand on a candle while staring at them.
A cool guy.
Isn't gender great?
Haven't we really nailed it?
G. Gordon Liddy.
You don't know enough about it.
We'll talk about G. Gordon Liddy.
But today we're talking about something else problematic.
Artificial intelligence.
Which is
not a thing that exists anywhere.
It is instead a
terrible, terrible error
going back to like the 60s
in terms of terminology.
When we talk about all of the things that people are flipping out over as AI: ChatGPT and Stable Diffusion and fucking, um, all these other sort of, like, different programs.
They're not intelligences.
They're, you know... ChatGPT is, like, a large language model: a bot that you train to predict what the likeliest appropriate response is to a given prompt. That's kind of, like, the broadest way to explain it. It's complicated, and they're, you know, very useful. But obviously, if you've
been paying attention to the world right now, there's just a whole bunch of bullshit about them. And I think to kind of make sense of why we're seeing some of the shit around AI that we're
seeing and for a little bit of specificity, there have been like this kind of endless
series of articles around this open letter signed by a bunch of luminaries in the AI
field talking about how, you know, there need to be laws put in place to stop it from ending the world.
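As an aside: Robert's description a moment ago, a model trained to produce the likeliest response to a prompt, can be illustrated with a deliberately tiny sketch. This is a bigram word counter, not how a real LLM works (those are neural networks over subword tokens), and everything in it is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration of the "likeliest next thing" idea behind large language
# models: count which word follows which in a training corpus, then always
# emit the most frequent continuation. Real LLMs are vastly more complex,
# but the core objective, predicting the next token, is the same.
corpus = (
    "the model predicts the next word "
    "the model predicts the likeliest word"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=4):
    """Greedily extend a prompt with the most common continuation."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # "the model predicts the model"
```

The point of the toy is that there is no understanding anywhere in it, just frequency statistics, which is the sense in which these systems are not "intelligences."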
You know, you've seen articles about like, oh, X percentage of AI researchers think that
it could destroy the planet and destroy the human race.
Kind of most recently, the biggest viral hype article was that the Pentagon had supposedly been testing an AI, like, missile
system that blew up its operator in a simulation because the operator was trying to stop it from
firing or whatever. It was bullshit. Like, what was that? What actually happened? Like,
Vice ran with the article. It was very breathless.
Shocked that Vice would do this.
Flipping out about how horrifying our AI weapons future is.
And like, yeah, we shouldn't give AI the ability to kill people.
But that's not at all what happened.
Basically, a bunch of army nerds or air force nerds were sitting around a table doing the
D&D version of military planning, where you say, what if we did this?
What kinds of things could happen if we did this system?
And another guy around the table said,
oh, well, if we build the system this way,
it might conceivably attack its operator,
you know, in order to optimize for this kind of result,
which is like not scary.
Like, it's just people talking through, like, a flow chart of possibilities around a fucking table.
You don't need to worry about that.
There's so many other things to worry about.
New York City is blanketed in a layer of smog so thick you could cut it with
a butter knife. Don't flip out about AI weapons just yet, folks. But I wanted to kind of talk
about why this shit is happening. And a lot of it comes down to the fact that when we're talking
about the aspects of the tech industry that have an impact outside of the tech industry, there's basically three jobs in big tech.
One job is creating iterative improvements on existing products.
These would be the teams of folks who are responsible for designing a new iPhone every year.
Every couple of years, Lenovo puts out a new series of ThinkPads and IdeaPads.
Every couple of years, you get a new MacBook.
Every couple of years, Razer puts out a new Blade.
These are the folks who kind of move along technology at a relatively steady pace for consumer devices.
And then you have the people who are responsible for kind of what you might call the moonshot products.
This is a mix of the next big thing and doomed failures.
And it's often pretty hard to tell what's going to be what ahead of time.
A very good example would be back in the 90s, Apple put a bunch of resources into launching an early tablet computer called the Newton that was a fabulous disaster.
And then in the mid-aughts, they put a bunch of resources into launching the iPad,
which was a huge success.
And when you kind of think about like the folks doing this,
like working on the moonshot products,
the most recent example would be whatever team at Apple,
the team at Apple that was behind putting together
these new Apple goggles,
which I don't think are going to be
a wildly successful product
in the way that they need it to be, like a smartphone scale success.
But this is an example of like a thing that didn't exist and a bunch of people had to
invent new technologies or new ways to combine technologies in order to make it exist.
The third kind of job that the tech industry has, broadly speaking, is con men, right?
And the state that we are in in the industry right now is that every major tech company is run by some form of con man, right?
Tim Cook is, you know, kind of the least conniest of the con men among them.
But like Mark Zuckerberg, obviously, is a fucking flim flam artist, you know, and you
can see this with the huge amount of
money, it's something like $11 billion at least, that Facebook pumped into this bullshit metaverse scheme, which Apple barely even talked about during their event unveiling a headset that has VR potential in it. I'm getting away from myself here. Kind of the point
that I'm making is that you can often
have very real products. There's actual technology going into the Apple glasses marketed by con men,
flim-flam artists. This is not always like a bad thing, right? Steve Jobs was a con man,
and it worked out pretty well for him because it just so happened that the tech, he had a decent
enough idea of what the tech was capable of, that it was able to kind of meet the promises he was making in more or less real time.
An example of what happens, pretty spectacularly, when that's not the case is what we saw with Theranos and Elizabeth Holmes, who started her prison sentence last week.
You've got these promises being made by the con man and the people who are responsible for the moonshots can't make it work.
I'm bringing this up right now because there's a lot of folks, I think, who believe that
the actual potential of AI has been proven in a spectacular way because the tools that
have been released are able to do cool things. And I think those people are missing some key things that might cause one to think more critically about the actual potential the
industry has and also might cause one to think more critically about how earth shattering it's
all going to be. It's being taken kind of as read right now, by a lot of journalists and media analysts in particular, outside of the tech industry, or, you know, outside of the dogged tech press, that, like,
well, this is going to upend huge numbers of industries and put massive numbers of people
out of work.
And, you know, if you sat down in front of this chatbot and had, like, a mind-blowing experience, that may seem credible.
There's not the evidence
behind that yet. If you actually look at the numbers behind some of these different companies
and like how their usership has grown and how it's fallen off, one of the things you've seen
is that a lot of these tools had this kind of massive surge peak in terms of the number of
people adopting them and in terms of their profitability. You saw this with like stable diffusion, right? And then this kind of fairly rapid fall afterwards,
not because people are like giving it up forever or whatever, but because like once you fucked
around with it and generated some images or generated some stories, there's not a huge
amount to do unless you're someone who's specifically going to be using this for your job.
And most of the people that wanted to fuck around with a lot of these apps didn't have long-term use cases for them.
This is why, for example, Stability, which is the company, or at least the main company, behind Stable Diffusion, has been valued at like $4 billion, I think, last it was checked. But their annualized revenue is only about $10 million. So that's a pretty significant gap. And it's a pretty significant
gap because the actual money in AI so far isn't with the service providers, really. Like you've
got some that have made in like the $100 million range, although it's not entirely clear what their margins are or what the long-term reliability of that profit is.
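For what it's worth, the gap Robert describes is stark even as back-of-the-envelope arithmetic, using the figures as quoted in the episode (roughly a $4 billion valuation against roughly $10 million in annualized revenue):

```python
# Figures as quoted in the episode (approximate, for illustration only).
valuation = 4_000_000_000      # Stability's reported valuation, ~$4B
annual_revenue = 10_000_000    # reported annualized revenue, ~$10M

# Price-to-revenue multiple: how many years of current revenue the
# valuation implies, before accounting for any costs at all.
multiple = valuation / annual_revenue
print(f"{multiple:.0f}x revenue")  # 400x revenue
```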
But the vast majority of money in AI, almost all of it, has been made by companies like
NVIDIA.
NVIDIA jumped up to become a trillion-dollar company as a result of this because the hardware
needs of these products are so intense.
And obviously, that shows there's money here for somebody.
But the fact that a shitload of people got curious about these apps, used them in quick succession, and then kind of dropped off isn't evidence that we're seeing entire industries replaced, so much as it is evidence that, like, a lot of people thought this was interesting briefly.
And so I think, kind of, when you look at the data, one of the things it suggests is that we're heading towards a point in AI, and I think we're probably going to hit it within the next six months to a year, that is broadly referred to as the trough of disillusionment.
And this is what happens when kind of the promises of a new technology that are being made by the hype men or con men, as I tend to call them, meet with like the actual reality
of its execution, which in some areas is going to be significant.
There are places, and I think medical research may be one of them (we'll talk about that in a bit), where a lot of the promises people are making about AI will be fairly quickly realized.
And then there are areas where it won't be.
I think content generation is one of those things.
But yeah, so that's kind of like what I'm seeing when I'm looking at the broad strokes
of where this technology is here and kind of the gap between how people are talking
about it and what we're actually seeing in terms of monetization.
Hi, I'm Ed Zitron, host of the Better Offline podcast.
And we're kicking off our second season digging into how the tech elite has turned Silicon Valley into a playground for billionaires.
From the chaotic world of generative AI to the destruction of Google Search, Better Offline is your unvarnished and at times unhinged look at the underbelly of tech from an industry veteran with nothing to lose. This season I'm going to
be joined by everyone from Nobel winning economists to leading journalists in the field and I'll be
digging into why the products you love keep getting worse and naming and shaming those
responsible. Don't get me wrong, though.
I love technology.
I just hate the people in charge
and want them to get back to building things
that actually do things to help real people.
I swear to God things can change if we're loud enough.
So join me every week to understand
what's happening in the tech industry
and what could be done to make things better.
Listen to Better Offline on the iHeartRadio app,
Apple Podcasts,
or wherever else you get your podcasts.
Check out betteroffline.com.
I want to talk a little bit now about kind of one of the guys,
I would call him kind of a con man,
who's been a big driver of the current AI push.
He's a dude named Emad Mostaque, and he's the founder of Stability AI, the company behind Stable Diffusion, right, which is a text-to-image generator. Before ChatGPT hit, this was, like, the first really, really big mainstream AI thing.
ChatGPT was a lot larger, but Stable Diffusion came first and was critical behind, among other
things, a lot of the silliest NFT bullshit.
And he's a really interesting dude.
If you look at his own claims about his background, he says that he's got an Oxford master's degree, that he was behind an award-winning hedge fund, that he, like, worked for the United Nations in a really important capacity, and also that he, obviously, founded this AI bot. None of that's true. He has a bachelor's degree from Oxford, not a master's degree. He did...
Well, that's... what he's playing off is a thing that happens there, where, like, if you have a BA (Oxon), you can get it to be an MA. It doesn't mean you did a master's. It's just, uh, the wealthy people flex.
Yeah, it's not a master's degree. You shouldn't call it that. If you're calling it that, you're taking the piss.
Yeah, yeah. He's taking the piss, knowing no one's going to call him on it, or at least knowing that people wouldn't, like, at large, loudly enough for it to matter for him. Yeah. He hasn't worked with the UN in quite some time and never did in a major capacity.
He did run a hedge fund that was successful in its first year, but then got shut down in its second year because he lost everybody's money.
But you see with this guy, if you go through his, like, history: he's chasing hedge funds in the early aughts. He first gets in with Stable Diffusion after COVID, and he's kind of building it as, this is going to help with research into trying to, you know, fight the COVID-19 pandemic. And then he kind of pivots to, like, oh, this is a great way to make NFTs and shit, you know, when that hit. He's just sort of chasing where the money is, yeah, any way he kind of can. And, by the way, he's not the guy who wrote any of the source code for this. That was done by, like, a group of researchers, and he essentially, like, acquired it, which is usually what happens here. Now, none of this has
stopped him from getting a hundred million dollars in investments from various venture partners. It hasn't stopped his company from getting this massive valuation.
It hasn't stopped the White House from inviting him to talk as part of a federal AI safety
initiative. But it is one of those things: when I look into this guy, and kind of the gap between his claims and what's actually happened, and the claims that are being made about the value of his company and what it's actually, like, proved to be worth so far,
I think a lot about Sam Bankman-Fried, because a lot of, like, the early writing around this guy was similar, and a lot of the kind of shit that he's claiming is similar.
And yeah, I'm not sure if this is a case where... because Bankman-Fried is one of these people who, like Elizabeth Holmes, I think backed the wrong technology. Because it's fine in Silicon Valley, it's fine, generally speaking, in capitalism, to lie about what a product can do if you can fake it till you make it. And maybe AI is there. This guy may have made a good bet
as to the future,
but that's kind of far from certain yet.
And it's just really clear
how much of this industry is being built on
or is being built by,
how much of the people running
sort of these AI companies
are dudes who managed one way or another,
either through access to VC funding
or kind of like, you know, just being in the right place at the right time to jump in on the bandwagon
in the hopes that they'll be able to cash out very, very quickly. I found a good quote from
a Forbes article talking about, like, a big part of why guys like Mostaque are so interested in AI right now from a financial perspective.
And this is true not just... this was true about, like, crypto before, but with AI, because there's more to the technology, this is kind of even more valid. Quote: venture capitalists historically
spend months performing due diligence, a process that involves analyzing the market, vetting the founder, and speaking to customers to check for red flags
before investing in a startup. But start to finish, Mostaque told Forbes, he needed just six days to secure $100 million from leading investment firms Coatue and Lightspeed once Stable Diffusion went
viral. The extent of due diligence that the firms performed is unclear given the speed of the
investment. The investment thesis we had is that we don't know exactly what all the use cases will be,
but we know that this technology is truly transformative and has reached a tipping point
in terms of what it can do.
Gaurav Gupta, the Lightspeed partner who led the investment, told Forbes in a January interview.
So again, they're being like, yeah, we're pumping tens of millions of dollars into this.
We don't know how it'll make money.
It just seems so impressive that it has
to be profitable. Now, that line is particularly funny, maybe the wrong word, when compared
alongside this paragraph from later in the article. In an open letter last September,
Democratic Representative Anna Eshoo urged action in Washington against the open source nature of
stable diffusion. The model, she wrote, had been used to generate images of violently beaten Asian women and
pornography, some of which portrays real people.
Mostaque said new versions of Stable Diffusion filter data for potentially unsafe content,
helping to prevent users from generating harmful images in the first place.
So it's like, part of what's happening here is you've got this thing that seems really impressive, and that is, to some extent, because it's able to remix stuff that exists in a way that hasn't been done automatically before.
But all of these kind of valuations are based, number one, on ignoring the problems with monetizing this stuff, including the still very much unsorted nature of how copyright is going to affect this.
And also the question of, is this really worth that much money?
Is this actually – is being able to generate kind of weird, slightly off-putting AI images a huge business?
Like how much of,
because like from where I'm seeing it,
one of two things is possible.
Number one, this replaces all art everywhere.
And so there's a shitload of money in it.
Or number two, this remains a way
that like low quality websites
and like Amazon dropship scammers
who are like putting up fake books
on Kindle and whatnot
to trick people using keywords,
like this is just like a way to fill that shit out.
Like I don't see a whole lot of room in the middle there.
You know, maybe I'm being like overly pessimistic there,
but that's where I'm sitting.
I mean, some of the models we've seen used is selling, like, subscription packs for, like, access to these tools, and access to use them for, like, commercial reasons.
The other thing we could see is just, like, corporations selling to other corporations, like having Disney and Warner Brothers be able to use this to generate concept art. And now they don't need to pay concept artists, and instead they just have pretty, like, nicely curated tools
for them to generate this type of AI image.
Those are kind of two of the biggest use cases
that at least I'm seeing right now
from slightly more on the creative filmmaking art side of things.
Because, I mean, I don't think it's going to replace all art. I think nobody is actually thinking it's just going to replace all art, just like photography did not replace all art.
It changes the paradigm.
And this tool does seem specifically useful for the way that we're seeing corporations make the same movie every five years.
It's all built on all of the same stuff.
And I think that's how a lot of it's going to get used.
It's going to be a lot of weird scam artists,
people just messing around for fun,
and then people not paying illustrators as much.
Yeah, and I think that's kind of...
I see this being adopted widely,
but that's not the same as it, like, being a huge success.
Like, right now I'm looking at an article that's estimating the current value of AI in the U.S. at $100 billion, and that by 2030 it'll be worth $2 trillion U.S.
And it's like, I don't know, man, like, is replacing-
I mean, AI is more than just Midjourney image creation, right? There is OpenAI and ChatGPT. AI is in everything we use now.
AI is in your smartphone.
AI is going to be in your refrigerator soon.
It's not just image generation by any means.
That kind of gets to what I'm saying.
Because when you look at AI as a tool, as more of, like, a paintbrush than a painter, as a tool that will, like, augment or be used in a lot of existing technologies (because I think a number of times it may be used in a way that makes the product worse), well, that's really different from, kind of, number one, the doom and gloom, like, this is an intelligence on its own that could, like, overtake humanity.
I think the worry is more, like, that this could get adopted on such a large scale that it, like, makes a lot of shit worse.
Like my biggest fear with AI
is that it kind of hypercharges
the SEO industry
and the way that that has worked
to destroy search
and destroy so much of internet content.
Yeah, I think that is very possible.
Like, if I look at ChatGPT, I don't think that's going to be writing features for Rolling Stone anytime soon. But what it can probably do, because SEO-maxed copy is derivative, right? Like, it's predictable, it's derivative, it's based on other stuff.
It's supposed to be, yeah.
And so it can do that SEO-maxed copy, and some of that ad copy, like, very well.
And yeah, it can either really fuck up searches, which is quite possible, and also make the lowest kind of acceptable tier of that kind of copy what it can generate.
And because you can just shove that copy in front of people with SEO max
and then have shitty ad copy written by chat GPT,
that will change certainly how we buy stuff on the internet,
but also how we read news, et cetera.
Yeah, absolutely.
And I already see that.
I've written for some big publications.
You have essentially a side...
Do people know what content-driven commerce is?
Oh, yeah, yeah, yeah.
It's why every article about stuff is now "the best five X," right? Yeah. Like, they have affiliate links, and the publication will profit if you buy stuff after clicking the link.
Yeah, yeah. So, like, in the probably-2016 era, all of the stuff I... so I did a lot of outdoor journalism previously, right? Writing about climbing gear, bikes, that kind of thing. And, like, that whole industry went to just, like, affiliate links, and they kind of trashed any quality review stuff. And I can see, like, a similar change to that happening with this, right? Where people will just chase that SEO-maxed copy, and that will become the new cool thing to do, and, like, a lot of outlets will suffer as a result. But that's not the, like, earth-shattering change that people are talking about on twitter.com or whatever.
Well, one thing I saw recently is that more and more students are just using ChatGPT to look up information, like, as opposed to Wikipedia, as opposed to Google. If they have a question, they'll actually ask ChatGPT, which has a few problems as soon as you start getting into how much of the ChatGPT output is just AI hallucinations, where it's not actual information. Which, honestly, is something I should just write my own thing on in the future.
But yeah, it's just... it's a really weird problem.
That's a really interesting problem. Because I think it's very clear to me at this point that AI is a more user-friendly search experience than a search engine, right?
Because you can talk to it like a person
and explain what you need explained.
That doesn't mean it's a better option, in terms of whether it provides people with information more effectively, whether it actually tells them what they want to know as well.
But it's like easier
and maybe, like, less of an imposing task to ask an AI a question than it is to ask, like, a search engine, especially with how much worse Google has gotten lately.
Like one of the things that I found interesting as I was kind of doing digging for this,
I was looking at some AI articles that were published in like 2019, 2020, 2021.
This is before the big AI push that we're
currently all in the middle of, before ChatGPT got its widespread release. And they were talking with
some people from Google who were like, yeah, we really see AI supercharging our search results.
There's a lot of potential in its ability to help people with search. And I'm thinking: in 2019, 2020, Google was a really useful tool, and it's a shit show now. Like, it's filled with ads. Like, search results have gotten markedly worse. Everyone who uses Google as part of their job will tell you that it's gotten, like, significantly worse in the recent past. And, like, that's kind of, like, the thing that I see being more of a worry. And it's one of those things, it's like, on one hand, in the hype machine,
you have, like, AI could become, like, our new god-king and destroy us all. And on the other, like, AI is going to, you know... there's all this vague talk about, well, it could be giving people the tools to create more art than ever before, um, to, you know, make more good things faster. And I kind of feel like, well, what if neither of those things happens, and it just sort of allows us to continue making the internet worse for everybody at a more rapid pace?
What if that's the primary thing that we notice about AI as consumers?
It's probably a reasonable assumption.
I think Garrison's point was good, though, when they said that bigger companies will buy these up. Companies will just exist to get bought, right?
Which is something that's happened in tech for decades. Because, like, it can't fundamentally change things. Like, if AI is another means of production, right, if we want to be, like, grossly materialist, um... if AI is another means of production, a tool for making things, and the same people own it and benefit from it, then, like, it's incapable of fundamentally changing our material conditions, right? It just becomes another way for them to churn out shit and say that, like, this is fine.
This is what you'll get, you know, like churn out shit content on the internet or whatever
it might be.
And likewise, if AI gets caught in this kind of SEO loop, where
it exists primarily to help advertise and sell products, whether it's
as a search engine or generating mass content for the internet that's
optimized to appear higher in search results, and it's also being trained on
that, is there a point at which it kind of starts to lobotomize itself, where it's just recycling
shit other AI has written? Which also seems kind of inevitable.
This is one of those things. So one of the more famous moments in recent AI research
is this Google researcher, Timnit Gebru, who no longer works at Google and some other very smart
people, put together a paper that was, I think, generally regarded by AI folks as kind of
middle of the road.
But it developed the term "stochastic parrot," which is what people know it for, as a way of
trying to describe what these quote-unquote AIs do that's better than calling them AI. Because
part of what it was saying is that we have to look at this as kind of like a parrot: if
you say enough words around it, including enough racial slurs, it'll start repeating a bunch of toxic shit. It doesn't know what it's doing. It
doesn't have intention. It's just kind of repeating this stuff because that's what's been
fed into it. But one of the things they point out in that paper is that when you have
one of these LLMs trained on too large a data set, it becomes, number one, kind of impossible to avoid that toxic stuff,
but it also reduces the utility of the AI in a lot of ways. Because like when you have so much
data going in, it's very difficult for the humans to kind of tell how competent it is.
This is why stuff like ChatGPT involves so much human training, why they had
hundreds of people spending tens of thousands of man-hours going through responses to tell
if they made sense. It's one thing if you're,
for example, training an AI on a bunch of different medical data to try to determine
patterns in antibiotic research, right, which is a thing that LLMs have been shown to have some early utility in,
kind of helping to identify new paths for antibiotics research.
Because like we've got a lot of data, but it's also a really focused kind of data,
right?
We're not training these things on, like, all of Wikipedia and, you know,
thousands and thousands and thousands of fan fiction stories about Kirk and Mulder fucking
each other during some sort of X-Files/Star Trek crossover. We're using a fairly focused
data set to try and analyze it more efficiently than people are simply capable of.
That's a lot more useful in terms of getting good data than, you know,
just training it on half a trillion different things out there,
a lot of which are going to be lies.
But anyway, I found that interesting.
It's kind of worth noting that Gebru and a number of other people who were responsible for that got forced out by Google and kind of attacked by the industry.
Because I think there's a desperation.
And I talked about this in that episode I did last year, kind of about the fundamental emptiness at the core of the modern tech industry.
But I think there's this desperation on like, we have to find the new thing, the thing that's going to be as big as social media was, the thing that's going to deliver the kind of
stock market returns that social media did.
And that doesn't exist yet.
And AI is, after especially several years of disasters with crypto and diminishing returns in social media and honestly diminishing returns in like traditional tech because shit like smartphones have reached kind of a point of saturation, right?
You can make money.
So obviously, like you can make money selling smartphones, but you can't show exponential growth, right?
There's just not that many people who need new ones.
Yeah.
Anyway. Yeah, I feel some desperation there. I wanted to kind of close by reading you all a very funny article I found
in the Financial Times that was about the potential that the head of Europe's biggest
media group, Bertelsmann, sees for generative AI. And yeah, it interviewed a couple of
people, including a guy, Thomas Rabe, who is the chief executive of the German
business that owns Penguin Random House. And one of the things that he says in this is basically,
"I think this is going to be super great for authors." You know, there's a potential for
copyright infringement problems, but really, like, it would allow you to feed your own work into an
AI and then produce much more content than you were ever able to put out before. Like, his exact
quote is, if it's your content for which you own the copyright and then you use it to train the
software, you can, in theory, generate content like never before. Which I think is, yeah, a fundamental misunderstanding. Like, I don't actually
even think it's going to be possible to train them on airport novels.
Like, you've got James Patterson and other guys who don't write their
own books anymore.
They have like a team of ghost writers, but like having gone through a lot of AI stories,
they're not books.
Like they're not capable of writing books.
They're capable of like producing text and producing pieces of books that human beings
can edit laboriously into something that might look like a book.
But the use in that is not like filling up airports with kind of mid-grade fiction, because
I think that's even beyond these models.
It's like tricking people on Amazon.
There was a really funny quote in this article, though, where at the end of it, Rabe is like, I asked ChatGPT what the impact of ChatGPT or generative AI is on publishing.
It prepared a phenomenal text.
Frankly, it was very detailed and to the point, which he then presented at a staff event.
So there is kind of evidence that CEO jobs
could be pretty easily replaced by this.
Like, you don't actually have to know how to do anything.
Comrade ChatGPT, we agree.
It's a spinning jenny for bosses.
I love it.
Yeah.
Anyway, that's what I've got right now.
We have been doing some research
and we'll have an article out
on one of the more unsettling little side industries
that I think AI is going to create,
which is like scam children's books
that exist to make con men on the internet money
and poison the minds of little kids.
But we'll get that to you next week. Yeah, it felt like it was worth coming back to this subject because, I don't
know, it's the most apocalyptic thing people in the media are talking about, on a day in which the
entire Northeast is blanketed in poison smoke. Which seems bad.
Yeah. Well, people are talking about that now because they all live in New York
and they're freaking the fuck out.
But yeah, previous to this.
Yeah.
Anyway.
Go to hell.
It Could Happen Here is a production of Cool Zone Media.
For more podcasts from Cool Zone Media,
visit our website, coolzonemedia.com
or check us out on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. You can find sources for It Could
Happen Here updated monthly at coolzonemedia.com slash sources. Thanks for listening.