The Why Files: Operation Podcast - 509: DEEP DIVE: The Dead Internet Theory | It's ALL Fake
Episode Date: November 22, 2023
What if all of our online existence is fake? You, me, everyone; we're living in a real-life Matrix. Designed to distract us from the truth: that we're just drones in a digital ant-hill. We live, work... and die so that the wealthy and powerful can grow more wealthy and powerful. This is called the Dead Internet Theory. And there's compelling evidence that it's real. Let's find out why.
Transcript
You searched for your informant, who disappeared without a trace.
You knew there were witnesses, but lips were sealed.
You swept the city, driving closer to the truth.
While curled up on the couch with your cat.
There's more to imagine when you listen.
Discover heart-pounding
thrillers on Audible. What if I told you that most, if not all, of your online existence is fake?
The articles you read, the Twitter accounts you follow, even this podcast you're listening to
right now. It's all fiction created by artificial intelligence,
whose job is to keep you clicking on content that doesn't matter and keep you buying products you
don't need. You, me, everyone, we're living in a real life matrix designed to distract you from
the truth that we're just drones in a digital anthill. We live, work, and die so that the wealthy and powerful can grow
more wealthy and powerful. This is called the dead internet theory, and there's compelling
evidence that it's real. Let's find out why. The core premise of the dead internet conspiracy says
that most internet content and the consumers of that content are fake.
They don't exist.
What I mean by that is that a large percentage of content you view online
wasn't created by a real person.
It was generated by AI.
And that includes emails, blog posts, descriptions
of online products, and social media chatter. And many online accounts are actually bots.
These bots are responsible for a lot of online traffic, like website visits, link clicks,
and video views. And the links they're clicking and the websites they're visiting? Those were also generated
by AI. So how much content on the internet is actually created by AI? Well, it's more
than you think. Studies say that only about 50% of web traffic is human, and that number is going
down every year. In 2013, the Times reported that half of YouTube traffic was bots masquerading as people.
This was so scary that YouTube employees were worried about an inflection point.
This is when YouTube's algorithms would be so overwhelmed by bot traffic that the algorithms
couldn't tell what was real. And eventually the algorithms would determine that actual human traffic was fake.
This event is called the inversion.
Ominous, right?
And keep in mind, this was a full 10 years
before OpenAI released ChatGPT,
making AI tools way more accessible to the general public.
Meaning this problem is likely much worse already.
I see bots on my YouTube channel.
Sometimes I wake up and I've got thousands of comments
out of nowhere.
And they're all very generic comments
posted by people with very generic usernames.
And they're not watching the videos.
Subscribers to my channel watch 60%, 70%,
even 100% of each video.
But the bots are watching for 10 seconds,
then leaving a weird comment and moving on to watch another video for 10 seconds.
And when I look at the channel's stats after a wave of bots,
it reminds me of locusts.
It's swift and it's destructive.
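The pattern described here, generic usernames, near-zero watch time, canned comments, can be sketched as a toy scoring heuristic. To be clear, this is purely illustrative: the username pattern, thresholds, and comment list are hypothetical assumptions, not anything YouTube or this show actually uses.

```python
# Toy heuristic for flagging bot-like viewer activity, based on the
# three signals described above: generic usernames, near-zero watch
# time, and boilerplate comments. All thresholds and data here are
# hypothetical illustrations, not a real moderation system.

import re

# Hypothetical list of canned comments a bot wave might leave.
GENERIC_COMMENTS = {"great video!", "nice content", "love this channel"}

def looks_like_bot(username, watch_seconds, video_seconds, comment):
    """Count how many of the three bot-like signals an account shows."""
    signals = 0
    # 1. Generic username: a common word followed by a run of digits.
    if re.fullmatch(r"[A-Za-z]+\d{3,}", username):
        signals += 1
    # 2. Watched only a sliver of the video (under 5% here, arbitrarily).
    if watch_seconds / video_seconds < 0.05:
        signals += 1
    # 3. Left a boilerplate comment.
    if comment.lower().strip() in GENERIC_COMMENTS:
        signals += 1
    return signals >= 2  # two or more signals: treat as suspicious

# A human subscriber who watched 70% of a 10-minute video:
print(looks_like_bot("mariareads", 420, 600, "That ending surprised me"))  # False
# A bot-like account that watched 10 seconds and left a canned comment:
print(looks_like_bot("user84739201", 10, 600, "Great video!"))             # True
```

Real platforms use far richer signals (IP reputation, device fingerprints, behavioral graphs), but the shape of the problem, scoring accounts against behavioral patterns, is the same.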
So why is this happening?
What's it even about?
Well, it's about money.
Lots of money.
Take Facebook.
It's been alleged that Facebook has been overstating its reach
and misrepresenting its data for years.
In 2018, a Facebook product manager emailed colleagues
that their metrics are, quote-unquote,
a lawsuit waiting to happen.
And they were right. A class action suit was filed on behalf of companies that paid for
advertising on Facebook. They claim that Facebook overestimates its traffic by between 150 and 900
percent. But Facebook claims they only overstate their traffic by 60 to 80 percent.
Either way, it's fake traffic.
But the money Facebook collects from advertisers, that's real money.
And if you're a business and you spend money on online ads, you want your ads viewed by actual people, not by one of hundreds of unattended smartphones playing the same video in an office park somewhere in China.
This dystopian situation plays out every day at Chinese click farms.
Hundreds of thousands of bots are currently clicking on videos,
leaving comments, creating engagement,
row after row of smartphones watching videos,
or more importantly, watching the ads.
Anyone can hire one of these digital interaction services, as they call themselves.
They can click on your IMDb page to increase your STARmeter.
They can follow you on Instagram.
The bots can visit your website to juice the number of views.
And that's helpful if your site is showing an ad someone paid you for.
Bots can download and review your app or even share fake news articles
on Facebook and X, the site formerly known as Twitter. And these operations are huge.
One click farm enterprise in Taiwan is reported to employ 18,000 people across seven locations.
Along with China and Taiwan, there are also known click farm operations in India, Bangladesh, Vietnam, Kazakhstan, Russia, Thailand, Venezuela, Indonesia, the Philippines, and South Africa.
There are also a lot of people working remotely in the paid-to-click industry.
They get paid to complete CAPTCHA challenges, watch videos, and click banner ads, and it pays around $10 a day.
Now, the platforms know this is happening, but aren't in a rush to change it.
According to the leaked emails, Facebook knows there are millions of duplicate accounts on the platform, but leaves them active on purpose.
For one thing, click farming isn't illegal anywhere in the world.
And also, there's the money. Lots and lots of money. An internal analysis claimed that removing
fake or duplicate accounts would cause a drop of 10% or more of Facebook's numbers. For context,
last year, Facebook took in $84 billion in ad revenue.
Facebook is not going to give up 20% or 10% or even 1% of that money.
The numbers are too large.
It's billions of dollars.
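For a rough sense of scale, here's the arithmetic on those percentages against the $84 billion figure above. A quick illustration, not Facebook's actual accounting:

```python
# Rough dollar values for the percentages mentioned above, using the
# $84 billion annual ad-revenue figure from the episode. Purely
# illustrative back-of-the-envelope math.
ad_revenue = 84_000_000_000  # annual ad revenue, per the episode

for pct in (0.01, 0.10, 0.20):
    at_risk = ad_revenue * pct
    print(f"{pct:.0%} of revenue = ${at_risk / 1e9:.2f} billion")
# 1% of revenue = $0.84 billion
# 10% of revenue = $8.40 billion
# 20% of revenue = $16.80 billion
```

Even the smallest figure is closing in on a billion dollars a year, which is the episode's point about the incentive to leave fake accounts alone.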
Facebook said, these allegations are without merit and we will defend ourselves vigorously.
But mere hours after the lawsuit was made public,
Facebook changed its policy to say
potential customer reach is an estimate,
not a guarantee of actual customer reach.
And they ultimately settled the lawsuit for $40 million.
So bots for profit, gross, but totally predictable.
But bots have also invaded a more personal sphere, online dating.
Turns out you can actually get ghosted by a robot.
You sailed beyond the horizon in search of an island scrubbed from every map.
You battled krakens and navigated through storms.
Your spade struck the lid of a long-lost treasure chest.
While you cooked a lasagna. There's more to imagine when you listen.
Discover best-selling adventure stories on Audible.
When everyone's chasing the same finance positions, chartered business valuators stand out.
CBVs are an elite group of trusted professionals doing everything
from deal advisory to litigation support to succession planning. CBVs are a preferred hire
in investment banking, private equity, consulting, and many other areas, with the potential to earn
seven figures at the pinnacle of their careers. If you're starting your career in finance,
check out cbvinstitute.com slash becomeacbv. Your future self will thank you for it.
Officially, dating apps are designed to help people find love and connection.
Hinge's tagline is, the app that's designed to be deleted. But of course, dating apps are first and foremost apps, and their
primary purpose is to sell paid accounts and increase engagement. To achieve these goals,
dating apps will sometimes use fake profiles complete with pictures of impossibly good-looking people to flirt with real users.
So if you're finding that a lot of your dating matches never ask you out, it might not be you.
And I'm not just saying this to make you feel better.
This practice is actually well-documented.
In 2014, the FTC went after a company called JDI Dating. They're based in England, and at that time, they operated a network of 18 different lesser-known dating sites,
like FlirtCrowd.com and FindMeLove.com.
According to Jessica Rich, director of the FTC's Bureau of Consumer Protection,
JDI Dating used fake profiles to make people think they were hearing from real love interests and to trick them into upgrading to paid memberships. And there's a lot of money at stake.
Users received messages expressing romantic interest or a desire to meet. However, users were unable to respond to these messages without upgrading to a paid membership. Membership plans cost from $10
to $30 per month, with subscriptions generally ranging from 1 to 12 months. It's bad enough to have bots on a dating app,
but in this case, the users weren't sometimes fake.
They were mostly fake.
The messages were almost always from fake,
computer-generated profiles, virtual cupids,
created by the defendants with photos and information
designed to closely mimic the profiles of real people.
JDI Dating settled with the FTC for $616,000.
More recently, in 2019, the FTC sued Match Group Inc.,
the company behind Match.com, Tinder, OkCupid, and Plenty of Fish,
and they sued them for similar practices,
though Match Group was a little more subtle about it.
The scam was basically the same.
Users could create free accounts,
but had to upgrade to reply to messages.
Match Group would then notify non-paying users
about messages from accounts the platform suspected to be fake.
According to the FTC,
millions of these notifications about interest from fake users
were sent out,
and hundreds of thousands of real people signed up for paid accounts because of those. And now,
in a twist no one saw coming, people are actually flirting with chatbots on purpose.
There are now multiple apps that allow you to engage with an AI romantic companion. These apps with names
like AI girlfriend or soulmate AI allow you to customize your partner's looks, their interests,
their personality, and people are flocking to them. Remember the movie Her starring Joaquin
Phoenix as a man in love with his digital assistant? Turns out it was right on the money.
Some of these apps are created by OnlyFans creators
and other influencers,
allowing their fans to date an AI version of them specifically.
People worry that these AI partners
will only add to the loneliness epidemic
and contribute to the already declining birth rate.
After all, dating and relationships are hard.
If there's an easier alternative for finding companionship,
some people are going to take it.
To drive this point home,
one of the most popular AI companion apps, called Replika,
says their AI partners come with no judgment,
no drama, or social anxiety involved.
Replika also allows users to receive intimate photos
from their AI companion
and to start virtual families with them.
However, that feature requires, you guessed it,
a paid account.
Human AI cyber families, now that's pretty dark.
But according to the dead internet theory,
things get much darker.
Like so many good conspiracy theories,
the dead internet theory started out in some of the darker corners of the web.
Places like 4chan, Wizardchan, and Agora Road.
The first person to put a name to it goes by the online handle IlluminatiPirate.
The original post has some pretty out there and hateful stuff peppered in,
so I'll just summarize the theory for you.
It goes like this.
Sometime around 2016, the internet started becoming sterilized and homogenized.
Content that was always generated by humans was now being generated by AI bots.
And the bots are subtle. They're designed to sound human and blend into the background.
But if we look a little more closely, eerie patterns seem to emerge.
On Twitter, there's a type of account that uses a certain formula.
First, the profile pictures aren't people.
They're usually anime characters, hearts, stars, or other generic-looking icons.
The colors are soft, usually pink, purple, or light blue.
Their posts are short, written in all lowercase, and contain the same kind of message.
I'm young. I have a crush. I enjoy simple things.
I'm optimistic, but most of all, I'm relatable.
If you search X for the phrase, I hate texting, you'll get results.
Tons of results.
For some reason, this phrase is commonly used by bots in their bios and tweets.
People who subscribe to the dead internet theory also say they've been seeing the same
content repurposed over and over again for years.
For example, doesn't it feel like every year we're slammed with articles about the
supermoon or murder hornets?
The original dead internet theory post has been viewed 295,000 times and inspired think pieces from places like
The Atlantic and podcast episodes like this one. It's likely resonating with people because
it feels right. The internet does feel more bland and repetitive than it used to. So why? Why is there so much AI-generated content and so many bots posting it online?
Well, according to the Dead Internet Theory, and this is a quote,
it's because the U.S. government is engaging in an artificial intelligence-powered gaslighting of the entire world population.
You know what that sounds like to me?
That sounds like CIA.
The original post about the dead internet theory says that a few online influencers are working with corporations
and the United States government
in order to manipulate our behavior and manipulate how we think.
Well, as far as social media platforms go, this is true.
Take Facebook again.
On Facebook, you're shown posts that you're likely to engage with.
So politics-wise, you're going to be shown an overwhelming amount
of content that supports
your worldview, which keeps you on the platform, which keeps you clicking on ads. And you're also
going to be shown political posts that make you angry, prompting you to respond, which keeps you
on the platform, which keeps you clicking on ads. You won't see a lot of posts saying,
you know, I may disagree with your opinion, but I respect and support your right to have that view.
Now let's discuss the issues on which we actually agree, of which there are many.
But dead internet theory believers take this one step further. Statistically speaking,
many of those political posts you either agree or disagree with weren't even shared by a
human. In fact, the underlying articles may not have even been written by a human. IlluminatiPirate
pointed to a startup called Narrative Science. They were working on AI-generated news
articles as far back as 2010. And one of their investors? In-Q-Tel, the investment arm of the CIA. No, seriously. In-Q-Tel
started as the idea of then-CIA Director George Tenet. Congress approved funding for In-Q-Tel,
which has only increased over the years. If you've played around with ChatGPT,
you know that it can create a wall of text instantaneously. And we know that it's
simultaneously creating text for users around the world. So it would theoretically be possible
to create news articles in real time that are specifically designed to validate or infuriate
a specific user or group. These articles can then be posted and shared by bots. This whole exercise could be
designed to keep our eyes off the real ball. The idea of a straw man fallacy is that someone is
arguing against a different idea than the one that matters. Someone who falls into this trap
is said to be fighting a straw man. Well, this is like an army of straw bots drawing us all into millions of fights that
don't matter with people who aren't real. And if this version of events is too tinfoil hat for you,
consider this. The social media platforms have plenty of incentive to run this straw bot army
without the need for a government conspiracy. Facebook profits by having you produce cortisol,
a fight or flight hormone, which keeps you clicking.
Facebook profits by having you produce adrenaline,
another fight-or-flight hormone, which keeps you clicking.
Creating and sharing content designed to comfort or enrage you
would be an efficient way to keep you on their platforms.
But how does Facebook know what content keeps you on the platform?
How do they know what's going to comfort or enrage you?
Well, you tell them.
All the time.
With every link you click, every site you visit, and how long you spend on those sites.
Now don't take my word for it.
Here's Mark Zuckerberg.
Imagine this for a second.
One man with total control of billions of people's stolen data.
All their secrets, their lives, their futures.
I owe it all to Spectre.
Spectre showed me that whoever controls the data controls the future.
Okay, that wasn't actually Mark Zuckerberg.
That was a deepfake.
But that's exactly what Facebook does.
This leads to another part of the dead internet theory. A deepfake is a computer-generated video made to look like a human.
And deepfakes are... deepfakes are getting good.
The technology uses artificial intelligence to sort through hundreds or thousands of images to find frames that match the actual person in the video.
But millions, I mean millions of people think deepfake videos are real.
Now, nobody is crazy enough to trust Zuckerberg.
But what about a deepfake of a trusted world leader?
We're entering an era in which our enemies can make it look like anyone is saying anything
at any point in time, even if they would never say those things.
So, for instance, they could have me say things like, I don't know, Killmonger was right.
Or Ben Carson is in the sunken place.
That video was a joke and created by Jordan Peele.
But what if, in today's climate, someone released a video of a politician saying something racist or radical?
Earlier this year, a research firm called Graphika discovered AI-generated news videos being posted on Facebook and Twitter by pro-China bots.
The news clips were from Wolf News, an outlet that doesn't exist,
and they were designed to promote the interests of the Chinese Communist Party
and undercut the United States.
Last year, in the early days of the war in Ukraine,
hackers broke into Ukrainian news stations with false chyron text and even
video claiming President Zelensky had surrendered.
One of these hacks was officially attributed to Belarus.
The others have not yet been solved, but Russia is suspected.
If you pay close attention, deepfakes are still easy to spot, but they're getting better
every day and no one pays that much
attention. Imagine you came across a news story about something a world leader said or did,
and the story fit within your beliefs. That story felt likely to be true to you. So how likely are
you to scrutinize the video? You're probably just going to read the headline, glance at the video
if it autoplays, accept the story as true, and go about your day,
which is what deepfake creators are counting on. Now, a deepfake is a computer simulation of a
real person, and people are fooled by that. But what about a computer simulation of a person
who doesn't exist? Could people be fooled by that? Miquela Sousa is an Instagram influencer with over 3 million followers.
Lil Miquela, as she's known, is a Brazilian-born model who posts about her glamorous LA lifestyle,
photo shoots, product endorsements, and all the typical bumper-sticker social activism that
Instagram models are known for. In 2018, Miquela's account was hacked and it was revealed that she was completely fake.
Computer generated.
Her fans couldn't believe it.
Eventually, a media slash marketing company called Brud confessed that she wasn't real.
Since then, she's amassed about two million more followers.
She's also signed with CAA, one of the biggest agencies in Hollywood.
It's been reported that she makes over $10 million a year.
And she made out with Bella Hadid in a really weird Calvin Klein ad.
Now, most of her fans know she's fake.
And they don't care.
And Miquela doesn't bring it up.
I think this is more dangerous than you might think.
This is a completely fake, computer-generated character created by humans.
For profit.
Brud, the company behind Lil Miquela, is backed by Sequoia Capital,
the premier Silicon Valley VC firm.
They obviously use Lil Miquela to sell brands and products,
but Sequoia Capital has invested in major companies like Apple, NVIDIA, and Zoom.
They must see a bigger potential upside here than a $5,000 brand deal.
Influencers don't just influence people to buy stuff.
They also influence things like culture.
Along with Calvin Klein, Lil Miquela also promotes political causes.
What does it mean to give this sort of power over culture to wealthy venture capitalists and AI?
The dead internet theory also suggests that deepfake technology might be more advanced than we know.
If this was true, it's possible that Lil Miquela might not be the only computer-generated influencer.
Think about it. How can you prove the Kardashians are real? The vast majority of us will never actually see them in person. And even if we did, who's to say that person wasn't a hired body
double? It would be easy to write off any minor discrepancy between the real person and the
computer-generated celebrity with all of the filters and airbrushing and VFX. Now, I'm not seriously claiming the Kardashians aren't real.
I mean, I hope they are. I actually like that show and I'm not ashamed to admit it.
But the point is, the technology is already getting to a place where these things are possible.
This stuff is scary, but so far we've mostly been talking about online media.
It's called the dead internet theory, after all. But computer-generated fakery is starting to cross
into the real world. Jennifer DeStefano received a terrifying call while her daughter was on a ski
trip. She answers the phone and hears, Mom, I messed up. Jennifer asks her daughter what happened.
Then a man is heard.
Lay down.
Put your head back.
Listen here.
I have your daughter.
You call the police.
You call anybody.
I'm going to pop her so full of drugs.
I'm going to have my way with her then drop her off in Mexico.
You're never going to see her again.
Then they demanded a million dollar ransom.
But here's the thing.
Jennifer DeStefano's daughter was safe and sound.
It took a few terrifying minutes to reach her,
but she was totally fine and had no idea what all the fuss was about.
Phone scammers are using AI to clone the voices of their victims' loved ones.
Then they call these loved ones and demand a ransom.
Now, you might think you'd never fall for this,
but according to the FTC,
last year Americans lost $2.6 billion in imposter scams.
AI only needs a few minutes worth of recordings
to clone someone's voice.
And these days, who doesn't have 10 minutes worth of video
posted publicly online?
And if that wasn't real enough for you, how about this?
There is a piece of technology that's completely autonomous.
It runs on artificial intelligence, and it's specifically programmed to kill humans.
There is a drone quadcopter called Kargu-2, produced by defense contractor STM.
Developed in Turkey, Kargu-2 uses machine learning to classify and identify threats.
Then, completely on their own, swarms of drones working together will attack their target.
According to the UN, the drones are programmed to attack targets without requiring an operator.
In effect, a true fire, forget, and find capability.
They just analyze a bunch of data and decide, yes, that's a murder target, with no human checking their work.
Since 2018, they've been deployed by the Turkish military both domestically and abroad.
Kargu-2s are currently killing people in Ukraine. How many? They won't say. Now let's put these
pieces together. As the algorithms get better and better at showing us content to keep us engaged,
what's to stop those algorithms from actually creating the content to keep us engaged? Well, nothing.
This very podcast episode could have been generated
by voice cloning AI trained on my YouTube videos,
then uploaded by a bot that hacked into our podcast feed,
possibly to create more and more doubt
about what's real and what's not,
or possibly to distract you from something horrible
going on in the real world.
And isn't that the natural progression of this?
A Google whistleblower has said that Google has algorithms that can write other algorithms.
Google's AutoML Zero does exactly this.
And Google says the allegations are false.
And if artificial intelligence can create social media accounts,
attract millions of followers, generate billions of dollars, influence elections and drop bombs on people all without human intervention.
Well, the Internet really is dead, and real living people, people like you and me, are here to do nothing more than feed it our money, our data, and our knowledge,
so these systems can become even smarter and even more powerful.
If you think online culture is toxic and fake now, wait until we're spending all of our time there.
A year ago, everyone started talking about the metaverse, a virtual reality-driven world that we would spend all of our time in.
We'd work in virtual offices and shop in virtual malls, meet our friends in virtual coffee shops, even if they live on the other side of the world.
Most of the metaverse hype has passed as people realize that building it will take a lot of time, money and computing power, but it's probably still where things are heading.
And the more of our lives we bring online, the more susceptible we are to AI-powered scams and gaslighting.
When you go to the store now, you know the employees are real people. But if you shop
online and chat with or even video call with an employee, you'll have no clue if that avatar is
being controlled by some call center employee overseas, or maybe being controlled by AI.
A Zoom call with a colleague or loved one could be a deep fake.
The metaverse is going to make us long for the good old days of the Matrix.
So where's Neo when we need him?
Thank you so much for hanging out with me today. My name is AJ. This has been The Why Files.
If you had fun or learned anything, do me a favor.
Leave the podcast a nice review.
That lets me know to keep making these things for you.
And like most topics I cover on The Why Files,
today's was recommended by you.
So if there's a story you'd like to learn more about,
go to thewhyfiles.com slash tips.
And special thanks to our patrons who make The Why Files possible.
I dedicate every episode to you, and I couldn't do this without your support.
So if you'd like to support The Why Files, consider becoming a member on Patreon.
For as little as $3 a month, you get all kinds of perks.
You get early access to videos without commercials.
You get first dibs on products like the Hecklefish talking plushie.
You get special access on Discord.
And you get two private live
streams every week, just for you. Plus, you help keep The Why Files alive. Another great way to support is to
grab something from The Why Files store. Go to shop dot thewhyfiles.com. We've got mugs and t-shirts
and all the typical merch. But I'll make you two promises. One, our merch is way more fun than anyone
else's. And two, I keep the prices much lower than other creators.
And if you've followed The Why Files for a while,
you know it's important to me to keep the cost to you as low as possible.
All right, those are the plugs and that's the show.
Until next time, be safe, be kind, and know that you are appreciated.