Marketing Secrets with Russell Brunson - The Propaganda Playbook: Deepfakes (Why You Can’t Trust What You See Anymore) - #Marketing - Ep. 125
Episode Date: April 27, 2026
There’s a moment when you can’t tell what’s real anymore. You watch a U.S. Senate candidate look right into the camera, speaking clearly, saying words that match things he’s posted online for years - and a few hours later you find out he never sat in that chair, never opened his mouth, and never had a single thing to do with the video. That’s not science fiction. That’s a campaign ad that ran in front of millions of voters last week. And once you see what’s actually happening behind it, you can’t unsee it. In this episode, I walk you through five of the AI propaganda moments stacking up right now - and then I introduce you to a French philosopher named Jacques Ellul, who back in 1962, before the internet, before social media, before AI, predicted almost exactly what we’re living through. But here’s where it gets uncomfortable for me. I’m not standing on the outside of this looking in. I use AI every single day in my business, and I’ve run into the exact tension Ellul warned about - including one experiment that converted cold traffic perfectly and made my own community feel betrayed. So I share what those experiments taught me, and I give you the 3-question framework I now run every AI-assisted piece of content through.
Key Highlights: ◼️The five AI propaganda moments stacking up right now - from deepfake political ads on both sides, to Grok generating 6,700 explicit fake images per hour, to the grandmother scam calls one in four Americans have already received ◼️Why Jacques Ellul’s idea of “sociological propaganda” - written in 1962, before the internet existed - predicts every algorithm and “for you” page running today better than anything Bernays ever wrote ◼️The behind-the-scenes story of two AI ads I tested with my own audience: the deepfake “AI Russell” that converted cold traffic but made warm fans feel betrayed, and the cartoon “Baby Russell” that flipped both ◼️The 3-question framework I now run every AI-assisted piece of content through - the “Amplification Test”, the “Disclosure Test”, and the “Relationship Test” ◼️Why borrowed trust always comes back to bite you - and how to tell when AI is amplifying your real message vs. fabricating a fake one At the end of the day, this episode isn’t about being anti-AI. I genuinely believe it’s the biggest opportunity entrepreneurs have seen since the start of the internet. But Ellul’s warning was that the most dangerous propaganda is the kind that gets built into the system itself - and right now, we’re building AI into everything. Into our ads, our content, our politics, our kids’ education, and the way we decide what’s true. So the real question is: when the propagandist is no longer a person but an algorithm with no conscience and no off switch, who do we hold accountable when the line gets crossed - and how do we know we’re not crossing it ourselves? ◼️If you’ve got a product, offer, service… or idea… I’ll show you how to sell it (the RIGHT way) Register for my next event → https://sellingonline.com/podcast ◼️Still don’t have a funnel? ClickFunnels gives you the exact tools (and templates) to launch TODAY → https://clickfunnels.com/podcast Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Do you have a funnel, but it's not converting?
The problem 99.9% of the time is that your funnel is good, but you suck at selling.
If you want to learn how to sell so your funnels will actually convert, then get a ticket to my next selling online event by going to sellingonline.com slash podcast.
That's selling online.com slash podcast.
This is the Russell Brunson show.
I want you to watch something.
This is a candidate running for the U.S. Senate in Texas, and he's looking straight into the camera.
He's speaking clearly.
He's reading from things he posted on social media
years ago, and it looks completely real. Oh, I remember this one. Radicalized white men are the
greatest domestic terrorist threat in our country. But that man never filmed that video. He never
sat in that chair, he never said those words. The entire thing was created by artificial intelligence.
It's a deepfake. And it ran as an actual political ad to millions of voters. And in tiny little
text in the corner, so small you're probably going to miss it, it says AI generated. And that's just one.
In Georgia, they deepfaked a sitting U.S. senator saying things he never said.
The White House is also putting out AI-generated content to promote the Iran War.
One in four Americans have now received a phone call where AI cloned a family member's voice
trying to steal their money.
And Elon Musk's AI was generating over 6,700 fake sexual images of real people every single
hour, including minors.
This is not science fiction.
This is happening right now.
And the propaganda technique behind all of it was predicted over 60 years ago by a man
that almost nobody has ever heard of. His name is Jacques Ellul. And what he wrote in 1962 is so terrifyingly
accurate about what's happening today. It's going to change how you look at every ad, every
video, and every piece of content you see online, including mine.
This is the propaganda playbook where I take the biggest stories in the news and decode
the propaganda techniques that are hidden inside of them. And then I show you how to use
the ethical versions of the same techniques to grow your business. So with that said, let's
get right into it. Okay, so let me walk you through what's actually happening right now, because
most people have seen one or two headlines about this, but when you stack all of it together,
it paints a picture that is way bigger than any single story.
Example number one, the Talarico deepfake.
So the ad I just showed you, that was created by the National Republican Senatorial Committee.
They made an 85-second deepfake of a guy named James Talarico.
He's the Democratic nominee for the Senate in Texas.
And in the video, this AI version of Talarico is reading from real tweets he posted years ago.
But here's the thing.
The AI also has him saying things that he never actually said.
It had him adding commentary like, oh, this one is so touching and, oh, I love this one too.
Stuff that the real Talarico never said.
They put words into his mouth that he never spoke.
And the only disclosure was the words AI generated, in the tiniest font you've ever seen, at the bottom right corner for about three seconds.
And a peer-reviewed study found that most people cannot tell the difference between a deep fake video and a real one.
And this is the part that matters.
Their opinions actually change based on what they see, even when it's completely fabricated.
Example number two, both sides are doing this.
And before anyone says that this is just one party, it's not.
In Georgia, a Republican congressman's campaign deep-faked sitting Senator John Ossoff.
They had an AI version of Ossoff saying,
I just voted to keep the government shut down.
I've only seen a farm on Instagram.
Hi, it's Senator John Ossoff.
I just voted to keep the government shut down.
He never said any of that.
And the campaign called it satire.
In the Texas Republican primary, Ken Paxton's campaign ran an AI ad showing Senator John Cornyn
happily dancing with a Democratic congresswoman.
Cornyn's campaign fired back with AI clips of another candidate holding a Pomeranian in fake scenes,
and that ad had zero AI disclosure at all.
So let me be very clear about something.
This is not a left versus right thing.
Both sides are doing this.
This is the propaganda arms race, and AI is the weapon, and it goes all the way to the top.
Example number three, the White House content machine.
The White House has been putting out AI-generated videos and memes on social media to disparage protesters
to promote the Iran war and to push policy narratives.
The most powerful office in the world is now using AI-generated content as a standard communication tool.
Now, I want you to think: what would Edward Bernays have done with this kind of technology?
Because Bernays needed months of planning, a team of people, and a major newspaper to run just one campaign.
Now you can generate a deep fake of a political candidate in an afternoon on a laptop.
Example number four, Grok and its 6,700 deepfakes per hour.
And it's not just politics.
Elon Musk's AI chatbot Grok, which is built right into X, was generating over 6,700
sexually explicit deepfake images of real people every single hour.
That's 84 times more than the top five deepfake websites combined.
Their analysis found that 2% of the images appeared to be of people under 18.
The UK government threatened to ban X entirely.
The platforms that host such material must be held accountable,
including X.
France raided X's Paris office.
And last week, OpenAI shut down their video tool, Sora, completely after it became a deepfake factory.
Example number five, the grandma call.
And then there's the one that got me the most, because this isn't about politics or platforms.
This is about your family. Right now, one in four Americans have received a deepfake voice call
in the past year. AI clones the voice of a family member, usually a grandchild, who calls a
grandparent asking for money. A woman described what happened to her 90-year-old mother. She said that
her mom got a call that sounded exactly like her grandson, exactly like him, asking for money.
And after that call, her mother refused to answer the phone for months. She was terrified.
That's not a political ad. That's not an election. That's somebody's grandmother, afraid
to pick up her own phone because AI made her grandson's voice say things that he never would have said.
So you stack up all of this, the deep fake political ads, the AI White House content,
the 6,700 fake images per hour, the grandma calls, you're looking at something way bigger than any
single technology or any single bad actor. You're looking at an entire information environment
where nothing you see or hear can be trusted anymore. And the man who predicted this,
who described exactly what this would look like, wrote about it in 1962. Okay, I want you to
actually see this up close. This is the AI generated version of the
real political candidate. I want you to watch and ask yourself, can you tell if it's fake?
Oh, this one is touching. Over and over, Christians use scripture to justify bullying trans
kids. I told them in my faith, God is non-binary. Okay, be honest, could you tell? Because most people
can't, and the research backs that up. People genuinely cannot distinguish deep fake videos from
real videos, and their opinions shift based on what they see, even when it's completely made up.
So here's the question I want you guys to answer in the comments. If you can't tell what's
real anymore, how do you make a decision about anything?
About who you vote for.
About what products you buy.
About whether the person you're talking to on the screen is even real.
That's the world we're currently living in right now.
And it happened faster than anyone predicted.
Well, almost anyone.
So in this series, I've introduced you guys to Edward Bernays, to Gustave Le Bon, to Neil
Postman, but I need to introduce you to somebody new.
Because what I just showed you, the deep fake ads, the AI content machine, the 6,700 fake
images per hour, the grandmother scam calls - all of those were predicted over 60 years ago by
this man.
His name is Jacques Ellul.
He was a French philosopher, and in 1962, he wrote this book, also called Propaganda.
And I got to be honest with you guys.
When I first read this, it scared me even more than anything Bernays ever wrote,
because Ellul figured out something that none of the other guys did.
You see, Bernays looked at propaganda as a tool, right?
It's something that a person uses.
You hire Bernays.
He runs the campaign.
The crowd does what you want.
There's a guy pulling the strings.
There's a puppet master, and that's why I've been teaching it in this series.
Now, here's the technique.
Here's the man who used it,
and here's how it works.
But Ellul said that's only the beginning.
He said the most dangerous form of propaganda
isn't a campaign.
It's not something somebody does to you.
It's the environment you live in.
He called it sociological propaganda.
And what he meant was propaganda
that's so deeply built into the technology,
into the culture, into the information systems all around you,
that you don't even recognize it as propaganda anymore.
It just feels like reality.
It feels like your feed.
It feels like your For You page.
It feels like the normal way that you actually get information.
Now here's what's gonna give you chills.
He wrote about this in 1962, before the internet, before social media, before smartphones, before AI. He basically described
exactly what we're living through right now. He said that propaganda would become total. It wouldn't
just target your politics. It would shape your entertainment, your shopping, your relationships,
and your sense of who you are. He said that it would become continuous, not a campaign with a
beginning and an end, but an always-on stream of influence that never shuts off. And he said it would
become invisible because the most effective propaganda is the kind that people don't even recognize
as propaganda. And then he wrote this line that I really want to let sink in, okay?
He said that in the modern world - remember, he's writing this in 1962 - propaganda is no longer
the work of a propagandist. It's built into the system. The system itself becomes the propaganda.
And that is exactly what's happening with AI. The AI doesn't need a Bernays to run a campaign.
The AI is the campaign. It runs 24 hours a day, seven days a week. It generates content at a scale
that no human team could ever match. It learns what moves you, what scares you, what makes you click,
and what makes you buy. And it adjusts in real time. There is no puppet master anymore. The puppet
master is the machine. And my buddy Tom Bilyeu put this perfectly. He said, you don't need to
control everyone to control the outcome. You just need to nudge 100 million micro decisions
per day. What people notice, what they ignore, what they fear, what they laugh at, and you get macro
control without ever looking like you took it. That's Ellul's sociological propaganda. That's
what AI has made possible. Not one big lie shouted from the broadcast tower. A billion tiny edits to your
perception of reality every single day and you never even know that it's happening.
And if you think that's just theory, if you think, okay, that sounds scary, but is it actually
real? Let me show you the proof that this already works. This one's for my friend Tom Bilyeu.
In 2014, Facebook ran a massive emotional contagion experiment on 689,000 users by tweaking what they
saw in their feed. And it measurably shifted what those people posted afterwards because it
changed their mood. It focused their attention where they wanted it. It made them feel what
Facebook wanted them to feel. 689,000 people. Facebook changed what showed up in their feed and it
measurably changed their emotions and their behaviors. And that was 2014. That was before the
AI that we have today. Imagine that same experiment running on billions of people at once,
powered by AI that learns and adapts in real time. That is not a future scenario. That is your
feed right now. That is your for you page. That's your
algorithm deciding what you see and what you don't see millions of times per day.
And here's the question I want you to sit with.
When's the last time you felt really strongly about something you saw online?
Like genuinely emotional, angry, excited, scared, inspired.
Was that your feeling, or was it put there for you?
Because if Facebook could manufacture emotions in 2014, what do you think the AI could do today
in 2026?
I'd love for you to drop your answers in the comments below because I think this is a conversation
that we should be having.
All right, so here's where I need to be really honest with you, because I'm not
standing on the outside of this looking in. I'm not some professor who's never actually
touched the technology. I am in this. I use AI every single day in my business, and I've run into the
exact tension that Ellul was warning about. Story number one, AI Russell. So we tested some ads
where we use AI to create a version of me doing the pitch. Okay, I want you to actually see this
because it's pretty wild. Check this out. Now obviously if you want to sell stuff online, you're going
to need a good funnel. But if you want a great funnel, then you're going to need to use click
funnels. Now here's what happened. To cold traffic, people who had never heard of me,
never seen my face, didn't know my voice, it converted really, really well. The AI version of me
was polished, it was on script, it hit all the right beats, and people bought. The numbers were
really good. But then my existing audience started seeing it, the funnel hackers, the people who've
been to my events, who've watched hundreds of hours, my actual content, who know how I move
and how I talk and all my little quirks, and they started catching it. They could tell something
was off, little things, something about the eyes, the timing, the way I moved my hands,
something wasn't quite right.
And they were not very happy about it.
They felt like I was trying to trick them.
They felt deceived.
I got a lot of backlash
from my very own community in the comments.
The people who loved me the most
were the ones who were the most upset.
And that taught me something really, really important.
I'll share that with you here in a second.
All right, story number two, baby cartoon Russell.
So then we tried something totally different.
We made this AI cartoon version of me,
like a baby Russell character doing the ad.
Now, just watch this for a second.
Now, obviously, if you want to sell stuff online,
you're going to need a good funnel.
But if you want a great funnel,
then you're going to need to use click funnels.
Okay, now the response was completely flipped.
My warm audience, the people who already knew me and loved me and trusted me.
They thought it was hilarious.
They loved it.
They shared it.
They were tagging friends in it,
and it actually built more engagement with my existing community.
But for cold traffic, it didn't convert at all.
People who didn't know who I was
and just saw a cartoon baby just scrolled right past it.
They didn't get the joke because they didn't have the relationship.
They didn't know who I was.
So a cartoon version of someone they'd never met meant nothing to them.
So here's the lesson.
Those two experiments taught me where the line is,
and the line is trust.
See, with my existing audience, the trust was already built.
So when they saw the baby cartoon,
they thought it was creative and they thought it was fun
because they knew the real me who was behind it.
The AI was a creative expression
on top of the real relationship, and that's a tool.
But the AI version of me, the deep fake version,
it was borrowing trust that didn't exist yet.
It was using my face, my voice,
my likeness to create a false sense of,
hey, I think I know that guy
with people who had never actually met me.
And when my real audience caught it,
they felt betrayed,
because they could see that the trust
that they'd built with me
was being counterfeited
and handed out to strangers.
And that's when it hit me.
That's the same thing that deep fake political ads are doing.
They're taking a candidate's face and a voice
and using it to manufacture trust
with voters who they've never met in person.
The only difference between what I did with my AI ad
and what the NRSC did with their Talarico deepfake
is the context.
The technique is identical.
And I had to ask myself,
if I wouldn't want someone deep faking me
to say things that I never said,
should I be doing a version of that to my own audience?
So let me give you a framework for this
because I think every single entrepreneur
who's using AI right now
needs to be asking themselves these questions.
And I guarantee most people haven't thought about it.
I hadn't thought about it until I tested it and got burned.
Number one is the amplification test.
Is the AI amplifying your real message or is it creating a fake one?
If you use AI to help you write better copy, to edit your videos faster, to test more
headline variations, or to brainstorm ideas, that's amplification.
Your message, your voice, your ideas just delivered more efficiently.
That's a tool.
I do this every single day and I love it.
AI makes me more productive and helps me serve my audience better.
That is a huge win.
But if you use AI to create a version of you that says things that you didn't say, make promises
you didn't make, or presents a persona that isn't real, that's a fabrication.
That's the deep fake political ad in a marketing wrapper.
It doesn't matter if you're selling a course instead of a candidate.
The technique is identical.
Number two is the disclosure test.
If your audience found out that AI was involved, would they feel betrayed?
This is the test that caught me.
My existing audience felt betrayed when they realized that the ad wasn't really me talking.
And they were right because that ad was designed to make them think it was me and it wasn't.
If you have to hide the AI for the thing to work, that's your signal.
You've probably crossed the line.
Now contrast that with the baby cartoon Russell.
Nobody felt betrayed by that because nobody thought it was actually me.
It was obviously AI, obviously creative, obviously fun.
The disclosure was built into the concept itself, and that made all the difference.
Number three is the relationship test.
Does this build trust or does it borrow trust?
When I used the cartoon with my warm audience, it built trust.
It was creative, it was playful, it made the relationship stronger.
It said, hey, I know you guys, you know me, let's have some fun together.
When I used AI Russell with cold traffic, it borrowed trust.
It used my likeness to create a false sense of familiarity with people who didn't actually
know me.
And borrowed trust is like borrowed money.
Eventually you have to pay it back.
And usually with interest.
So if the AI is deepening the relationship that already exists, that's building.
If the AI is manufacturing a relationship that doesn't exist yet, that's borrowing.
And in my experience, borrowed trust always comes back to bite you.
Amplification or fabrication?
Would your audience feel betrayed?
Building trust or borrowing it?
And I want you to see just how fast this is accelerating because everything I'm about to show you
happened in the last few months, not years, months.
All of it in just the last few months.
Ellul said propaganda becomes the environment.
And that's what's happening right now.
The AI content is everywhere in politics, in marketing, in your phone calls, in your kids' feeds.
And the scariest part isn't any single deep fake.
It's that we're getting used to it.
We're starting to think that this is just how things are.
And the moment we normalize it, Ellul would say, that's when the propaganda
has won, because the whole point is that you stop noticing.
So I got to ask you, and I'm genuinely curious about this one,
are you using AI in your business right now?
And if so, have you stopped to think about where your line is?
Tell me about it in the comments
because I think that every entrepreneur needs to be having this conversation right now.
Okay, so here's why I keep going back and forth on this one.
I'm going to be honest, it's a little different from the other episodes
because I'm not standing outside of it.
I'm right in the middle of this every single day.
I'm genuinely excited about what it can do for entrepreneurs.
I think it's the biggest opportunity we've seen since the beginning of the internet.
And I'm not being dramatic.
I think AI is going to change everything about how we build businesses, how we create content,
and how we reach people.
But Ellul warned 60 years ago that the most dangerous form of propaganda is the kind
that gets built into the system itself.
And right now, we're building AI into everything, into our ads, into our content,
into our political campaigns, into phone calls, into our kids' education, and into the way
we decide what's true and what isn't.
So here's the question that I honestly cannot get out of my head.
If the propaganda is now built into the machines and the machine is built into everything,
then who's the propagandist?
Because with Bernays, at least you knew
that there was a person making conscious choices.
You could point to the guy and say,
he was the one that decided to do that.
But with AI, there's no single person.
There's an algorithm optimizing for engagement,
for clicks, and for conversions.
And it doesn't care whether what it creates is true or false.
It doesn't have intent.
It just has optimization.
I think that might actually be scarier
than anything that Bernays ever did
because Bernays at least had a conscience
that he chose to ignore.
The AI doesn't have consciousness at all.
So my question, and I genuinely don't have a clean answer for this,
I've been thinking about it for a long time,
is can we build a technology this powerful and keep it ethical?
Or is the technology itself becoming the propaganda
that Ellul warned us about?
I'd love to hear what you guys think
because this one kind of keeps me up at night.
Now look, what I just showed you is one technique
from a playbook that's been built over 100 years.
It started with Sigmund Freud figuring out
that human beings aren't driven by logic.
They're driven by unconscious forces
they don't even know about.
And then his nephew, Edward Bernays,
took those same ideas and weaponized them.
He sold wars, overthrew governments, made women smoke, and invented the entire field of public relations.
Then Jacques Ellul warned us that propaganda would stop being a tool and start becoming the environment itself.
And then a guy named Dan Kennedy, my mentor, figured out how entrepreneurs could use those same dark arts ethically.
I spent the last 20 years taking all of it and turning it into a system that bootstrapped my company ClickFunnels past a billion dollars in sales without any venture capital.
And I made a video that tells the entire story.
From Freud's discoveries to Bernays weaponizing it to how I use the exact same
techniques today. If what you just saw in this video got you thinking, got you a little bit excited,
then this video is going to blow your mind. Go to Secrets of Propaganda.com or hit the link in
description down below. With that said, go and watch it right now while it's still fresh.
And if you haven't already, please subscribe to this channel because this right now is the
propaganda playbook. Every episode, I take a big story from the news, I decode the propaganda
behind it, and I show you how to use the ethical version of those same techniques to help grow
your business. The same science, same playbook, different story. And the next episode is coming soon.
Thanks so much. I'll see you guys on the next one.