CyberWire Daily - Secure Your Summer: Top Cyber Myths, Busted [Threat Vector]
Episode Date: July 4, 2025
While the N2K CyberWire team is observing Independence Day in the US, we thought you'd enjoy this episode of Threat Vector from our podcast network. Listen in and bust those cyber myths. In this episode of Threat Vector, David Moulton talks with Lisa Plaggemier, Executive Director of the National Cybersecurity Alliance. Lisa shares insights from this year's "Oh Behave!" report and dives into why cybersecurity habits remain unchanged, even when we know better. From password reuse to misunderstood AI risks, Lisa explains how emotion, storytelling, and system design all play a role in protecting users. Learn why secure-by-design is the future, how storytelling can reshape behavior, and why facts alone won't change minds. This episode is a must-listen for CISOs, security leaders, and anyone working to reduce human risk at scale.
Resources: Kubikle: A comedy webseries about cybercriminals. Oh Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024
Join the conversation on our social media channels: Website: https://www.paloaltonetworks.com/ Threat Research: https://unit42.paloaltonetworks.com/ Facebook: https://www.facebook.com/LifeatPaloAltoNetworks/ LinkedIn: https://www.linkedin.com/company/unit42/ YouTube: @paloaltonetworks Twitter: https://twitter.com/PaloAltoNtwks
About Threat Vector: Threat Vector by Palo Alto Networks is your premier podcast for security thought leadership. Join us as we explore pressing cybersecurity threats, robust protection strategies, and the latest industry trends. The podcast features in-depth discussions with industry leaders, Palo Alto Networks experts, and customers, providing crucial insights for security decision-makers. Whether you're looking to stay ahead of the curve with innovative solutions or understand the evolving cybersecurity landscape, Threat Vector equips you with the knowledge needed to safeguard your organization.
Palo Alto Networks enables your team to prevent successful cyberattacks with an automated approach that delivers consistent security across the cloud, network, and mobile. http://paloaltonetworks.com
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the CyberWire Network powered by N2K.
We've all done things with technology that we shouldn't.
There was a time in your life when you reused a password or clicked on something you shouldn't
or almost clicked on one of these malicious texts that
we're all getting all the time.
You felt the emotion spike when somebody gave you some urgent message that one of your kids
was in trouble or there's fraud on your account or something.
We've all had the emotional reaction to that and hopefully caught ourselves before we did
something.
But I think it leaves people with a sense of empathy that we all do these things; we're
not going to solve for human error.
And so designing software and systems and products that are more secure by design is
really, I think, the way forward.
Welcome to Threat Vector, the Palo Alto Networks podcast where we discuss pressing cybersecurity
threats and resilience and uncover insights into the latest industry trends.
I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42.
Today I'm speaking with Lisa Plaggemier, Executive Director of the National Cybersecurity Alliance.
Lisa is on a mission to eliminate the cliché of hackers and hoodies and bring a more human,
relatable face to cybersecurity.
Her career spans Fortune 100 brands, cutting edge cybersecurity training companies, and
leadership roles across the cybersecurity landscape. She blends psychology,
marketing, and behavior science to inspire real world change. She's also the co-author of the
annual Cybersecurity Attitudes and Behaviors Report 2024-25, a global study that reveals the
truth about how people actually behave online, not just what they say they know. The myths we're about to unpack come straight from the gap
between awareness and action.
Lisa, welcome to Threat Vector.
I am so excited to have you here.
Thank you for having me.
You've had this really unique career path,
from launching Ford roadshows across Morocco to
leading cybersecurity culture initiatives.
What's one experience from your early days in international marketing that surprisingly
prepared you for your current work in cybersecurity awareness?
I think it's understanding more about the creative process.
So one of the most interesting things that I observed in working with highly paid ad
agencies, the ones that, you know, auto manufacturers and tennis shoe companies and tequila brands
all use to sell their products, was that managing creatives is different than managing
technical people or managing administrative folks: giving them room for their brains to breathe, giving them
time for the ideation process. More than a lot of jobs, they need to sit and think.
They need to go take a walk and get ideas.
And the other thing that I observed was that no matter what the deadlines were, there's
times when you just can't force it.
You can't force it. I've been in ideation sessions where, like, the ideas just aren't flowing,
the folks in the room are not clicking.
And there's times when you just can't force it.
And you have to be okay with that.
And you have to be okay with that ebb and flow
of the creative process and allow for times
when suddenly something brilliant happens
and you know you've got a diamond and you got to run with it.
But it can be frustrating in the meantime.
So I think that until you've worked directly with creatives, it's really hard to understand
all that.
I feel that.
Um, what I sometimes tell the teams and the folks that I work with is we can let the brown water
flow, right?
A lot of times the stuff that's coming out right away
out of the tap, that's not the drinking water.
That's not the clear ideas.
And you just let it run.
It's okay, right?
And then it will come.
You just have to trust that process.
Well, Lisa, we've got a lot to talk about today.
Let's get right into it.
Okay, let's go.
Lisa, you've built your career
at the intersection of
psychology, persuasion, and cybersecurity. And now that you're shaping public
perception through the National Cybersecurity Alliance and this year's
massive 7,000-participant report, what's one finding or a moment from this year's
research that made you say, we have to talk about this?
Probably the fact that there's,
we're not seeing the curves we wanna see.
We're not seeing things get better.
So one of the most prominent examples
would be password reuse.
And it's just password habits in general:
people are using passwords that are too short.
People are reusing the same password too often. People are using insecure methods to keep track of their passwords,
they're kind of acting as their own risk managers and are doing things that they think are safer
than a password manager because they have more control, for example.
So it's the whole topic of passwords.
People hate them.
They haven't worked, right? As a means to
protect our stuff, they've been a complete, abject failure. And people just don't like
them. It was one thing 20 years ago, and you could have one password or two or three that
you remembered, and they didn't have to be that long. There was no such thing as complexity
rules. Like we all kind of got by.
And then we very quickly realized that that's not, that's not really gonna protect our stuff.
So we're big fans of things like password managers,
pass keys are a whole lot easier for people,
but by and large they've just been,
they've been a failure.
People don't like them and they haven't worked.
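Lisa's point about "too short" comes down to simple arithmetic: a password drawn from the roughly 94 printable ASCII symbols carries about 6.5 bits of entropy per character, so 8 characters is around 52 bits while 20 characters is around 131. Here is a minimal Python sketch (not from the episode, and the length and alphabet are illustrative, not a formal recommendation) of the core thing a password manager automates for you, generating a long, random, unique password per site:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# ~94 symbols -> about 6.55 bits per character, so 20 characters is
# roughly 131 bits: far beyond anything a human would memorize per site,
# which is exactly why a manager has to do the remembering.
if __name__ == "__main__":
    print(generate_password())
```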
Do you have a personal story or anecdote
that you have found works when you talk to somebody
about their short password, their password reuse,
they're saving it on sticky notes,
they're saving it in an Excel file and, you know,
in an insecure way that helps them understand
that they need to break those habits?
I have one that I use all the time about password reuse
because a lot of people, I mean,
I've even heard security professionals now kind of default
to this like, well, for your really important accounts,
you should use a unique password
and maybe for everything else, it's okay to use the same one,
which I'm not a big fan of
because people aren't great risk managers.
They're not good at assessing what's of value to a bad guy and what isn't.
So what they deem an important account is probably different than what a cybercriminal
deems an important account.
So the story that I use is this: I try to remember, as often as I can, what it was like to work
in marketing before I had any clue what this cybersecurity
stuff was all about, before I was assigned to work with the security team on thought
leadership at the company I was at.
Right, before you were really made aware of like how dangerous some of the behavior is.
Right. Before I understood the ins and outs, when I was just a normal consumer going about
my day, reusing passwords, using passwords that were too short, opting out of MFA, like things like that.
Things that people do.
I was just like everybody else.
And if security professionals will admit it, they do some of these bad things still.
We all do.
That's how so many data breaches keep happening.
They keep making mistakes with basic hygiene.
So I can remember when it happened because I was, for some reason, I think I was like
out for a walk, and I can still remember that light bulb moment when I was walking in my neighborhood
and I heard about the Yahoo breach years ago.
I can't remember what year it was.
Well we all had a Yahoo account back in the day.
Like I can still remember my AOL dial-up and the sound of the modem
and like chatting with somebody on the other side
of the earth.
Because at that time, I was living in Europe.
So I had a lot of reason to be excited
about things like that and not paying Deutsche Telekom,
you know, a dollar a minute to call the US.
And when I heard about that data breach,
it was usernames and passwords,
I just thought, who cares?
I haven't logged into that in 10 years.
I mean, if you ask anybody over the age of 50,
or anybody who was getting online in the late 90s
or the early noughts, we all had a Yahoo account.
And a lot of us, I would venture to guess,
haven't logged in in a really long time.
And I don't know if they've deleted our accounts
or they're still, I'm guessing at the time of that breach,
they were all still active or valid usernames and passwords.
And I just thought if a bad guy has access
to my Yahoo account, like I haven't used that in a million
years, I don't know what's in there.
You know, like they can have at it.
Like have fun with that.
There's nothing there that's of use, right? In my non-security
brain, I told myself, like, well, if I reuse that password anywhere, then they would have to know
where else I have accounts that I've reused it. And like, they're not going to take the time to
figure that out, right? I didn't know it was spray and pray. I didn't know there was automation and
that these guys were using technology. You have that image in your head of one person sitting at a laptop. It's the hacker in the
hoodie image. Somebody's in the dark somewhere wearing dark clothing and usually masculine.
Like there's, you know, the vibe we get from that imagery is usually masculine. And we
don't think about teams of developers like I was working with at the company
that I worked with at the time.
We don't think about them as businesses.
We don't think about them as using automation
and being really smart and being agile
and really being a mirror image of the legitimate world,
just doing what they do for illicit reasons
instead of trying to run a legitimate business.
So that's also the thought behind Kubikle,
the series that we shot, and we have season two coming out soon.
It's a video series, like watching The Office,
but it's the office of the bad guys and that's what I was trying
to provoke in people is maybe that light bulb moment of like,
oh, wait a minute, there's somebody doing this for a living.
It's somebody's job to hack me, and they're using technology to do it.
So these little things, these myths that I tell myself that it's okay to do,
the excuses I make for some of my bad habits with technology, or
maybe I don't even understand that it's really a bad habit.
That's what we're going for with that series, is that light bulb moment that maybe people understand
and they think twice and maybe down the line
with some more nudging and some more messaging
and some more education, they actually change their behavior.
Yeah, as you were talking about that Yahoo breach,
for those of you listening who are curious,
2013, undetected for three years,
three billion user accounts affected.
And as you're describing that,
it's the rainbow table.
It's the ability to say, well,
Lisa's here and here's her password.
Let's go try anywhere else that we can find
Lisa and the password elsewhere.
And I agree, I came into this after 20 years of being in design and didn't realize that
that was what was happening, that it is a business.
There are KPIs, there's a metric, they're trying to get to their revenue number, and
it's off of our mistakes.
It's off of the things that we don't necessarily think about or as you called it, these myths.
And it allows for them to still be profitable.
Otherwise this area would go away very quickly.
This tactic would dry up if we would stop doing these things
or if users would stop doing these things
or even just change to something as simple
as a password manager.
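The "spray and pray" automation described above is exactly why one reused password becomes a skeleton key. The defensive mirror image is just as automatable. Below is a hedged Python sketch that checks a candidate password against the public Have I Been Pwned "Pwned Passwords" range API; the endpoint and its k-anonymity scheme (only the first five hex characters of the SHA-1 hash ever leave your machine) are real, while the surrounding function is illustrative:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times this password appears in the Pwned Passwords corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: send only the 5-character prefix, never the password or full hash.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response is lines of "HASH_SUFFIX:COUNT"; match ours locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count.strip())
    return 0

if __name__ == "__main__":
    print(breach_count("password123"))  # a reused classic; expect a large count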
I wanna say it was an NSA story that got my attention,
and I moved from Dave's clever,
he can keep all of his passwords in his head
if he just does a little bit of changing,
and they really weren't all that different.
It was like I added a one or two, and I jumped to-
Not an exclamation point.
An exclamation point would have made
all the difference, Dave.
It was already there.
I had a fraternity room name that I used.
I can't say that on this podcast.
And then I would just be like, exclamation point one.
Anyways, I got to 44, Lisa.
Just the number of times I was asked to change it before.
I was like, this is a terrible password, I should stop.
So, you know, I think people know,
and you guys call out in the report
that people should use those unique passwords.
They know that, but in the report, nearly half,
I think it was what, 46% still reuse their passwords.
And this is wild to me.
It's kind of like saying, you know,
hey, here's the key to everything I have
in my digital existence.
And I'm gonna just make it a digital copy for everyone.
And if I lose it, then you can get into all the things.
But I don't think people necessarily get that.
You were talking about that with the Yahoo hack
and how that kind of gave you that light bulb moment.
Lisa, what is it that causes this to be like
such a persistent gap between what people know
and the actual actions, the behaviors that they take?
I think some of it is just our own belief
in our own superiority.
We all trust ourselves more
than we trust anybody else.
We all think we're smarter than the average bear.
And one of the other things we ask people is, do you think you can spot a phish?
And it's a five-point scale, and everybody's like, you know, fours and fives.
Like, except Germany. That was the one country in the report where people's confidence
in themselves to spot something malicious is far lower than in every other country in the
survey. So it was all Five Eyes plus Germany and India.
This year, later this year, it's going to be the US, the UK, Mexico.
No, yes, it's Mexico and Brazil or Brazil and Chile.
I can't remember.
Germany and India again, because the data out of India was really, really fascinating.
Their confidence in their ability to recognize things is very, very high, but their rates
of compromise are equally very, very high, for things like romance scams and just across
the board.
People's beliefs in themselves and their own methods run pretty deep.
And their own conviction of wanting to feel like they're in control, which is why they
don't trust password managers a lot of the time.
Telling them using education or some sort of awareness or whatever you want to call it these days,
any kind of communications to say to them,
no, this is the better way to do it.
Or you could, you know, let's use the phishing example.
They don't think they're, they think they're going to be able
to detect something malicious.
So just saying to them, no, you're at risk for phishing, doesn't work, because we're all contrarians.
Like you're telling me something I don't believe.
You can't just say to me, you know, yeah, you don't think you're going to fall for it,
but you could.
Like that's, that's not persuasion.
Persuasion is, there's more of an art to it than that, to persuade human beings.
And I think we're still, at least in the security community, a little too guilty of just trying
to be contrarians, trying to just tell them something that's the opposite of what they
believe and thinking somehow that's going to change their minds.
And I don't think that's enough.
It takes a lot of, I mean, for the light bulb
moment you had or the light bulb moment I had, it takes that constant drumbeat of information.
When something resonates with an individual for whatever reason, that opens their eyes and they change their habits.
Let's shift gears a little bit.
AI has introduced a new myth.
If I use AI tools correctly, they're safe.
But your findings, they suggest that most people
don't fully understand AI risks.
What kind of misunderstandings did the report uncover?
Well, first of all, we learned that there are a whole lot
more employees that are putting sensitive company information
into AI tools without their employer's knowledge.
I think it's 43 percent or something like that.
It's a pretty high percent.
The other thing we learned is that 51 percent of organizations at
the time of the survey hadn't given
employees any training on the safe use of AI.
So I think the risk there is that while we're all busy debating
policies and how to enforce them and find the right tools
and what we're gonna allow
when we're having all these conversations,
meanwhile, people are using this stuff anyway,
and finding ways to use it,
whether it's on their own device or whatever,
I would suggest that some organizations navel-gaze a little bit too much
over their policies,
and we need to have more of a bias for action, I think.
You can always go back and change things, but not taking action, not starting to train people, is the bigger risk. I remember when
I first got into cybersecurity and I heard somebody, I think I was at a conference at
a roundtable discussion or something, and somebody said, well, we have policies that aren't finished,
and I told the business I can't train anybody until the policies are finished. And I said, do you think the bad guys aren't going to
attack your people until your policies are finished?
We have this tendency to want to serialize things, I think.
And I think in the case of AI,
that's increased our risk.
I think people fundamentally think of it like a search engine. They think about
the result that they want. Their focus is on trying to solve a problem and what they're
going to get back. And they're not really thinking about what they're giving away.
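The pattern Lisa describes, employees pasting sensitive company data into AI tools without thinking about what they're giving away, is often mitigated with a scrubbing layer between the user and the model. A minimal illustrative sketch follows; the regex patterns and placeholder names are assumptions for demonstration, and a production DLP control would go much further:

```python
import re

# Deliberately simple example patterns; a real redaction layer covers far more.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),              # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                   # US SSN format
    (re.compile(r"\b(?:AKIA|sk-)[A-Za-z0-9_-]{16,}\b"), "[API_KEY]"),  # key-like tokens
]

def scrub(prompt: str) -> str:
    """Replace obviously sensitive substrings before text is sent to an external LLM."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Summarize this: contact jane.doe@example.com, key sk-abc123def456ghi789"))
```

The design point matches Lisa's "bias for action": a guardrail like this can ship while the policy debate continues, and be tightened later.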
Yeah, I think that the business model also makes it tricky, especially if you're paying for a service.
I've always had that model, and I was recently disabused of this theory that if I'm paying for something,
it's private, right?
That, you know, in that space there
is an ethical relationship between me and a service.
And I think with the chat bots
and some of the LLMs in particular,
that's a really gray zone,
mostly moving towards that's not the model.
Like you're getting amplified service,
you're getting more tokens,
you're getting faster capabilities delivered,
but neither the paid model nor the free model is really a model for privacy.
I keep coming back to is it the system design, right?
Is it not on the individual?
And have we built systems that allow you
to do dangerous things that don't feel dangerous?
You could also claim driving a car is more dangerous
than other modes of transportation, you know,
per mile kind of thing.
And statistically that's true.
And yet I think you get more anxiety out of a flight
than you do out of a drive around the corner.
But one has a higher probability of a problem.
It's the perception of your control of a situation.
When you're driving the car,
it's different than the pilot flying the plane.
Yeah. Yeah.
And so I think that going into a chat bot
and having a conversation or putting in information,
it's just you and
that chatbot and that's the edge of it.
You can't see the actual larger frame of danger.
So that's an interesting space of like, how do you make for human security and human risk
management?
Sometimes I'll see these debates pop up on LinkedIn about designing software
securely to begin with versus what the user's responsibility really is, like people should
still know to do X, Y, Z. And it's not any one individual's fault. It's system thinking, and I think we just have a long way to go. I mean,
I think those of us in security will tell you, you know, there's the old adage,
the internet was never designed to be secure. And now we're trying to play catch up. And
it's impossible. It's really, really, really hard and really expensive.
And at some point, we'll get better because we'll redesign some things.
And it's just like anything else, any sort of new technology.
You look back at some point, maybe it takes 50 or 100 years, and you look back and you
go, you know what?
We shouldn't have built it that way to begin with.
We shouldn't have designed it that way to begin with
because now we've seen all these bad things happen
and we need to rethink it.
So I think we have a long way to go yet,
but I'm glad that it's even a topic of conversation, right?
I'm glad that there's folks like Bob Lord talking about
secure by design and things like that.
So when you're talking about this idea, is it this,
is it that, and this idea of fixating
on or focusing on one area, I think it's a lesson we could take from economics, right?
You want a diversified portfolio.
You want to get a couple percent.
You want some things that are going to be slow growth and hold you over time like a
bond.
Maybe you need some stocks, maybe you need some real estate in your portfolio, but you
wouldn't say like, let's just put it all in one area.
And I think in security, when we do that,
then a very clever attacker will figure out
how to break that one thing that was so very strong,
and then it doesn't really help all that much.
Like you get to the point where a couple of years ago,
MFA was the sort of silver bullet for identity.
And then very clearly it's not, right?
Like you look at what Scattered Spider
or Muddled Libra is doing with social engineering
and they're just like going around the MFA
or making the MFA extraordinarily annoying
and getting past it anyway.
So it's like each time that we go like,
oh, that's the one thing,
that's the red flag for me.
I'm like, you're beckoning for somebody to destroy this.
Somebody's gotta break it.
It's somebody's job to break it.
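One concrete defensive response to the MFA-fatigue tactic described here (flooding a user with push prompts until they give in and approve one) is to alert on bursts of denied pushes. The Python sketch below assumes a hypothetical event format of (timestamp, user, result); your identity provider's actual log schema and a sensible threshold will differ:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # denied pushes within the window before we alert (tune per environment)

def flag_mfa_fatigue(events):
    """events: iterable of (timestamp: datetime, user: str, result: str).

    Returns the set of users who denied THRESHOLD or more push prompts
    inside any rolling WINDOW, a common signature of push bombing.
    """
    denials = defaultdict(list)
    alerts = set()
    for ts, user, result in sorted(events):
        if result != "push_denied":
            continue
        recent = [t for t in denials[user] if ts - t <= WINDOW]
        recent.append(ts)
        denials[user] = recent
        if len(recent) >= THRESHOLD:
            alerts.add(user)
    return alerts
```

This is detection, not prevention; pairing it with number matching or phishing-resistant factors addresses the "annoy past the MFA" path more directly.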
Yeah, but like you wouldn't not use MFA
just because somebody's figuring out how to get around it.
Definitely use it.
Right.
It's the same argument when people say,
well, how do you know that any kind of security education
or awareness or any of it ever has any effect?
And I think I'm, you know,
came from the world of marketing and advertising.
So I'm gonna say, well, you know,
Ford Motor Company can't tell you that their Super Bowl ads
are quote unquote effective,
but they're not gonna not do them.
I mean, because we know you have to think about the whole,
the whole picture, not just one tactic.
And, you know, I would challenge any security professional who says, well, you know, I don't
think this stuff is working.
Well, then, okay, do you want to stop?
Do you think you should just stop messaging anything about security to any of your employees?
No, I think that it's-
Well, no, I don't want to do that.
Like, that sounds, like, dangerous.
That sounds irresponsible.
Well, then, okay, do it, but do it well, you know, do a good job at it.
It's still worth doing well,
even if you're not sure that it works.
So you've talked about some storytelling,
you know, I've biased myself.
I'm a big fan of storytelling as an effective model
for getting through to people.
You've obviously used humor.
Are there other types of interventions that you've noticed
that have the long-term effects that we're all going for
with some of the training?
I think we can do better at storytelling in different ways.
So one of the projects we're looking at now is,
it's real simple.
Every Friday night when I'm going through all the streaming channels
trying to find something to watch and decide, eh, nothing looks good, I'll default and end
up watching like Dateline or 2020 or one of those things. And every time I'm like, okay,
she killed her husband. Like, what's new? It's kind of the same old, same old in the world of physical crime.
And maybe there's a little fraud thrown in there too.
Where's my cybersecurity story?
Where's my story that, because those of us who've come from the world of marketing or someplace else,
we've had a sideways path to get into cybersecurity.
That's one of the things I think that makes you make the jump
is you start to peel the onion and you're like,
holy cow, this stuff is fascinating.
And nobody knows what's going on.
Most people are not paying any attention.
And I'm even shocked, we do a lot of media interviews.
We get a ton of earned media as a nonprofit, which is great.
And I talked to a lot of investigative reporters and I'm even surprised at how little they're
paying attention sometimes, which is great.
It's an opportunity for us.
I get to, you know, drip a few little hints of what's happening out there.
And they're like, really?
I should do a story on that.
I'm like, yeah, you should.
So we're working with DHS, with HSI,
Homeland Security Investigations,
because they investigate crime committed by people
who are not in the country legally.
And some of those crimes involve technology.
I think that you're hard pressed,
any organized crime these days,
you're hard pressed to find things
that don't involve technology in some way, shape or form.
So what we're gonna focus on,
and we're also working with the Secret Service.
So one of the things we're gonna focus on
are cases where we think it's going to be easier to communicate with the
public in the 22 minutes you have in a 30-minute episode.
Cases that involve both physical crime and a physical aspect to the cybercrime.
So things like EBT skimmers, or Operation Red Hook, the story that we're looking at
with HSI that involves gift card scams. Things that have a physical, tangible aspect, because I
think that's one of the hardest things about storytelling in cyber: it's intangible.
You can't just show binary floating across the screen. People don't know what that means.
It's a trope, it's not relatable.
Instead of demystifying this topic, it mystifies it even further. And it also makes your audience feel stupid. I don't know what those ones and zeros floating around the screen are, or that
green screen that you're showing me, that you're scrolling through while somebody's,
there's a narrator telling the story. I don't know what that is, so I must be dumb.
And I don't want to feel dumb
when I'm trying to be entertained with a story.
So we're going to really focus on the tangible aspects
of some of these crimes and show how the technology
has enabled those crimes.
And I think those will be stories that,
you can go to the fridge and get a Coke
and you're not going to lose track of the story.
Like it's got to be super easy to tell and it's got to resonate very quickly if you're
doing that kind of very digestible content.
There are other things out there that communicate about this topic where we're, I think, expecting
a little more undivided attention from the audience.
And if we want to scale,
then I have to be honest about how much attention
we're gonna get.
You might be scrolling Facebook while you're watching TV.
You might go to the fridge, go to the bathroom.
Your kids might ask you for something.
It's gotta resonate in a way that accounts for the fact
that we don't have people's undivided attention.
Yeah, so both that like quick hit snackable bit,
but then something that allows you to follow through
all 15, 18, maybe 22 minutes if it's a television show,
but also kind of stews in your head
and makes you think about it.
As you were talking, we kind of need an Ocean's Eleven,
but instead of having the trapeze artist
and the guy who's able to crack the safe
and Brad Pitt who's always eating, right?
Like it's just the hackers and what they're doing.
Maybe you have like a little bit of affinity for them,
but at least it shows like it's a business and what they're doing. So maybe that's like,
I mean, look at the shows lately. I'm currently watching Friends and Neighbors
and we watched Bad Sisters lately. There's a lot of shows lately that are getting you
to really root for the bad guy. Like you have huge empathy for the criminal.
Yeah, it's pretty disturbing.
I think it started with The Sopranos and Breaking Bad
where the anti-hero, right, was the main character
and you're kind of into it even if they were awful.
And then it showed that there was a way of telling the story
from a different point of view.
Not necessarily always like the police drama where the law enforcement was chasing the
bad guys, but you're kind of rooting for the bad guy to get away.
So Lisa, I want to take it back to the report.
And if there are security leaders out there who want to really use this report, you know,
to drive the changes in their organizations
that they know they need to make,
where should they start?
What's the jump-off point for them?
Well, if you're trying to find the report,
you can go to staysafeonline.org,
or I would just Google Stay Safe Online, Oh Behave,
and that'll get you to the landing page.
You can download the report.
We'll put that URL in our show notes.
So if you're listening and you're thinking, I don't think I can remember that, just check
the show notes.
I think a lot of large organizations have very mature training
and awareness programs.
Maybe they're transitioning into human risk management.
They're looking at more sources of data.
They're using more behavioral science, like nudges to get employees to do the right thing
or to help them to do the right thing.
They're using more solutions that help employees in the moment to make a good decision.
And so I think that's all good stuff.
But I think a lot of the security communications or awareness materials that we're using aren't
making enough use of what advertisers know about behavioral science and like basic human
psychology and being more persuasive and being better at storytelling.
Because being really good at those things is really, really hard.
Not every person out there can write a really good article.
I used to teach a certification class for people in training and awareness.
And I gave everybody an assignment once to use Dr. Cialdini's principles of persuasion.
I explained the principles, and then the assignment was here's an FBI alert,
you know, one of the alerts that they put out about a particular problem, and you want
to tell your employees about this.
The CISO has said to you, you know, here's this thing we need to tell everybody, and
you can't just post the FBI notice because nobody will read it.
What is the title based on these principles of persuasion? Which one are you going to pick to use?
And how would you title the article you're going to write
that talks about that topic?
Because I'm a big old David Ogilvy fan:
when you've written your headline,
you've spent 70 cents of your advertising dollar.
If you don't write a good subject line to your email or title to your article in the company newsletter, nobody's going
to read the article, no matter how much good stuff is in there. Everybody in the room chose
to use the principle of authority: because I told you so, right? The heavy-handed, you
know, doctors recommend, you know, that principle of like,
we know better than you do. Nobody wants to be told that by an IT person. Even though it's true,
people don't, that just doesn't resonate. So the next time I taught the class, I had to say,
you can pick from any of these except the principle of authority. That's off limits. That's not compelling enough.
So I think we still have a little ways to go in using some of the advertisers' trickery
and some of the persuasion techniques that are used in the business world, in the consumer
world, to get us to buy products and do things.
We can do better. It's the story that we wrap it in
and it's the demographic that we target
that makes the difference.
I suggest that if you're curious,
you should definitely go read this report.
It's been fascinating to talk to you today.
And I really appreciate that you took time out of your day.
I know you're really busy to share your insights
and just throughout the year,
not just today on Threat Vector,
you're out there trying to make sure
that the people who need this information
are able to get it and not only in a report,
but in video with humor, with story,
and really maybe to like raise up some of the myths
so we can go like,
wait, I see myself in that thinking.
I think attaching information in different ways
allows different people to learn and change their behavior
and to start to be a little bit more safe.
And that's awesome.
I also like the fact that you've combined that like
marketing and cybersecurity and behavioral science
together for doing good.
So thanks for coming on today and sharing with me
about the report and some of your thoughts and experiences.
It was absolutely my pleasure.
Thank you so much for having me.
That's it for today.
If you've liked what you heard, please subscribe wherever you listen and leave us a review
on Apple Podcasts or Spotify.
Your views and your feedback really do help me understand what you want to hear about.
If you want to reach out to me directly about the show, email me at
threatvector at Palo Alto networks.com.
I want to thank our executive producer, Michael Heller, our content and
production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran.
Elliott Peltzman edits the show and mixes our audio.
We'll be back next week.
Until then, stay secure, stay vigilant. Goodbye for now.