Right About Now with Ryan Alford - AI Productivity vs AI Security: The Human Risk Behind AI with Fable Security
Episode Date: March 17, 2026
Generative AI is being adopted faster than almost any technology in recent history. The productivity upside is massive, but so are the security implications. As companies rush to integrate AI into everyday workflows, many are discovering that the biggest risks aren't always technical; they're human. In this episode of Right About Now, Ryan sits down with Nicole Jiang, co-founder and CEO of Fable Security, to unpack how organizations can embrace AI while protecting sensitive data and minimizing human-driven security risks. Nicole explains why AI adoption is creating a divide between companies moving quickly and those falling behind, and why security strategies must evolve just as fast. They also explore how human behavior often becomes the weakest link in cybersecurity, from employees unintentionally sharing sensitive information with AI tools to organizations failing to clean up messy data systems before adopting new technologies. Nicole shares how companies can rethink security training, why traditional cybersecurity tools miss the human layer, and simple steps individuals can take to practice better AI hygiene in their daily work.
Topics Covered
- Why AI adoption is accelerating faster than most security frameworks
- The growing gap between AI-enabled companies and slower adopters
- How human behavior creates new cybersecurity risks
- What sensitive information employees accidentally share with AI tools
- Why data hygiene matters before adopting AI systems
- The rise of AI-powered phishing and social engineering attacks
- How companies can balance innovation with security
- Why traditional security tools struggle with human risk
- The difference between security training vs. real-time coaching
- Practical tips for building better AI security habits
Sponsors
Wix
Building a website and need a little help? Go to wix.com/harmony. That's wix.com/harmony. Start your website today!
Connect With The Guest
Nicole Jiang, Co-Founder & CEO, Fable Security
Website: https://fablesecurity.com
LinkedIn: Nicole Jiang
Fable Security builds human-risk security platforms that help organizations identify risky employee behavior and deploy targeted interventions to improve cybersecurity practices.
Connect With The Host
Ryan Alford, Host of Right About Now
Website: www.RyanisRight.com
Instagram: @ryanalford
LinkedIn: Ryan Alford
Transcript
When AI processes a prompt, even a clear prompt, it has to process tokens.
There's a cost associated with every prompt or every question.
One thing OpenAI, I believe, has said is there are people, and I'm one of those people.
I'll be like, can you please do this thing for me?
Thank you.
And they said, if you add please and thank you, it takes up more tokens.
Just be direct.
Just remove all the pleasantry.
That's crazy for me.
I'm Canadian.
I say sorry.
I say thank you.
I say please.
All the time.
Turns out that's not good for AI.
It's costly.
This is right about now with Ryan Alford, a radcast network production.
We are the number one business show on the planet with over one million downloads a month.
Taking the BS out of business for over six years and over 400 episodes.
You ready to start snapping necks and cashing checks?
Well, it starts right about now.
Companies are adopting generative AI tools faster than almost any technology we've seen.
The productivity upside is real, but so are the risks.
Today's guest is working at the intersection of AI adoption and cybersecurity,
helping companies understand why the biggest threats aren't always hackers; they're human behavior.
Nicole Jiang is the co-founder of Fable Security, and today we're talking about how organizations can embrace AI
without accidentally exposing their most valuable data.
Nicole, welcome to Right About Now.
Thanks for having me.
Great to meet you.
I know you're in Boston today.
We're in South Carolina.
We got the East Coast covered, and we're going to talk all things AI security.
It's funny, Nicole, this is something that's been on my mind.
I was having flashbacks.
When the internet started getting going and we started Googling everything and we started putting
all our stuff on social media, it dawned on me.
This was probably like 2004 or five.
We were sure openly giving up a lot of information.
What is happening to all this stuff?
And now I've sort of had the same epiphany a few weeks ago, which is why I love having
the opportunity to have you on the show.
It's like, yeah, maybe we need to think about this a little more than we are.
Not slow it down, but just be aware.
I'm sure we love looking at things we've posted 10, 15, 20 years ago; it's a good throwback,
but also just like, oh, wow, there's a lot of things on the internet that we shared.
Yeah, exactly.
And so now we're telling AI our deepest secrets so it can help us solve puzzles, write contracts,
all that stuff.
And it's like, well, where is that data going?
I have a feeling you might tell us.
But Nicole, I'm anxious about this topic because it's so real for me.
We own businesses and use it.
So let's set the table, give everyone a little bit of your background,
what got you into Fable Security and cybersecurity.
and all that good stuff.
My name is Nicole Jiang, co-founder and CEO of Fable Security.
We've built a human risk platform that shapes secure employee behavior.
Behind the scenes, we leverage a mix of AI and ad-tech approaches to understand employee behavior
in an enterprise.
We deploy just-in-time, personalized interventions
when we see employees doing things that might expose more risk than necessary
for the organization. We do this better than typical annual security compliance training,
which is what the industry status quo looks like.
Our approach is relevant, personalized.
It really drives at the risky behavior at the time things are happening, almost like a just-in-time coach.
The reason why we started this business had a lot to do with my previous background with me and my co-founder.
Saney, my co-founder, came from an ad tech background.
So, making ads super-relevant, super-clickable, converting people into buying things they didn't even know they wanted.
Our shared background also came from Abnormal Security, another startup that we were founding members of.
There, we leveraged AI to look for phishing attacks.
Phishing has become super prevalent in today's age.
AI, unfortunately, supercharges attackers.
They literally have the tools to send you really targeted phishing threats.
And Saney and I both realized that we really want to focus on the human layer, to teach people to
better defend themselves from not just phishing, but also all sorts of social engineering
attacks, all sorts of things that people may do that introduce risk, not just for the
enterprise, but also for themselves.
And so that's the reason why I became super interested in not just cybersecurity, but
ways to protect people and ensure that we can all be more productive and more effective as we work.
I'm glad we have people like you.
Some people get annoyed about tech security back channels.
Oh, they're putting the guardrails on everything.
I tend to be a rule breaker, but I actually really appreciate the people that put the
guardrails up that need to be there, to protect us from ourselves and especially from the bad guys.
Thank you for the service to our cyber community.
This stuff is, I think about the curve of
the internet and then social media and then the speed with which we could do these things as bandwidth
increased. AI is all that on steroids. It's moving so fast. Are companies moving faster than
their security frameworks? When I look at customers that we serve today, we're seeing an interesting
divergence. Among companies that have been in this industry for the past 10 years looking at cyber,
I see a lot of companies going through digital transformation. So if you ask 10 years ago, it's like moving from
on-prem set up to cloud, right? So that was a big shift. We're seeing companies that are forward-thinking,
they're more mature, they are really technology-driven. They're getting so much more upside from
AI. And then we're also seeing companies that might be generational; they've been around over a hundred
years. They have a solid business. They're maybe not as tech-enabled or digitally transformed. They're a
bit slower. So we're seeing adoption curves in various ways. And so that's the shape of the companies.
And then from a security perspective, it's also around how companies view security. Everyone needs security.
With compliance requirements, there's a baseline. There's a lot of now common language on what needs to be done at a security level.
But I do see that companies who are really thinking about investing in digital AI technology transformation, they really double and triple down on their security investments.
And then some may still be checking the box. I really see the divergence of companies based on their tech savviness, their belief in modernizing.
Now, for the particular companies that are really adopting AI, I think it really depends
on the people. People in the company, for example, if you're a very developer-centric company,
you just see a ton of crazy things that people can do with AI. They tinker and they try.
And the organization allows them to do that, right? They take on the risk for innovation, and the
trade-off might be risk. Some other ones, they're worried; for example, healthcare
companies, you're worried about HIPAA; in financial, you care about PCI and PII. There are real financial
business consequences if you tinker too much. We also see other companies that will create a playground or
sandbox for people to play in. And then it's a balance of letting people try things out, innovate,
but also not break the bottom line for the business. We're seeing all of those, but we are
seeing more and more companies whose time is just spent on trying AI, being more productive.
And if you're in this marathon pace, some are at the beginning at one end and some are kind of
trailing. And I think that space will become larger as we go. Some people have to go slower due to
risk, risk tolerance, data. And then some of
them are stodgy and need to be moving faster.
They're going to be irrelevant.
And then there's the ones that are moving really quick that are really nimble.
I can speak to small business, but I talk to Cisco, I mean, some of the largest corporations
of the world.
So I'll speak from kind of both ends of it.
As a small business, I had 18 to 20 employees in 2021.
I really grew faster than I wanted to.
I worked in Manhattan.
I had a team of 100 people directly or indirectly reporting to me.
I didn't want to. I didn't start my business intending to manage
that many people.
Part of it was intentional kind of scale back.
But what the last couple, three years has done is not replace those people, because
I kind of scaled the business away from them.
So it wasn't, oh, I just replaced all these people with AI.
No, it allowed me to do some things to maybe accelerate the de-scaling because I could take on more.
I now have agentic AI throughout my businesses as a small business.
And I know I'm probably ahead on the small business curve because I've worked nationally and have this background.
But I also have started to pause and go, okay, where's all this data going?
And I know OpenAI or ChatGPT, okay, they've got security measures.
I've read as much as I can stand of the legal jargon that's in all this stuff.
It's funny because I own a publishing company on the podcast network side.
And then I own an agency.
And so I'm developing tools thinking about that data.
And then I've got the podcast network side with publishing.
And I'm going, well, what about all this content?
How's that being digested and then used?
And why aren't we getting paid for it?
That's a whole other topic.
There's a lot of people asking these types of questions right now, like myself, at all levels,
which is, this is great.
I'm comfortable moving fast, but sometimes you don't know what you don't know.
But I know I don't know something that I might should know about where all this data is going
and what I need to be thinking about.
That's why we got you here, Nicole.
What do we need to be thinking about, and what kind of sensitive information are people
accidentally sharing with AI tools?
Hey, guys, if you've ever built a website before, you know how quickly it
can turn into a time suck.
Recently, I've been playing around with Wix's new hybrid editor called Wix Harmony.
You basically start by telling it what you're trying to build.
You prompt it to generate a professional grade site just like you want it.
And here's the part I like.
You can easily go back and forth between AI and hands-on editing whenever you want.
The AI agent, ARIA, is an expert in website design and business.
It can answer questions or perform direct actions throughout the process,
which has been huge for me
as I'm trying to perfect the look of my website.
You've also got built-in tools for selling, bookings, and marketing.
Pretty much all the stuff you actually need once the site's live.
If you're building anything right now, a side project, brand, business, whatever,
Wix Harmony honestly makes it easier to get out of your own way and start making stuff happen.
Go to Wix.com slash harmony.
That's Wix.com slash harmony.
Start your website today.
I have a couple of thoughts on this, and I've been thinking quite
a bit about it. If you think about AI in its purest form, not even the technical jargon,
what does AI do for you? AI is giving you insights faster than a human
analyst. Instead of you reading 20 blog posts to surface the insight, you can now say, read all of them,
give me the insight, ask me a question, ping pong, be my reasoning partner. AI can automate certain
things for you. Before, you had to do steps one through 20 to complete a task. Now you can automate
those things running in the background. These things are human-instructed. The way you want some
things to be done requires hyper clarity on the outcome you're looking for. And AI is just extracting
data, content, information that you have in your system today. So when I think about the two risks,
if you don't know what you're looking for and you ask a stupid question, that exposes risk in ways
that you may not expect. An employee does this, and you go, why are they asking this question? Can they? Should
they be asking this question? Damn, if they ask this question, they might get the answer now, and the answer
is so easy. This is like a net new set of things that folks are worried about. The other piece is, I think
AI also makes it, if you think about it, right, every AI service wants faster adoption. So they say,
integrate with your whole tool stack. We can do all these things. People don't think, no one thinks
about permissioning. No one thinks about data. But they think about adoption, easy click, one click.
If your house is not clean, like if your database is not clean, if your systems are not clean,
a lot of companies don't think about that. You can ask a question and you get the answers because
the underlying data is chaotic on its own. When I see our customers, and maybe for you, Ryan,
getting your jobs done, getting your tasks done, it's awesome. But then who did the data cleanup
in the first place? There's also just this fundamental, regardless of AI, regardless of how we use it,
there is still a fundamental data hygiene, security hygiene thing that we've got to figure out.
All these advancements are putting the security foundation to a massive test. And attackers know that.
And so they really try to exploit now with additional vectors in a faster way through AI, through prompting.
That's why when I think about security practitioners, they might feel a little stressed,
knowing that their house needs to be in order to support all this evolution of technology.
Superhero movies, it's like, well, who has the superpowers?
Who are the mutants?
The good guys and the bad guys have the same superpowers.
Who's using them best?
The bad guys are using these tools to do all this other stuff, to advance their criminal behavior,
or they're sneaking around, or whatever they're doing, no matter how nefarious or non-nefarious it is.
As the good guys, we've got to use these same superpowers to both protect it and to use it for
what we're ultimately using it for. It's like anything else. The bad guys seem to always be
at least even and sometimes one step ahead of us. And not that we're all perfect. I'm 100% sure
I'm not a criminal. I've been called a cheater in a few board games every now and then. But that doesn't
mean it was true, Nicole. I just like to win. It's like I feel like we're playing chess, right?
Attackers can be offensive. We're defensive. And it's just the mindset is kind of different.
And so that's why in cyber you also see offensive teams trying to break systems ahead of time.
We can think like attackers too. And actually, AI unlocks that, really. I think
it's a huge value add for security teams.
But also, unfortunately, the attack surface expands.
We also just have to work really hard and be very creative when it comes to how to better protect.
You mentioned something about being direct about what outcomes we want.
And usually it's my own lack of clarity that causes the AI's bad behavior or bad outcome that I didn't want.
But anyway, I digress.
I just wanted to come on this episode, since it made sense, and admit that sometimes I'm mean to my AI trying to get it to be more efficient.
When AI processes a prompt, even a clear prompt, it has to process tokens.
There's a cost associated with every prompt or every question.
One thing OpenAI, I believe, has said is there are people, and I'm one of those people.
I'll be like, can you please do this thing for me?
Thank you.
And they said, if you add please and thank you, it takes up more tokens.
Just be direct.
Just remove all the pleasantry.
That's crazy for me.
I'm Canadian.
I say sorry.
I say thank you.
I say please.
All the time.
Turns out that's not good for AI.
It's costly.
And I'm from the South.
So I kind of do the same thing.
I do.
I add the pleasantries on the front end, but then when I'm mad at it, I'm like, you know,
this is really costing me more time and hours today.
Your whole purpose in life is to save me time and energy.
And all you've done is, anyway, what I'm hearing from you is that's great.
Maybe you felt better.
It's just costing you more money because you're just using tokens.
It's costing you money.
But, hey, if it works, it works for you.
Like, ultimately, it's about you.
It's less about the AI.
But, yeah, just to say: don't use too many prompts, make it short, be efficient, walk me through what you're
going to do differently.
I love it.
Hey, good tip for anyone out there. I'm not the only one. I'm just the only one that admits things.
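Nicole's token point can be sketched in code. The `rough_token_count` helper below is a hypothetical proxy used only for illustration; real services like OpenAI tokenize with subword schemes (e.g. the tiktoken library), so exact counts will differ, but the direction of the effect, pleasantries adding tokens, holds either way.

```python
import re

def rough_token_count(prompt: str) -> int:
    # Crude stand-in for an LLM tokenizer: counts words and punctuation
    # marks separately. Real tokenizers use subword units, so absolute
    # numbers differ; this only illustrates the trend.
    return len(re.findall(r"\w+|[^\w\s]", prompt))

polite = "Can you please summarize this article for me? Thank you!"
direct = "Summarize this article."

extra = rough_token_count(polite) - rough_token_count(direct)
print(f"The polite prompt spends roughly {extra} extra tokens.")
```

On hosted models, every one of those extra tokens is typically billed or rate-limited, which is why trimming pleasantries from high-volume automated prompts can add up even if it feels rude.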
As we close out here, Nicole, I always like quick tips, actionable stuff.
For anyone that's listening, small, medium. We've got executives running big companies. We've got
entrepreneurs running startups. Maybe a handful of things, easy things people could do to have
a little more AI hygiene in their cybersecurity.
We're all going to be superior prompters as we acquire skills in the AI world.
My recommendation is, number one, it's totally fine to be curious, but number
two is also just to ask the question of, should I be concerned about the data? AI, can you please
sanitize my data, sanitize my queries, and make sure that the AI can do the security-minded work
for you. I think the next thing is, remember, don't give out your credit card information,
don't give out your passport, don't give out your blood type, Ryan. Let's just do the same thing
in the AI world and make sure that the information you care about, just ask for it to omit,
and AI will do the job for you. Regularly go through: hey, are things being shared that
really shouldn't be? AI can probably find that out really quickly
for you too, and then you can take action.
Those are good hygiene habits to just ask and prompt for, almost like part of your regular workflow.
Those are good.
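Nicole's "sanitize before you share" habit can also be enforced locally, before a prompt ever reaches an AI service. The sketch below is a hypothetical, minimal redactor: the regex patterns and the `scrub` helper are illustrative assumptions, not a complete data-loss-prevention tool.

```python
import re

# Illustrative patterns only; real deployments use far more robust
# detection (checksums, context, dedicated DLP tooling).
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> str:
    # Replace likely sensitive values with labeled placeholders so the
    # AI still gets the question, just not the secrets.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub("Dispute this charge on card 4111 1111 1111 1111, reply to ryan@example.com"))
```

Running every outbound prompt through a scrub step like this turns "ask the AI to omit it" into a default, rather than something each employee has to remember in the moment.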
It's so funny.
Everything's meta with this because AI can assist in whatever thing we're trying to solve that
might be related to AI.
It's sort of, I don't know, this meta circle.
I always find myself in.
I'm like, I'm trying to worry about this with AI, but can AI help me?
And it seems like it can.
AI is like your reasoning partner and just does so much.
I'm really excited about the outcome, the future of where this technology can go.
Nicole, tell everyone
where they can learn more about Fable Security and yourself, stay in touch, or catch anything you might be sharing.
You can learn about us at fablesecurity.com.
Our website shows a lot about what we do when it comes to protecting human risk, understanding employee behavior,
and figuring out ways to share targeted interventions that can elevate your overall security hygiene, whether it's AI adoption, whether it's sharing sensitive data, or whether it's just defending against external threat actors. We're based in San Francisco,
California; our office is right in the heart of downtown.
We work in person.
We love being there to collaborate.
So if you're ever in town, our door is open.
I really appreciate you for coming on.
Let's do it again soon.
Absolutely.
Let's stay in touch, Nicole.
I'd love to have you on every now and then.
This is a very topical thing, a real thing.
As things evolve.
Yeah.
Have a fabulous rest of the day.
Thank you.
Great to meet you.
Thank you.
Thank you for having me on the show.
Yeah.
It was a good time.
Take care.
Thanks.
See you.
This has been right about now with Ryan Alford.
A Radcast Network production.
Visit Ryanisright.com for full audio and video versions of the show
or to inquire about sponsorship opportunities.
Thanks for listening.
