CyberWire Daily - AI and cyber practicum [CISOP]
Episode Date: December 9, 2025
In this episode, host Kim Jones examines the rapid rise of enterprise AI and the tension between innovation and protection, sharing an RSA anecdote that highlights both excitement and concern. He outlines the benefits organizations hope to gain from AI while calling out often-overlooked risks like data quality, governance, and accountability. Kim is joined by technologist Tony Gauda to discuss why AI represents a fundamental shift in how systems and decisions are designed. Together, they explore AI-driven operations, cultural barriers to experimentation, and how CISOs can adopt AI responsibly without compromising security. Want more CISO Perspectives? Check out a companion blog post by our very own Ethan Cook, where he breaks down key insights, shares behind-the-scenes context, and highlights research that complements this episode. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
This exclusive N2K Pro subscriber-only episode of CISO Perspectives has been unlocked for all Cyberwire listeners through the generous support of Meter, building full-stack zero-trust networks from the ground up.
Trusted by security and network leaders everywhere, Meter delivers fast, secure-by-design,
and scalable connectivity without the frustration, friction, complexity, and cost of managing
an endless proliferation of vendors and tools.
Meter gives your enterprise a complete networking stack, secure wired, wireless, and cellular
in one integrated solution built for performance, resilience, and scale.
Go to meter.com slash CISOP today to learn more and book your demo.
That's M-E-T-E-R dot com
slash C-I-S-O-P.
Welcome back to CISO Perspectives.
I'm Kim Jones, and I'm thrilled that you're here for this season's journey.
Throughout this season, we will be exploring some of the most pressing problems facing our industry today
and discussing with experts how we can better address them.
Today we are looking at how rapid innovation around AI
can introduce unplanned risks into an enterprise.
Let's get into it.
I was eating dinner with a friend at the RSA conference this year.
After taking the time to catch up,
we began to discuss artificial intelligence
and its application in most businesses.
My friend is an innovator by nature.
As such, his focus tends to be to look at the benefits
of a new technology first.
He quickly rattled off a list
of potential benefits of AI.
One, improved productivity.
AI can automate mundane tasks,
allowing employees to focus more time
and energy on more complex and creative tasks,
thus improving the overall productivity of an organization.
Two, improved customer experience.
With the analytic capability of generative AI,
it is possible to create highly personalized and responsive customer experiences
without the need for humans at the other end of the exchange.
This could potentially lead to higher customer satisfaction rates
and ultimately increase sales.
And three, data analysis and insights.
Generative AI excels at data analytics.
The time frame for turning data into information and then intelligence can therefore be shortened.
Further, these insights may be potentially deeper, as AI engines recognize patterns and anomalies with more
efficiency. Whereas my friend is an innovator, my nature is that of a protector. While I agreed
with my friend's insights, I focused on the challenges with operationalizing AI within any
environment. One, clean, normalized data. This is a must for any AI implementation, yet is a
struggle for many organizations who are leaping into the AI fray.
Two, exceptional data governance.
Lack of good data governance to include pristine knowledge of data pipelines can lead to
inadvertent data poisoning or worse, inappropriate leakage of data.
Most organizations continue to struggle with data governance, eschewing the detailed measures
and approaches needed in an AI-driven environment to ensure safety for fear that such
measures will impede progress.
3. Recognizing bad results.
In 2024, an article in Scientific American described the technical term for what AI does as
BSing.
While respecting the author's premise, it is, I believe, a tad harsh.
AI is performing predictive analysis based upon a series of inputs and drawing conclusions.
Unfortunately, just like human beings, AI is subject to
error and misinterpretation.
4. Accountability.
In some respects, this is an extension of the data governance concern I raised above,
but I believe it merits special attention.
If your AI agent, specifically in an agentic AI situation,
makes an error or hallucinates, who is accountable for that error?
More importantly, how do we minimize the likelihood of such an error causing material issues
within the enterprise?
Five, infrastructure upgrades.
Tying AI into systems and workflows not designed for it
will lead to short-term and hopefully not catastrophic failures.
And six, costs.
IT infrastructures are accustomed to being told to do more with less.
As such, most organizations will not add AI dollars into IT budgets
and just expect CIOs to figure it out.
This will mean trade-offs.
While this is nothing new,
the newness of AI means that many organizations
will not know or understand the nature or depth of the trade-offs
that need to be made until they are in the midst of their AI journey.
When pressed on these concerns,
my friend went into an all-too-familiar reiteration of the benefits.
While he eventually acknowledged the potential concerns,
he offered no insights as to how these risks could be mitigated.
This discussion is not atypical of some of the discussions going on within enterprises today around AI and AI usage.
While all agreeing, myself included, about the benefits of AI, most organizations are whitewashing risks and costs and leaping into implementing something, anything actually, just to say that they are AI enabled.
I'm almost reminded of the I-Need Agile phenomenon a decade ago.
Is AI here?
Yes.
Should we embrace it?
Definitely.
But do so smartly and with eyes wide open.
My two cents.
Working with a visionary thought leader like Tony Gauda is one part exasperation, one part mind-stretching, and three parts fun.
Tony and I have had many discussions about how to take his truly insightful ideas and operationalize them in a way that doesn't break an organization or its culture.
We sat down to have one of our typically spirited discussions around innovation as pertains to all things AI.
A quick note that the opinions expressed by Tony in this segment are personal and should not be interpreted as representing the opinions of any organization that Tony has worked for, past or present.
So, you know, you and I have known each other for three years now, but my audience may not be familiar with you.
So tell my audience about Tony Gauda.
Oh, my gosh.
First of all, it feels like I've known you for 30.
Yeah, yeah, it's like that.
Yeah.
I'm an engineer, I've been doing software development all my life.
My dad had a computer consulting firm, brought me in at a pretty young age,
and told me, hey, I can touch every computer in this organization,
but I can't play games on any of them.
So what that meant is that I had to just, you know,
figure out ways to entertain myself.
And that meant, I guess, building things to entertain myself with.
And I've been doing that pretty much ever since.
At some point, you know, I got into, you know, kind of cybersecurity, because
that computer consulting firm was actually building fraud prediction systems for the cellular industry.
So I was kind of indirectly involved with kind of building fraud prediction systems.
And then, you know, kind of later in life, I found my way to MasterCard,
where I got to kind of help design the first generation of AI-powered
credit and debit fraud detection systems.
I did that for a few years.
And at some point, I bought a MacBook Air, ran out of space,
and decided, hey, I can solve this problem, the storage problem with software.
So ended up quitting my job, moving out to the valley on a whim,
and being a startup CEO for a company that I founded on some business
technology that I built at the time. I built that for, you know, probably five years, left
that company, started another company. I was a CEO of that for another five years. I was,
you know, spent a decade in the valley, you know, just kind of being an engineer slash,
you know, kind of CEO, which allowed me to better understand, you know, the intersection
between kind of business and technology. Actually, how to do people management? Because remember,
people are a lot less deterministic than software is.
Whoa, whoa, what?
Oh, whoa, really?
They are?
Shocked, I am.
The current generation of people, and we'll talk about that later, actually,
because, you know, we haven't even gotten into the agentic piece just yet.
We'll get there.
Right.
But, you know, really learned how to, you know, kind of motivate people to kind of do the best work of their lives.
And, you know, both those startups actually were cybersecurity adjacent or cryptography,
you know, kind of related.
One was a storage company.
The other was an insider threat detection company.
So then took a bunch of time off
to kind of catch up with family
because I was basically, you know,
traveling a lot, you know,
had young kids at the time
and, you know, just wanted to spend some time with them.
And then, you know, I got a random LinkedIn request
from someone at Intuit who, you know,
kind of reached out and said,
hey, you know, you've got a lot of, you know,
kind of external startup experience.
It's like, you know, you're impatient,
you know, kind of with your expectations.
And also, you know,
it seems like you're pretty innovative.
You know, you haven't been, you know, maybe, I guess,
indoctrinated by a larger organization just yet.
So we'd love to bring you in
so that you can kind of be that architect
for the next generation of cybersecurity technologies
that we need to kind of build to solve for the problems
at Intuit scale.
So that's my current position.
I'm the vice president of cybersecurity architecture.
At Intuit, been there for almost three years now,
really enjoying the job.
I walk in every day
and ask the question, hey, you know, is this the right thing that we're doing?
And if it's not, let's change it and let's figure out what the right thing is
and kind of help drive that strategy across the company.
Fantastic.
So the nature of our relationship began, and I won't talk about Austin, because that's a different story.
We don't talk about Austin.
We don't talk about Austin.
But the nature of our relationship has been, and that's really what I wanted to bring here, is when we met, you coming in as an architect, and I'm probably using the term slightly incorrectly, but, you know, I'm a simple guy.
So you have a futurist outlook and look at what, you know, what can be and how to project truly from a strategic vision as to what we ought to be thinking about.
And, you know, I've been an operational CISO for 17 of my 38 years in cyber and was running the SOC for Intuit when we met.
So a lot of our relationship is a, yeah, great, wonderful, fantastic.
That's going to break all this stuff.
Now, how do you want me to get there without breaking stuff?
And this led to more than a few very, very engaging conversations about, yeah, I want to go forward.
I don't want to just invent, I want to innovate.
And I had a dear friend, Frank Kim, tell me that the difference between innovation and invention is both are new, but innovation is actually useful.
So I want to innovate out in the market, and I want to help support the innovation, but how do we get there in a way that doesn't blow up the culture, blow up our protection posture, et cetera?
And you and I regularly have conversations about this and did for three years.
And as I started thinking about, you know, the theme for this season and thinking about, you know, the dreaded AI.
And yes, if you're playing AI bingo, trust me, we're going to use it a lot here.
But what should the cyber professional be thinking about in terms of where his or her business is probably looking at AI?
But more importantly, how do I as a cyber professional utilize AI to create efficiencies, to create excellence, to optimize what I'm doing within the environment and to better keep people safe?
So, Tony, what I'm going to do in front of a thousand or so people is have the conversations you and I regularly have had for the past three years and talk about, tell me what your vision is.
And then let's talk about how we get there in a way that makes sense, yet doesn't needlessly slow things down.
So just talk to me.
You know, I'm a CISO.
What should I be thinking about as I look at AI in the future and utilizing it within my organization?
Floor is yours.
Yeah, I think the challenge is massive, but I also
think the opportunity is just as massive.
I mean, we're, we're basically at a crossroads, not only within cybersecurity, but
across, you know, every industry in which we traditionally have thought about things
incrementally.
So it's like, hey, you know, we've got this existing kind of process.
We've got this existing kind of technology.
We're going to upgrade it a bit.
It's going to get a little bit faster.
It's going to get a little bit more accurate.
It's going to kind of help us be a little bit,
maybe 5, 10% more operationally efficient
across an organization.
And I think that there's a lot of organizations
that are thinking about AI in a very similar capacity.
And I think that actually is a mistake.
What I think we should be doing
is fundamentally rethinking
the approach that we have
for all of these types of problems
that we're actually solving for.
Because if you think about it today,
the solutions that we have in place
are basically we've got humans at the center
that are being augmented with technology
that kind of help them, you know,
kind of get to some outcome within some organization.
But if you were to fundamentally rethink the problem
to have AI at the center
to do a lot of the redundant,
you know, kind of repetitive tasks,
the things that we know are automatable
or the things that have some level of automation to them,
and then put humans into a position
in which they are, in some cases,
fact-checking what the AI is doing,
or given the ability, you know,
to help govern an AI-centered system versus augmenting a human-based system with AI.
I think that opens up the possibilities to much more autonomous, you know, kind of systems which
could solve the types of problems that I think we're going to face in the future, which are
much faster than human speed.
So I've got to push a little bit, and I know that's shocking to you, but I've got to push a
little bit, you know, you've said a couple of things. Well, the smaller portion regarding creating
efficiencies through automation is something that every good business leader, that includes every good
cyber business leader, because every CISO runs the business of security within their organization
is always attempting to do. But reflecting back, you're talking about flipping the model and
making it automation-centric for a lot, or specifically AI-centric within the environment. The challenge
that I have with that.
And first, not that I disagree with you,
because I think there are plenty of opportunities there.
But the challenge that I end up having there right now
is that's great until you run into that nasty A word,
accountability.
And right now, you know, CISOs are being held liable.
You know, the SolarWinds CISO is still under indictment right now,
having spent millions of dollars due to something, you know,
in a human-based system.
Now you're going to tell me to turn this over to agentic AI, basically, for lack of a better term, in the environment.
And if it goes sideways, you're still going to attempt to throw my large butt in jail.
So how do we balance that, you know, agreeing that we could do more?
How do I balance that as a CISO in terms of, okay, you want me to divest a level of positive
control within the environment over to the automation to drive certain decisions, not analysis,
but decisions.
Yet, what does that do for a liability engine within the environment?
Because you know as well as I do, your boss currently and their boss currently aren't going
to let you off the hook for that.
So how do we create an ecosystem that allows a CISO to do that in this highly litigious society
where two CISOs have been placed on trial
and one has been found guilty?
Yeah, I don't think the accountability shifts at all.
I think that, so what we're talking about
is a future that is inevitable.
Like, I mean, AI will power a lot of the systems
that today are being powered.
Okay, so I'm going to cut you off
and I'm going to push back on you on that.
Telling me it's inevitable
is not something I disagree with.
Okay.
But telling me how to do it is.
And again, this is the conversation.
Yeah, we had this over at a steakhouse at RSA.
I was going to recount that story.
I literally was.
And I agree with you.
I'm not talking about the inevitability.
Let's assume that we're heading there.
The issue is the how, which is the same conversation we had.
You know, I am sitting here pointing out that we have to create a how that enables the person who is in responsible charge to take that step.
That requires an organizational structure change potentially.
It requires addressing issues regarding liability and accountability.
It potentially requires decisions regarding how AI impacts that within the environment.
So my push for you, and again, this is nothing new.
You and I do this all the time is, okay, telling me it's inevitable is telling me the sky is blue.
You know, I'm not a Luddite, and I agree with you because I want to innovate.
What steps do I need to get there that are going to allow me to protect the company as well as myself?
That's what I'm asking.
So, Kim, listen, totally get it.
And I think one of the issues is that people think that AI is some magical kind of genie, that's 100% right all the time.
And they're putting it in positions where, you know, if it's wrong, then it can be catastrophic to the organization.
Now, you know, I'm a huge fan of, you know, kind of Waymo and the technologies that they've got, and kind of how they've taken this extremely difficult problem of navigating, you know, a city street with a life inside of it, safely, across even some of
the busiest streets in America.
Now, what I'm never going to do is take a Waymo up some cliffside California highway.
Like, that's not a thing I'm going to do until this thing becomes way more stable.
But at the end of the day, there have been checks and balances that have been put in place
to make sure this thing actually does, you know, react in such a way
that is not catastrophic to a human life.
Should we be prepared to change organizationally to enable that?
Those are the questions that I want to push you towards.
Yeah.
So if you think about, you know, kind of what happens today, you know, you've got a security
operations center, you've got some person sitting in front of
a PC or some terminal. They get an alert. They respond to that alert. And in all
frankness, a lot of these things today are either outsourced
to other organizations, or we're doing a lot of the tier-one things with automation.
So if you think about it, setting a goal within your organization where you want to automate a lot of
your tier one operations. So not determining kind of what those are, but letting your discrete
teams decide what tier one actually means, and then empowering them to be able to go out and make
the technological decisions to put these things in place, but also giving them, you know, the room
to experiment and to actually fail in some cases. Because at the end of the day, what you want
is for them to have the psychological safety to actually go out, to find the tools, to train the users,
to expect the first responders to actually, you know,
take the cases that are being generated by the automated system and to just double-check
what it's doing. So it doesn't necessarily have to be this person is making a decision about a
particular incident that's happening. It's double-checking what the system recommends as a course
of action. And that in itself allows you to automate, which I think is actually standard
operating procedure in a lot of the more advanced, you know, kind of cybersecurity organizations today.
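For listeners who think in code, here is a minimal sketch of the pattern Tony describes: an automated tier-one triage engine recommends a course of action, and the human first responder double-checks the recommendation rather than triaging from scratch. All names, thresholds, and the toy triage logic are hypothetical, not any real SOC product's API.

```python
# Hypothetical sketch: automated tier-one triage with a human double-check.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    severity: str
    summary: str

@dataclass
class Recommendation:
    alert: Alert
    action: str        # e.g. "close_benign", "isolate_host", "escalate"
    rationale: str
    confidence: float  # the engine's self-reported confidence, 0..1

def auto_triage(alert: Alert) -> Recommendation:
    """Stand-in for the automated tier-one triage engine."""
    if alert.severity == "low":
        return Recommendation(alert, "close_benign",
                              "matches known-benign pattern", 0.93)
    return Recommendation(alert, "escalate",
                          "outside tier-one playbooks", 0.40)

def human_double_check(rec: Recommendation) -> bool:
    """The first responder validates the recommended action rather than
    triaging from scratch; low confidence flags a closer look."""
    flag = " -- REVIEW CLOSELY" if rec.confidence < 0.80 else ""
    print(f"[{rec.alert.alert_id}] recommend {rec.action} "
          f"({rec.confidence:.2f}){flag}: {rec.rationale}")
    return not flag  # stand-in for the analyst's approve/reject decision

rec = auto_triage(Alert("A-1042", "low", "signed binary on known admin host"))
if human_double_check(rec):
    print(f"executing {rec.action}")
```

The design point is the one Tony makes: the human is positioned as a validator of the system's recommendation, not the originator of every decision.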
Have you ever imagined how you'd redesign and secure your network infrastructure if you could start from scratch?
What if you could build the hardware, firmware, and software
with a vision of frictionless integration, resilience, and scalability?
What if you could turn complexity into simplicity?
Forget about constant patching, streamline the number of vendors you use,
reduce those ever-expanding costs,
and instead spend your time focusing on helping your business and customers thrive.
Meet Meter, the company building full-stack zero-trust networks from the ground up
with security at the core, at the edge, and everywhere in between.
Meter designs, deploys, and manages everything an enterprise needs for fast, reliable, and secure connectivity.
They eliminate the hidden costs, maintenance burdens, and patching risks, and reduce the inefficiencies of traditional infrastructure.
From wired, wireless, and cellular to routing, switching, firewalls, DNS security, and VPN, every layer is integrated, segmented, and continuously protected through a single,
unified platform.
And because Meter provides networking as a service, enterprises avoid heavy capital
expenses and unpredictable upgrade cycles.
Meter even buys back your old infrastructure to make switching that much easier.
Go to meter.com slash CISOP today to learn more about the future of secure networking
and book your demo.
That's M-E-T-E-R dot com slash C-I-S-O-P.
You know, as we think about this automated SOC,
so, you know, reflecting back, it requires culturally an environment,
a, you know, a culture that allows, for lack of a better term, for failure.
And experimentation.
That's exactly right.
Yeah, so to experiment, we have, and I'll use that term, a culture that allows for experimentation
is a culture that allows for the possibility of failure, because if you experiment, not
everything is going to work.
Now, a thousand percent agree.
You and I've had this conversation on more than one occasion.
I absolutely agree with you.
And that's actually extremely important.
Yeah.
Because otherwise, us as literally risk owners, the first thing we're going to ask
is, can I trust this thing?
Like, my job is on the line.
Yeah, and by the way, I own no risk.
The business owns the risk.
I just report on the risk.
That's right.
Yeah, yeah.
And you know how I feel about that one.
But, you know, as we sit here and create that culture, I guess the question
that I would ask is culturally, and we're not going to get metaphysical,
but we're going to get beyond just security here, Tony.
Culturally, there has become an expectation of a lack of failure in security
within most environments. You know, I have said in other venues that if the
expectation for perfection in security in the IT space existed in the physical space,
we would expect murder, kidnapping, and theft rates to go down to absolutely zero across the country, which is unrealistic.
Yet our business customers, our CEOs, our COOs, our CTOs, etc., expect that nothing is going to go wrong within an environment and want us to drive to zero.
And anything short of zero is considered problematic.
How do we, you know, change that mentality if we need to create a culture of experimentation?
I would say that I am not familiar with a lot of Fortune 500 organizations that do more than talk a game regarding experimentation within the operational arena.
I think what we're talking about is a reframing
of the risks that we've already accepted.
Because if you give three different SOC analysts the same alert at different times of day,
it will get classified three different ways.
We talk about humans as if they are infallible, but in their current environment,
they're extremely fallible.
Humans themselves are some of the most indeterminate, non-deterministic things that exist on the planet.
Absolutely true, but I can hold the human accountable.
And based upon conversations we've had,
I can't necessarily do that with agentic AI.
At least not yet.
So I think if the question is,
if we expect to hold the AI accountable,
I think that's not the argument that I'm making.
So you want me to accept a level of error.
I mean, you and I were just talking about this earlier
from an operational standpoint,
there was a recent report
regarding an AI coding tool
that wiped a production database,
fabricated 4,000 users,
and then actually lied to cover
its tracks within the environment.
I would argue that
the code review may exist,
but the code review,
even if done by an agentic
system, is not
perfect. Humans are not
perfect. You know, machines are
not perfect. So,
The challenge here gets to be, there's a point where we create systems of accountability as a method of checks and balances.
So as I roll to more agentic AI systems here within the environment and turn more of these processes over,
it's not that humans are infallible, because they are absolutely fallible, but they can also be held accountable.
So you want me to turn over more systems to make more decisions where I can't create that level of accountability.
No, so it's not, so yeah, we hold humans accountable by performance reviews, by looking at, you know, kind of how well they work with each other.
But with agentic systems, what it should be are checks and balances, which could be either humans checking to make sure that the agentic system is making the right decision, or other agentic systems that are put in place to validate the
decisions that are being made.
Okay, so you're talking about risk mitigation.
Or, not mitigation, but management, which will hopefully create some
level of reduction.
So using NIST's AI Risk Management Framework, you would want to create systems that
create human-in-the-middle-type interactions.
So that makes sense.
We're observers, right?
So it's not necessarily, what we're not looking for is something that is infallible,
but we're looking for something that gives us a lift in efficiency,
something that allows us to have much faster response times.
Because remember, the adversaries are using AI to traverse through our systems
to find vulnerabilities at faster than human speeds.
So the expectation is that we get an alert, we take a look at it.
The mean time to detection is now, you know, 30 minutes or whatever it may be.
Let's say we even get it down to five.
Imagine the amount of damage that an agentic, you know, system or an AI-powered adversary
can do in five minutes to an organization.
And take that same argument regarding an agentic system that is responding inappropriately within an organization.
Imagine the amount of damage it can do.
So you're absolutely right.
And I guess the question here that I'm having is, yeah, putting a human in the middle is great.
But if an agentic system is reviewing 15,000 different actions within the space of 10 minutes and taking 15,000 different actions within that time frame,
I can't keep up in terms of that review other than sampling.
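One hedged way to square that circle is risk-weighted sampling: a human audits every high-risk action plus a random slice of the rest. A minimal sketch, with hypothetical risk labels and an illustrative 2% sample rate, not a prescribed ratio:

```python
# Hypothetical sketch: risk-weighted sampling of automated actions for human audit.
import random

def select_for_review(actions: list[dict], sample_rate: float = 0.02) -> list[dict]:
    """Queue every high-risk action, plus a random ~2% of the rest,
    for human audit; full review at this volume isn't feasible."""
    return [a for a in actions
            if a["risk"] == "high" or random.random() < sample_rate]

# 15,000 automated actions in a 10-minute window, a few flagged high-risk
actions = [{"id": i, "risk": "high" if i % 500 == 0 else "low"}
           for i in range(15_000)]
queue = select_for_review(actions)
print(f"{len(queue)} of {len(actions)} actions queued for human review")
```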
So we're still in the environment where I'm depending upon folks to make those decisions with a high level of autonomy, which again, and again, Tony, just for the sake of our audience, you know, I want to reiterate where I started.
You and I are in violent agreement that this is going to occur.
And, you know, this is happening.
But what I'm trying to get to is, you know, we're still in an environment where I'm staring down the barrel of that gun, and I don't have a solution other than to continue to do what I'm doing at a pace that can't keep up within the environment and still accept the liability associated with that.
And then I'm told that as a security professional, I shouldn't be resisting this because it's coming, yet you don't have an answer for me.
It's not even coming.
It's already here.
But again, you don't have an answer for me.
You don't have an answer for me if I actually, at the pace you tell me is coming,
tell a system to make automated decisions, and it deletes tons of code within the environment.
Yeah.
I think not having the right checks and balances within any system, even a human-powered system, is probably a very risky decision to me.
Agreed, and you're sidestepping, because you still haven't told me what the right checks and balances
are at speed.
How do I do this at speed?
It's a, it is a standby
agentic system that is double-checking
the decisions that are being made.
So think about it as a consensus-based system
where it has the ability
to evaluate the decisions of this agentic
system, which is given a limited set
of functionality. So it shouldn't be,
hey, you can do whatever you want, which is what
we typically give people the ability to do
is to make whatever decision they want
and to affect lots of systems.
We can set guardrails for an agentic
system to say, hey, you can kind of do these three things. And then we'll put a secondary
system in place to validate that this is the appropriate action to take. And if it doesn't fit
within a certain requirement or it doesn't fit within a certain set of parameters, there is an
exception flow in which a human in the loop can actually decide if this is the right thing to do.
But none of these should be destructive. You should not give it the ability to delete databases or
to commit code to your repository that is unreviewed.
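Here is a minimal sketch of the guardrail pattern Tony outlines, with hypothetical action names: the primary agent may only propose actions from a small allow-list, a secondary check (which in practice could itself be another agent in a consensus arrangement) validates each proposal against set parameters, anything outside them falls to a human-in-the-loop exception flow, and destructive actions are refused outright.

```python
# Hypothetical sketch: guardrailed agent actions with a secondary validator
# and a human-in-the-loop exception flow. Action names and the 0.8
# threshold are illustrative assumptions.
ALLOWED_ACTIONS = {"quarantine_file", "block_ip", "reset_session"}
DESTRUCTIVE_ACTIONS = {"delete_database", "commit_unreviewed_code"}

def validate(proposal: dict) -> str:
    """Secondary check: 'approve', 'refuse', or 'escalate' to a human."""
    action = proposal["action"]
    if action in DESTRUCTIVE_ACTIONS:
        return "refuse"                    # destructive power is never granted
    if action not in ALLOWED_ACTIONS:
        return "escalate"                  # outside the allow-list
    if proposal.get("confidence", 0.0) < 0.8:
        return "escalate"                  # outside agreed parameters
    return "approve"

def handle(proposal: dict) -> None:
    verdict = validate(proposal)
    if verdict == "approve":
        print(f"executing {proposal['action']}")
    elif verdict == "escalate":
        print(f"exception flow: human decides on {proposal['action']}")
    else:
        print(f"refused outright: {proposal['action']}")

handle({"action": "block_ip", "confidence": 0.95})         # auto-approved
handle({"action": "disable_account", "confidence": 0.90})  # human in the loop
handle({"action": "delete_database", "confidence": 0.99})  # never allowed
```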
The challenge that I have here as I go to operationalize these within the environment is not just checks and balances, but how do we put appropriate checks and balances on the system that's moving faster?
You did answer that, in terms of a sort of check-and-balance AI system within the environment, as well as limiting the access to what these agentic systems can do, at least initially, until you can build up the knowledge, build up the trust, experiment within that environment and say, oh, that went wrong, before
you actually open this up within the environment.
That makes sense to me.
I am still struggling.
You want to know, hey, how do I as an organization?
Like, how do I actually operationalize this?
How do I tell the team to actually, like, achieve this goal?
Less so from the technical standpoint.
But the things that you're talking about, which you did bring up, yes, I was actually
trying to poke at you to get Tony to come out.
It's always more fun when he does.
I don't want to gloss over the piece that is necessary for this as you have laid out.
And that is not just a mindset shift, but a cultural shift within the environment.
It is a mindset shift that says we have to experiment within the environment
and that we have to be able to experiment without failure being a significant career-impacting event within the organization.
So I guess where I'm trying to get to is, you know, as someone who has been in cyber for not a short time, probably not as long as I have because I'm an old guy.
You know, how do we empower our constituents, meaning, you know, the businesses we support to allow us to create that mentality?
Because I've got to tell you, you know, it's not there as deeply as it ought to be for this level of transformative approach.
So, what are your thoughts there?
For me, so, I think about all of this in the same way that a startup would. Because if you think about it, you know, from the traditional, you know, kind of sense, which is, again, incremental improvement, I think you're going to get 5, 10 percent, you know, kind of efficiencies. I think what you do is you set an expectation that is well beyond what is possible with our current systems. You say, hey, within six months, between six, eight months, maybe 12 months, let's say, let's give it the old enterprise 500, let's say in an FY, right, let's say in FY26, we're going to achieve a level of autonomy in tier X responses, period. You don't talk about the tactical. You don't define what tier one is, what tier X is. You allow the team to make that decision, but you give them the resources that they need to actually execute against that. So set the goal. Don't necessarily set the tactics for how to achieve it, but then empower them to make the types of decisions that need to be made. Maybe that's bringing in an external system that we haven't had in the past, which could be a startup, or it could be a well-established, you know, kind of provider's, you know, kind of SOAR or SIEM tool. But whatever the case may be, empower that organization to actually make that decision and give them the budget and the psychological safety to achieve what I like to call a moonshot. Like, encourage them to deliver on this. And if, let's say, they don't hit the moon, let's say they get halfway there, that is a tremendous increase in the capacity of the organization and something that you can absolutely tout to your executive team. Like, at the end of the day, you have to set the tone that a transformative, you know, kind of result is what is expected, which means they're going to rethink how they're doing everything.
Yeah.
So it takes, reflecting back appropriately, both a mindset and a willingness
on the part of the business to take that moonshot to achieve that level of transformative
success.
And characterize it in that way.
And that will galvanize the team.
So it's not just, hey, let's do this thing that's a little bit better.
Hey, if you literally go to the team, say, hey, if you had a budget and you wanted to achieve
some massive transformational change within your organization, what would that look like?
And what support would you need from me to allow that change to happen?
So let me ask that one question then.
Let me give you a final word.
You answer that question for me.
Okay.
You're a CISO.
The business wants AI.
The business wants you to adopt AI.
The business doesn't know necessarily what adopting AI means for you, et cetera,
within the environment.
If there's one thing, one thing that I could do today as a CISO who's starting that journey, what would it be?
Trust your team, empower them.
It's literally all about, you've got experts who know exactly where the areas of improvement exist within the organization, and given the opportunity, they will absolutely blow your mind.
Just set the expectation: blow my mind.
Like, tell me where we can get, you know, not 10% improvements, but 10x improvements, 100x improvements.
And you'd be surprised at the answers you get.
And that's a wrap for today's episode.
Thanks so much for tuning in
and for your support as N2K Pro subscribers.
Your continued support enables us to keep making shows like this one
and we couldn't do it without you.
If you enjoyed today's conversation
and are interested in learning more,
please visit the CISO Perspectives page
to read our accompanying blog post,
which provides you with additional resources
and analysis on today's topic.
There's a link in the show notes.
This episode was edited by Ethan Cook, with content strategy provided by
Ma'ayan Plaut, produced by Liz Stokes, executive produced by Jennifer Eiben, and mixing,
sound design, and original music by Elliott Peltzman. I'm Kim Jones. See you next episode.
Securing and managing enterprise networks shouldn't mean juggling vendors,
patching hardware, or managing endless complexity.
Meter builds full-stack zero-trust networks from the ground up,
secure by design, and automatically kept up to date.
Every layer, from wired and wireless to firewalls, DNS, and VPN, is
integrated, segmented, and continuously protected through one unified platform.
With Meter, security is built in, not bolted on.
Learn more and book your demo at meter.com slash CISOP.
That's METER.com slash CISOP.
And we thank Meter for their support in unlocking this N2K Pro episode for all Cyberwire listeners.