CyberWire Daily - Season finale: Leading security in a brave new world. [CISOP]
Episode Date: December 30, 2025
In the season finale of CSO Perspectives, Ethan Cook and Kim Jones reflect on a season of conversations exploring what it means to lead security in a rapidly evolving “brave new world.” From the realities behind AI hype and the slow-burn impact of quantum computing to the business forces shaping cybersecurity innovation, they revisit key lessons and lingering challenges facing today’s CISOs. The episode closes with an optimistic—but candid—look at why fundamentals, critical thinking, and leadership still matter as the industry moves forward. Want more CISO Perspectives? Check out companion blog posts by our very own Ethan Cook, where he breaks down key insights, shares behind-the-scenes context, and highlights research that complements episodes throughout the season. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
This exclusive N2K Pro subscriber-only episode of CISO Perspectives has been unlocked for all Cyberwire listeners through the generous support of Meter, building full-stack zero-trust networks from the ground up.
Trusted by security and network leaders everywhere, Meter delivers fast, secure-by-design,
and scalable connectivity without the frustration, friction, complexity, and cost of managing an endless proliferation of vendors and tools.
Meter gives your enterprise a complete networking stack, secure wired, wireless, and cellular in one integrated solution built for performance, resilience, and scale.
Go to meter.com slash CISOP today to learn more and book your demo.
That's M-E-T-E-R dot com
slash C-I-S-O-P
Welcome to the season finale of CISOP. I'm Ethan Cook, and for today's episode, I sit down with the show's host, Kim Jones, to reflect on this season's conversations.
Over the past season, we've dove into some complex, impressive conversations.
Whether they were looking at what is looming on the horizon
or what challenges are already taxing us today.
These are the realities that we, as an industry, need to stay on top of.
We've had an interesting series of guests talking about interesting topics
and bringing some interesting perspectives.
This has been really lots of fun as we've been able to deep dive into some of these issues as we go.
Yeah, you know, reflecting back on the past couple episodes,
I think there have been some interesting conversations,
both about technologies that are already here,
technologies that are coming down the pipeline,
as well as the really interesting different viewpoint on this ecosystem
and how businesses look at it from a non-security perspective on
why certain groups fail, why certain ones are successful and how we can really evolve that
and I think take some of those lessons into our operations.
So let's start with the first conversations and go back to the AI.
We had two different episodes, and we talked about both AI's implementations
from a security perspective as well as kind of this promise that AI holds.
You know, it can do all these things, and you look around and there's 40 million AI startups.
It feels like every other week
they're all emerging from stealth,
and they're all going to change the world.
So I think, you know,
before we dive into both episodes
and kind of dive into the specifics,
I would love to take a step back and look at,
you know, what are your thoughts
on this AI culture,
especially as it's continued to evolve over
just this year alone?
Wow. Yeah, there's a loaded question.
I wouldn't have asked it if I didn't want it.
Yeah.
So, you know,
It almost seems today that if you are a naysayer regarding AI, you're treated as a Luddite or an ignoramus within the environment.
And I don't consider myself to be either of those things, but I've been around long enough to see a lot of ready fire aim happen within technology.
And what I do not believe we are effectively doing is recognizing the potential challenges and problems that exist out there because we are all leaping to, it has to be AI, it has to be AI, it has to be AI, without understanding the potential ramifications, the potential threats that exist out there within the environment.
AI can do some great and wonderful things.
The problem is severalfold.
One, as we test the limits, note the big air quotes, of what AI can do,
we're deploying AI in ways that are probably overextending or increasing risk within the environment.
And two, we are operating and being encouraged to operate in a model that says,
AI should be trusted within the environment.
I'll use Google as an example,
or any major search engine as an example.
One of the challenges we have with protecting data
is when you go to protect data, you have two options.
You either build Fort Knox,
or you convince people that the data that they're surrendering
isn't worth as much as the service they're getting.
Google took option two and said, hey, Gmail,
Google utilities, et cetera,
Google search engine, all of this data itself is meaningless.
Just give it to us.
We'll be fine.
And they're using that to market to us, sell to us, sell our data, et cetera,
where we've become the product within the environment.
Now, as of October, we have all agreed, if you haven't done the formal opt-out,
that Google can then utilize this data to train AI tools and large language models.
That was one of those changes within the terms and services that, if you didn't
do it by Halloween, trick-or-treat, you've agreed to do this if you're using any Google
products within the environment. The analytical power and the intelligence that can be derived
from that is potentially massive. Yet we have freely surrendered that data for the ability
to have a quote-unquote free email address or a quote-unquote free word processor or quote-unquote
free cloud storage within the environment. I think we're seeing similar habits or traits
erupt with AI within the environment.
And everybody is focused on we need to do this quickly to try and stay ahead.
And in doing so, we're squashing and not paying attention to the potential threats that
are out there, the potential risks that are out there.
And my concern is that these things will erupt bigger with a bigger blast radius when it's
too late, as Silicon Valley and other organizations attempt to push.
AI without looking at all of the ramifications that are associated with it.
So I think AI is a great tool.
I think used properly AI can be, you know, absolutely meaningful, helpful, and raise the bar.
I think we're racing towards chaos and disaster.
I actually read a research paper, and I'm going to have to dig it out to find it, saying that, you know, garbage in, garbage out: if AI
takes in data that's inaccurate
and tries to synthesize that to do certain things.
And then agentic AI then utilizes that data
to build on its own algorithms and its own code.
And then the cycle continues.
The data poisoning is crazy.
And frankly, we're building crap is what it amounts to.
And we're heading down a path where that possibility exists.
Nobody's talking about it.
Yeah.
Because everyone sees the potential.
Nobody sees the potential harm, and those that do are being labeled as tinfoil hat wearing idiots within the environment.
So my concern here is not that we've leapt on the AI bandwagon, but that we have done it haphazardly, as usual.
And I think the light at the end of the tunnel may be a train if we're not careful.
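[Editor's note: the feedback loop described here, where AI trains on the flawed output of earlier AI, can be illustrated with a toy simulation. This is a minimal sketch under an assumed, purely hypothetical 10% per-generation corruption rate; it is not a model of any real training pipeline.]

```python
import random

def degrade(dataset, error_rate, rng):
    """Return a copy of the dataset where each record is independently
    corrupted with probability `error_rate` (standing in for inaccurate
    or poisoned synthetic data)."""
    return [rec if rng.random() > error_rate else "corrupt" for rec in dataset]

def accuracy(dataset):
    """Fraction of records that are still clean."""
    return sum(rec != "corrupt" for rec in dataset) / len(dataset)

rng = random.Random(42)       # fixed seed for reproducibility
data = ["clean"] * 10_000     # generation 0: a fully clean corpus
history = []
for generation in range(5):
    # Each generation "trains" on the previous generation's output,
    # so corruption compounds instead of resetting.
    data = degrade(data, error_rate=0.10, rng=rng)
    history.append(accuracy(data))
# The clean fraction decays roughly as 0.9 ** (generation + 1).
```

The only point of the sketch is that errors compound across generations rather than averaging out, which is the garbage-in, garbage-out cycle being described.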
Yeah. So, you know, I agree with you on that perspective: it can do a lot of great things, but there are massive, massive caveats that need to be considered. But I think there is this concern. I was at a panel a couple weeks ago, and this came up. One of the panelists said: the reality is that shadow AI is a thing. If you don't get ahead of it, your employees are absolutely going to be using it, with or without your approval, because that's just the name of the game.
So let's be clear: who here thinks they're ahead of it?
If you're already behind, how do you think you're going to get ahead of it within the environment?
I agree with that.
I think it's evolving at such a pace that you can't truly get ahead of it,
but I think there's a difference between getting ahead of it,
like, quote unquote, actually being ahead of it,
and properly managing it and trying to take at least some proactive measures
to control what AI tools go out.
You met with Ben Yellin earlier this year.
And actually, he and I had a great conversation about how
UMD, the school that he works for, has an AI program that is controlled; it is heavily built up, and it is heavily monitored in terms of what data can go in and who can log in to use it. It is vetted, et cetera. And obviously I don't think that is a perfect solution. I don't claim to know the solution inside and out myself. But there's the aspect of at least attempting to not just have people gung-ho, putting data in and going crazy with it.
Yeah, and I have no problem because, again, AI is not the Antichrist; AI is not Skynet.
And, you know, we have to figure out how to adopt the tools.
So coming up with a structured formal plan in terms of how to adopt for utilization of AI in your environment makes sense.
Trying to adopt it, you know, for storage, for Q&A, et cetera, is a different animal than turning over all of your Tier 1 SOC analyst work to agentic AI.
That is a different animal from what some shops have done,
which is to enter AI into their employment process
and start asking questions of interviewees
as to, what can you do that AI can't?
Why should I hire you?
Those are very different approaches.
I agree.
Making sure, yeah, so what you're talking about
makes perfect sense.
But think about the things that I've just mentioned
and that's going on now and continues to do so.
And transitioning a little bit
to some of the conversations we've had.
You know, both of the individuals who I had conversations with are former colleagues,
and I consider them both very, very dear friends.
But, you know, I will go back to the conversation I had with Tony Goda, who, you know,
Tony is a serial entrepreneur.
He's an innovator.
And part of the challenge I have with Tony in some cases, because I brought him on because
I am the operator and have to operationalize his, you know, his wild ideas,
is on some occasions to say, Tony, you're an evangelist. You're evangelizing. And I have no problem
with evangelizing, but I have to implement what you evangelize in the environment. And Tony's solution to
that was to evangelize for the entire podcast, which is cool because it was very helpful.
You know, sorry, Tony, I got to give you, I got to give you shade because it's us and we do this.
It was helpful. But that's what's happening in the environment. I have great people who are
evangelizing and telling me what's going to go wrong if we don't adopt now, but these same people
aren't contributing to the solution, and they'd rather ignore it. And then when the, excuse my language,
when that shit hits the fan, the rest of us have to clean up. And these evangelists have gone on,
in some cases, have taken their cash and gone on while the rest of us are cleaning up the mess.
That's my concern.
It feels like, to your point, in the market right now, not just within, you know,
the broader market, but really within cyber,
where if you don't see on a cyber page
AI somewhere, you are
losing the race because
everyone is looking for the next solution
that's going to revolutionize it. And that's why
we talk about there's an AI bubble in general, right?
Because everyone has it. It's all perfect. It's all going to
change the world. And reality is
that most of these companies are not going to be successful.
And most of them are either going to fail
or they're going to get swallowed up,
right? No, I
agree with you completely. People are throwing spaghetti
at the wall to see what sticks,
and it will be a very small fraction
who succeed, depending on
what that uniqueness looks like
within the environment
and where the need is within the environment.
And right now the technology is so new
that I think everyone is throwing down
and saying, well, maybe you can do this
and maybe you can do this.
And we're just trying in different places.
But it gets back to that old adage
just because you can
doesn't mean you should.
And we need to understand
the differences
between the two. And we haven't drawn those lines, because we're all saying, if you don't try it,
we are behind. That sense of where we have to catch up, we are behind, so we're trying everything,
not understanding the nature of the problems we may be creating. So let's, you know, let's take that
back. You know, for the listeners who, you know, either are aspiring or are current CISOs, et cetera,
or leaders within the space, what do you do
when you get tasked with this? Because oftentimes these things get decided above your pay grade,
whether they get put in or not. To your point about Tony, it's: this is what we're going to do,
and you kind of have to deal with it, right? So what do you do? How do you manage that? What are
the steps that people can take to implement security measures, or attempt to make it as
secure as possible, even if they don't have the option to outright refuse or say,
well, maybe let's pump the brakes?
So I go back.
I'll give this a specific example,
and that's our other guest on AI, Eric Knox,
in terms of what Eric's role was.
And, you know, I think we mentioned it during the episode.
You know, Eric is a recovering CISO like myself.
He was on staff at Intuit like I was.
I think he's actually just moved on
to be an advisor for an AI startup,
if I remember correctly,
within recent weeks, as a matter of fact.
Yeah.
Yeah, they're all being simulated.
But Eric is also, you know, a patent attorney; he's a member of the California bar.
When he went to grad school, he went to law school.
So having someone understand both the legal ramifications, the risk, and the technology, etc.,
allowed him to put in governance models and reasonable controls and reasonable guardrails
around what Intuit was doing within AI.
And we were doing some innovative stuff within the environment.
So what I would say here is for people who are being tasked to put this in is to say,
okay, what is the end outcome and the desired outcome that you're looking for within the environment?
What is the risk you're willing to accept within the environment?
And part of that means we need to understand the technology,
the potential risks that are out there with the technology,
and communicate those accordingly.
And then once we understand and have communicated those,
and there's an organizational desire to accept those things within the environment,
we now need to put in appropriate guardrails to make sure that risk stays within those bounds.
And in other words, we have to do our job because I could have said this about cloud.
I could have said this about wireless.
I could have said this about outsourcing.
I could have said this about offshoring.
This is the same thing we do with any other massive place in technology.
The challenge that we have is we're now being placed even more in a position
because of an artificial sense of urgency
that if we slow down enough to do this,
we're standing in the way.
But guys, this is the same challenge
we've been facing for decades
within the environment, just faster.
Yeah, stand your ground; you're going to have to.
This is a case where, and I've had conversations
on this podcast before,
about professionalism versus careerism, et cetera.
This is the case where professionalism has to win
even at the expense of careerism.
You know what you need to do.
You know what you need
to understand. Don't ignore it. Educate yourself. Make sure you're educating your constituents out there
and do the job we're being paid for. The how can be more complex. We understand that. I'm not
downplaying the how within organizations, cultures, et cetera. But the what, guys, we have been doing
since networking existed. This is what we've been paid to do for decades.
So let's take that same conversation and apply it to a different conversation that we had
about technology. You know, you talk about how we've had the same approach
over the past, I don't know, decade with different technologies, whether that be cloud, et cetera. AI is the current
one, but the thing that has been promised as coming every five years for the better part of 20 years
is quantum, the next, I think, hot-button word. You know, theoretically it's here or it's going to be
here. You know, I've heard that since I was in high school. So, you know, yeah, sure, I think,
I think we all make that joke, but I think it's starting to become very real now that you have
government organizations putting out recommendations, putting out requirements for timelines, et cetera.
And I think that that is kind of the momentum shifter where it's like, okay, this may it actually be
a reality, not something that's a promise.
Yeah. Quantum computers exist now, and they exist beyond just academia within the environment.
Are you going to go down to your local Best Buy and buy one within the next year to four years?
Probably not.
Definitely not within the next year to two.
But in the next year to four years, probably not.
But there are impacts of quantum becoming more commercially available
even to large organizations within the environment.
And we talk about that with Michael Sotilli, who is the CISO of Quantinuum, if I'm mispronouncing that, I apologize,
and what Quantinuum does, you know, in the environment and some of the things to do.
But, you know, the big things for me, as I look at the new tech, also start with basics.
Part of basics is asset analysis.
You know, any good CISO who wants to defend needs to know what their assets are.
In the case of quantum, what are your quantum vulnerable assets?
And we talked about this, you know, during, you know, in the essay leading up to that episode,
in terms of what systems are using pre-quantum encryption algorithms, where those keys are stored,
whether the encryption is baked into the actual application or into the system, and what's the ability
to disentangle that within the environment.
And just right now, be aware and build awareness.
Be aware of the different quantum standards that are out there and build awareness that
this is what quantum means within the environment.
And it's not tomorrow, but it's not a decade out either.
And that's all we can really do right now on that space, Ethan.
And there's nothing wrong with understanding those quantum-vulnerable assets
and migrating as much as possible to quantum-assured encryption algorithms out there.
If I can begin to do that now, we won't be in the scramble.
Because remember what happened with Gen AI.
It's coming. It's coming.
Holy shit, it's here.
And there's a lot of scramble.
If we begin to take that level of approach now,
we won't see that scramble when the time comes.
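[Editor's note: the asset-analysis step described above, inventorying which systems rely on pre-quantum public-key algorithms and where their keys live, can be sketched as a simple classification pass. The inventory records, system names, and the `quantum_vulnerable_assets` helper below are hypothetical illustrations, not a real tool.]

```python
# Hypothetical asset inventory records; system names and key stores are
# made up for illustration. Algorithm labels follow common TLS/PKI naming.
ASSETS = [
    {"system": "payroll-api", "algorithm": "RSA-2048",    "key_store": "HSM"},
    {"system": "vpn-gateway", "algorithm": "ECDH-P256",   "key_store": "config file"},
    {"system": "doc-archive", "algorithm": "AES-256-GCM", "key_store": "KMS"},
    {"system": "sso-portal",  "algorithm": "ML-KEM-768",  "key_store": "HSM"},
]

# Public-key schemes whose hardness assumptions fall to Shor's algorithm
# on a sufficiently large quantum computer.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDH-P256", "ECDSA-P256", "DH-2048"}

def quantum_vulnerable_assets(assets):
    """Flag systems relying on pre-quantum public-key crypto so their
    migration (and harvest-now, decrypt-later exposure) can be prioritized."""
    return [a["system"] for a in assets if a["algorithm"] in QUANTUM_VULNERABLE]

flagged = quantum_vulnerable_assets(ASSETS)
# RSA/ECC systems get flagged; AES-256 (symmetric) and ML-KEM-768
# (a NIST post-quantum KEM) do not.
```

In practice the hard part is the point made above: discovering where the encryption is baked in and where keys are stored, not the classification itself.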
And do you think there is some concern that,
I mean, it's something that I've been wondering,
that, you know, obviously everyone's so caught up in Gen AI.
Some of it's catching up to your point of being like,
oh, we didn't think it was here and suddenly it's here.
We need to get ahead of this now.
Like, get on it now, now, now, et cetera.
that quantum isn't getting probably the attention it deserves
for the impact that it's going to have,
especially on security in terms of encryption,
stored data, PII, et cetera,
that the attention that is being gobbled up,
so to speak, by AI, Gen AI, et cetera, is that going to detract?
I don't think so. I don't think that for a second.
AI's impact was B2C.
Yeah?
Quantum's impact will initially
be B2B. It will allow things within large enterprises to work differently, smarter,
or faster, et cetera, within the environment. Your mom-and-pop store is not going to, right away,
I don't think, I don't think within the next five to ten years, have a quantum laptop on their
desk or be interfacing in that environment. For them, the impact is going to be a change of the
underlying encryption software that
exist on it. You need to
upgrade your laptop because your
laptop can't handle
the quantum-assured encryption software.
I think we're going to see that level of
change down to the individual consumer.
But the impact of AI
and the scramble, in my opinion,
and I've got the business acumen
of a fiddler crab, is because
it went directly to the consumer.
And now we're all
responding as to how do we build upon
that momentum that exists within the
consumer. I don't think quantum is going to have that level of consumer-based impact versus
business-based impact, and it will start with large-scale enterprises who are doing research
or product development. And I'm totally riffing here. So if this comes true, you heard it here
first. I could see large pharmaceutical companies using it for research. I could see companies
that do massive amounts of data analytics,
the alphabets of the world, etc.,
utilizing quantum-based computing
to speed up their processes as well
within the environment.
That combined with AI engines,
you know, can present a lot of opportunities.
Speaking of opportunities,
I can see a lot of nation states
utilizing AI engines and quantum computing
within the background
to glean different pieces of intelligence
and do levels of predictive analysis
within the environment.
I see those being the first big markets
before quantum commercializes within the environment.
It's for that very reason that there are folks that are pushing back on the concern
regarding the encryption algorithms, a la Harvest Now, decrypt later,
because there's a belief that, look, Google doesn't necessarily have an incentive
to try and break your encryption.
Apple doesn't necessarily have an incentive to try and break your encryption.
It's not the first thing they're going to do with the quantum computer.
And Google and Apple and Meta are going to be probably the first big buyers of the big quantum computers, if they have not done so already.
So, quantum got pushed to the back burner because a new shiny toy and widget got marketed to the average consumer.
And now everyone's jumping on that bandwagon.
I don't think it's changed the trajectory of quantum
and I don't think it will cause quantum to be downplayed.
I think quantum has always been downplayed
because the question we've been asking is
how it's going to impact my day-to-day,
my week-to-week, my month-to-month,
and other than encryption,
I don't see that answer yet.
Again, other than what Michael discussed,
and what we're talking about is in line
with a lot of what he said.
Have you ever imagined how you'd redesign and secure your network infrastructure if you could start from scratch?
What if you could build the hardware, firmware, and software with a vision of frictionless integration, resilience, and scalability?
What if you could turn complexity into simplicity?
Forget about constant patching, streamline the number of vendors you use, reduce those ever-expanding costs,
and instead spend your time focusing on helping your business and customers thrive.
Meet Meter, the company building full-stack zero-trust networks from the ground up,
with security at the core, at the edge, and everywhere in between.
Meter designs, deploys, and manages everything an enterprise needs for fast, reliable,
secure connectivity.
They eliminate the hidden costs, maintenance burdens, and patching risks, and reduce
the inefficiencies of traditional infrastructure.
From wired, wireless, and cellular to routing, switching, firewalls, DNS security, and VPN,
every layer is integrated, segmented, and continuously protected through a single unified platform.
And because Meter provides networking as a service, enterprises avoid heavy capital
expenses and unpredictable upgrade cycles.
Meter even buys back your old infrastructure to make switching that much easier.
Go to meter.com slash CISOP today to learn more about the future of secure networking
and book your demo. That's M-E-T-E-R.com slash C-I-S-O-P.
Yeah, so to pivot to the last episode of the season, because we talk about businesses that are here now, a technology that's here now, and the marketing that's going on with it, and technology that is coming.
And in the last episode, you met with John from DataTribe, and you guys talked about...
It was a fun conversation.
It was a very great conversation. I highly encourage people to go listen to it, because I think it shines a light on the business side of cyber and tech that many don't consider.
And I think in previous conversations you've alluded to this, which is, obviously there have to be caveats, but, you know, the goal is to make as secure an infrastructure as possible.
The goal is to get technology that isn't just, I think the way you would describe it is, window tinting, and is actually changing what we're doing and making it better, right?
And I think, you know, he shined a light on this perspective, which is sometimes it's not even, oh, let's keep the money in the house and do these small little incremental
upgrades; there are other issues with trying to get these companies funding.
It has nothing to do with that at times.
Yeah, and it's interesting because those of us who are sitting in the trenches, we talk
about the goal being there to make the environment as secure as possible.
One of the things I teach when I teach SANS is to do so in a way that allows the business
to operate, generate revenue, and advance a strategy within the environment.
And when I say that to a group of technologists
who push back, I say: question, do you work for free?
Do you work for free?
Would you work as hard as you do right now if I paid you no money?
All of you, all of us want to generate revenue.
I generate revenue by going to work every day and doing the things I'm doing in the environment.
So us securing the environment to a point where the business fails is
an exercise in stupidity, and we need to change that perspective.
What I liked about my conversation with John was it was a reminder of the need to change
that perspective.
In many cases, it's not that VC folks aren't listening.
It's not that private equity folks aren't listening.
It's not that they don't care.
It's how we balance out the need for us to adopt, you know, to solve the problem in a way
that will allow those investors
who are investing not small dollars
to generate revenue and some
basis of return, and balancing that
collectively across the board.
In the essay that I
intro that episode with, I talk about the
use case where a technology
that would have solved,
genuinely solved, a lot of the identity
problems we're trying to solve right now
problems we're trying to solve right now
was put in front of a
VC firm X number of years ago
and was rejected because
he was told that would drive the rest of the portfolio out of business.
You know, while that does not feel good as someone who's trying to solve an identity problem
and watching companies almost 10 years ago try and solve the same problem with inferior
technologies out there, it was not an unreasonable stance to take.
Why adopt something that's going to cause 50% of my portfolio to go away?
It doesn't make that individual evil, though they were kind of rude
about how they did it, and it doesn't make them unreasonable in terms of what they did
within the environment.
And John gave some good insight into how VCs, you know, pick different, look at different
startups, look at the problem, and some of the things that they're doing to try and close
the gap between, you know, the needs of me, the operator, and the needs of them, the investor.
There were two things that he mentioned during that conversation
that I want to reemphasize here.
One is their time horizon.
I mentioned during the episode that I genuinely believe that there are just not enough truly strategic CISOs out there who are thinking about the problem strategically.
And I spend a lot of time on these shows, and when I teach SANS, trying to elevate thinking to truly strategic thinking.
But strategic thinking tends to, at the best case, go five years out.
Usually it's one to three, some cases it's one to four, one to five years out.
In some cases, VCs are looking at time horizons beyond the strategic window.
So truly beginning to stretch the thinking within the cone of plausibility beyond your initial strategic window is hard, even for the best strategic CISOs.
But the other thing that he mentioned that we need to do more of is, if we are going to complain that we're not
seeing investment in those companies that meet our needs, we have to show up and communicate
those needs to venture capitalists.
And things like, you know, the advisory boards or the dinners or the calls, etc., are how
we do that.
So when we choose not to do that, we take away an opportunity for someone who is asking our
opinion to get that opinion and insight so that they can make better decisions.
In other words, if we want to solve the problem, we got to show up, and we don't do that as well as we should.
And I think there's an argument to be made there that, yeah, like, I don't want to say the word politicking, but maybe networking, that aspect of having those conversations.
Some of them may not be fruitful, and that can be both discouraging and frustrating.
But I think it is valuable, because even if only one or two or a handful of them do bear fruit, that does lead to an eventual better market
position for you as a leader to buy better security products that actually have impact on
what you're trying to do. And I thought it was very interesting from John's conversation with you
that, you know, another challenge that he saw was entrepreneurs who don't really want to
commit to being a full-time entrepreneur. Maybe they are professors. Maybe they are semi-retired.
Maybe it could be many other things. Maybe they're multi-entrepreneurs; they have multiple
ideas. And the aspect is that, while the idea may be really good or the product may be very good,
there's a risk element that VCs take on. And that's a reality. If you're going to ask me for a
seven-digit figure to help fund your new idea, I want to know that you're as committed as that
seven-digit figure. And that's not a part-time commitment. And that can be difficult. I've run
into, you know, Arizona State also has an incubation model, where one of our professors, a former
military guy, actually, was developing a product in-house but didn't want to give up
his teaching assignment. So figuring out how to do both of those was very, very difficult.
Yeah. It's something that makes sense as a challenge, but nothing that I would normally
have considered because my thought process was from the VC side. If someone's looking for money,
they're all in. They're ready to go. They're not looking to buy and sell off or just be half in.
I always thought it was, okay, we're going VC. Let's get everyone involved on this.
Yeah, and it gets really interesting because there's that balance between that and putting food on the table.
I had mentioned after the show my time working with Jack Jones, who founded the FAIR Institute and built and created the FAIR model.
I met Jack at the time.
He was working with that model, but he was also working as a CISO for a Midwestern bank at the time, because he
had to keep food on the table while he was developing the company that was deploying, you know, the model at the time.
So balancing those two can be difficult.
So taking a step back, because while we did this reflection on the past couple episodes, I'd like to put it into picture for the whole season.
Because throughout this season, we've had fantastic conversations about emerging technologies.
We've had fantastic conversations about hard realities.
We've had fantastic conversations about existing blind spots that could get worse if they're not addressed.
And I think when you sit back and look at this entire season, how do you feel about this brave new world, this phrase that we've been talking about? What are the major challenges that you're seeing from a business side and from a security leader side, and what are the biggest opportunities that you think are emerging out there?
Great question.
So first, a recap, because we've only used it in a couple of cases throughout the season. The tagline internally for this season was Brave New World. The framing I used leading into the season was: congratulations, you're a CISO, now what? What are some of the things that you are facing beyond just the tech stack, beyond just the incident of the month, or beyond just the new legislation that you need to be aware of? So what I hope this entire season did was allow us to deep dive into some issues, like identity, like fraud, like the regulatory landscape, and AI and quantum, et cetera, to provide that education even for current CISOs who know what the day is like but don't necessarily have the opportunity to deep dive, so they can begin to have those conversations and get a little education as we put the pieces together within the environment.
So that was the intention of the season.
As we are now looking at wrapping up the season, in terms of where my head is at: I say every CISO is an optimist, and I genuinely believe that. Because every day you look at 10 quintillion different ways that things can go wrong, and you get up underfunded, very tired, not enough sleep, et cetera. And you go stand in the gap and say, yeah, we can take them. And then you get up, you know, battered and bruised, and do the same damn thing the next day. So every CISO, in my opinion, is a constant optimist. And as a former CISO, I'm still an optimist. I believe that the world is a little better because we stand in the gap, shoulder to shoulder, trying to beat back the bad guys.
So on the positive side, I believe in the opportunities. I believe in the value of the technology. I believe we are going to see some great things out of AI. I believe we're going to see some great things out of quantum. I believe that technologies are going to continue to evolve to beat back fraud better than we have before. But I also believe that I'm not going to lack for work while all of that is going on.
But the other thing that I would emphasize, which is a cause not for pessimism or skepticism but for concern, is that I believe we are losing sight of the fundamentals.
I believe that, and this is an education problem, it's a critical thinking problem, and it's also a cyber problem. I think the disconnect that exists between old farts like myself and the people we're hiring is that what we're not necessarily seeing are the critical thinking skills.
As it becomes easier, as I hold up my iPhone, for us to get everything we need by Googling on the iPhone, and now by using ChatGPT on the iPhone, we are depending upon external sources for answers as to what went wrong, and we have to understand much less about the underlying pieces and parts of the systems in the environment to solve the problem. And that's a concern within cyber. I was talking to my class that I teach at Berkeley. About half of one of my sections are computer science majors. So I said, okay, there are six of you that are computer science majors: how many of you had to take a basic assembler course in college? And four of them kept their hands down. You know, if you don't understand the basic fundamentals of how the system works, your ability to effectively secure it will be limited.
And as tools make it easier for us to get answers, answers that ChatGPT or anything else will just spit out at us if we frame the right question, our need to understand those pieces and parts will continue to diminish.
I mean, I'm going to sound old again. I'm old enough to remember when these things called script kiddies didn't exist. If you wanted to hack, you damn sure better know the code, versus having an account to pay somebody some Bitcoin to send you a little piece of code to aim at my environment. So, you know, script kiddies are a real thing right now within the environment.
So, you know, our continuing diminishment of the need to understand how things work, as the technology becomes more capable of doing things, I believe is going to represent a significant challenge within the next 10 years to our ability to secure the environment. Now you add that to the conversation you and I had about AI continuing to produce bad code based upon bad code, based upon bad data. We're going to see an increase in potential vulnerability and an increase in potential blast radius at about the same time we have a decreased ability to understand truly what's going on in the environment.
So all I will say is, this is a good time for me to think about retiring, but there's a good chance, like the last two times, I ain't going to be able to, because someone's going to tap me on the shoulder and say we need one more person with his sword and shield standing in the gap, because I think that gap will be bigger unless we solve those problems. Part of that is educational: part of that is the education system figuring out what the requirements are for a good cyber professional. A goodly portion of that is on the profession, because we still haven't figured out what the requirements are in the environment. Part of that is our ability to give back, because among all of us whining and complaining about the lack of talent and skill that we see coming out of various systems, there aren't enough who are stepping up and doing anything about it except whining and complaining. So we need to show up, tell people what we want, and participate in the process, rather than just complain and watch things continue to fall by the wayside. So I am still very positive. I am still very optimistic. But I see that problem cresting the horizon.
I hope, you know, like the theme song from the old show Monk: I may be wrong now, but I don't think so.
Well, Kim, I thank you for your time today to take a step back and reflect on the conversations we've had, not just over the past couple of episodes, but this season in general. It's been different from the last one, but I think just as valuable and insightful.
So I appreciate everything that you've provided and all the quality conversations
that your guests have also provided.
And that's a wrap for today's episode and for this season of CISO Perspectives.
This episode was edited by Ethan Cook, with content strategy provided by Myon Plout, produced by Liz Stokes, executive produced by Jennifer Eibon, and mixing sound design and original music by Elliot Peltzman.
Thanks so much for tuning in and for your support as N2K Pro subscribers.
Your continued support enables us to keep making shows like this one, and we couldn't do it without you.
We're so grateful to have had you with us this season. From all of us here, thank you for listening. We look forward to bringing you more expert insights and meaningful
discussions next season.
Securing and managing enterprise networks shouldn't mean juggling vendors,
patching hardware, or managing endless complexity.
Meter builds full-stack, zero-trust networks from the ground up,
secure by design, and automatically kept up to date.
Every layer, from wired and wireless to firewalls, DNS security, and VPN
is integrated, segmented, and continuously protected through one unified platform.
With Meter, security is built in, not bolted on.
Learn more and book your demo at meter.com slash CISOP.
That's METER.com slash CISOP.
And we thank Meter for their support in unlocking this N2K Pro episode for all Cyberwire listeners.
