CyberWire Daily - AI's impact on business [CISOP]
Episode Date: December 2, 2025
In this episode, Kim Jones sits down with Eric Nagel, a former CISO with a rare blend of engineering, legal, and patent expertise, to unpack what responsible AI really looks like inside a modern enterprise. Eric breaks down the difference between traditional machine learning and generative AI, why nondeterministic outputs can be both powerful and risky, and how issues like bias, hallucinations, and data leakage demand new safeguards, including AI firewalls. He also discusses what smaller organizations can do to manage AI risk, how tools like code-generation models change expectations for developers, and the evolving regulatory landscape shaping how companies must deploy AI responsibly. Want more CISO Perspectives? Check out a companion blog post by our very own Ethan Cook, where he breaks down key insights, shares behind-the-scenes context, and highlights research that complements this episode. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
This exclusive N2K Pro subscriber-only episode of CISO Perspectives has been unlocked for all Cyberwire listeners through the generous support of Meter, building full-stack zero-trust networks from the ground up.
Trusted by security and network leaders everywhere, Meter delivers fast, secure-by-design, and scalable connectivity without the frustration, friction, complexity, and cost of managing an endless proliferation of vendors and tools.
Meter gives your enterprise a complete networking stack, secure wired, wireless, and cellular in one integrated solution built for performance, resilience, and scale.
Go to meter.com slash CISOP today to learn more and book your demo.
That's M-E-T-E-R dot com
slash C-I-S-O-P.
Welcome back to CISO Perspectives.
I'm Kim Jones, and I'm thrilled that you're here
for this season's journey.
Throughout this season, we will be exploring
some of the most pressing problems facing our industry today
and discussing with experts how we can better address
them. Today, we're looking at AI's impact on business. As AI has only continued to advance
and proliferate across every sector, managing its impact is more important than ever.
Let's get into it.
For those filling out their buzzword bingo cards, it's time to talk about artificial intelligence.
I first heard the term AI outside of an academic setting in the fall of 2018.
The incoming CEO of a large company stood up in front of the entire organization and made the pronouncement that AI was going to be the next revolutionary advancement in technology.
Further, he prognosticated that this advancement would be upon us within the next
five years. I remember that many of us in the audience weren't certain where the CEO meant to take
the company, much less at the time the information security team. Shortly after this pronouncement,
the cyber leadership team met to discuss strategic planning and how the CEO's vision would impact
our planning and initiatives. While I was one of the two old guys in terms of experience,
I'd been in the company for less than two months.
As such, I intended to listen and absorb the insights of my new team and new boss.
After two hours of sitting on my hands, the CISO called me out.
I know you're new here, Kim, but your opinion counts as well.
What are your thoughts?
I sat for a moment, took a deep breath, and responded.
I think we're having the wrong conversation.
Up until then, most of the leadership team seemed focused on tweaking their operational plans and adding the term AI to existing initiatives versus looking at the broader questions presented by an AI-driven future.
When one of my peers asked sarcastically what I meant, I grabbed the marker and wrote out a list on the whiteboard.
Does the company intend to build its own AI engine or just integrate into existing third-party products?
AI is data-driven. How do we normalize our data and break down silos securely?
What does breaking down these data silos mean in terms of our compliance posture,
as several of our environments have unique compliance requirements, including our access management posture and controls?
Are there new threat vectors associated with AI other than accelerating existing attacks against our environment?
If acceleration is the main threat adjustment we can expect,
are our products, processes, and tools capable of handling this volume shift?
As AI becomes integrated into toolsets that we use,
how do we evaluate the security of these tools?
How do we as a security team capitalize on the many benefits that AI has to offer
as we continue to whip up on the bad guys?
After a brief back and forth, the consensus was that I just didn't understand how things were done in my new company.
So my top-of-head bullet list was dismissed as irrelevant.
Cut to four years later and the release of ChatGPT in November 2022.
My peers, three-fourths of them the same individuals who were at the 2018 meeting,
and the new CISO now found themselves scrambling to address the above list of questions,
and so many more, as the organization surged to capitalize on AI's advantages as quickly as possible.
Companies are continuing to come to grips with AI and the advances the technology can create.
Unfortunately, in the race to capitalize on these advances, many companies are taking a ready-fire-aim approach to AI adoption.
While this approach is nothing new to the security practitioner, the desire for speedy adoption, combined with the psychological predisposition that technology can be implicitly trusted, has led to many organizations believing false information provided by AI platforms, also known as hallucinations.
Worse, individuals have a tendency to enter information into AI engines without realizing
that the AI platform is a third-party platform outside the scope of control of the organization.
The result: sensitive corporate information and regulated information have been uploaded into AI engines.
CISOs need to ask the hard strategic questions surrounding AI if we hope to stay ahead of the potential pitfalls and challenges
this advancement in technology
might inadvertently cause.
My two cents?
I don't consider it an exaggeration to say that Eric Noggle
is one of the finest minds today
in the area of operationalizing generative AI.
His electrical engineering background
gives him a predisposition
toward meticulously understanding the technology.
His many years of experience as a CISO
allow him to understand both the advantages and risks
associated with any new technological innovation.
And finally, his knowledge as a patent attorney
gives him a unique understanding of the potential legal pitfalls
associated with fast innovation around a largely untested technology.
Eric sat down with me to speak about some of the things
he's been doing lately around AI.
A quick note that the opinions expressed by
Eric in this segment are personal and should not be interpreted as representing the opinions of any
organization that Eric has worked for, past or present.
Hey, I really appreciate you making the time here.
I think it should be a good conversation.
I think you will bring a perspective to the topic that a lot of my listeners need, but don't
necessarily have.
So, again, I genuinely appreciate you taking the hour for me.
Happy to do it.
You and I have known each other for a while, but my audience might not.
Tell us about Eric Noggle.
I am a recovering former CISO, I think, is the best way to say it, working in a security department for a large tech company.
And my background is semi-unique in the sense of I'm an electrical engineer by training, but I've been in the security industry almost my entire career.
And so I come with a double-e as well as an attorney background.
So in addition to being an attorney, I'm a patent attorney.
So I have an interesting mix of backgrounds that I have found to be very helpful in this industry.
And so I've been with my current employer for almost 12 years,
helping them implement a responsible AI program as well as figuring out how to best secure it.
Fantastic.
So responsible AI, convince me that's not an oxymoron.
You know I pull no punches, brother.
It's a goal.
I would say, you know, doing it,
responsibly. A.I. has existed for quite some time, right? And so we've had a responsible
AI program within companies, including our company, for quite some time, but it was
classic AI or ML, right, machine learning. But now with the advent of generative AI and the black
magic that that is in terms of how it operates, companies have had to really consider how to
safely launch it into their environments in ways that basically are their customers would consider
both useful, but also protecting of their information, both company information, as well as
the information we process on behalf of our customers.
So I'm going to take us back half a step because our audience is varied, with very different backgrounds.
You know, I grew up geek as well. So you telling me we've had AI for a while, and AI versus ML for a
while, makes perfect sense to me. Poke at that a little bit so that we understand that distinction in
terms of where we were and what we were doing versus what we need to do now with generative
AI, if you would, please.
Yeah, I mean, the thing that has existed for a long time is really the idea of applying
AI principles for machine learning environments, right?
So if you wanted to detect fraud in large amounts in granular ways,
basically you could train a model that would basically be very good at recognizing patterns within your data
in order to identify them and hopefully stop them in their tracks.
The difference with classic AI is that basically you get a deterministic result.
So if you put in the same data, you get a deterministic, or the same, output.
Whether you do it tomorrow or you do it five minutes later, it is absolutely the same.
With generative AI, there's a randomness component because of how it operates.
And so it isn't deterministic in the sense of you will get slightly different answers
every time you ask the same question, even if in quick succession.
And so unfortunately, as a software development company, you know, we're used to the deterministic side of it.
And so the ability to understand how to operate in this environment safely is something that required taking a risk-based approach as it came into the business.
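A minimal Python sketch, not from the episode, of the contrast Eric is describing: a classic ML check is a fixed function of its input, while generative decoding samples from a probability distribution, so the same prompt can come back differently on every call. The fraud threshold and next-word probabilities below are invented purely for illustration.

import random

def classic_model(transaction_amount: float) -> str:
    # Deterministic: the same input always yields the same label.
    return "fraud" if transaction_amount > 10_000 else "ok"

def generative_next_word(prompt: str, temperature: float = 0.8) -> str:
    # Hypothetical next-word probabilities for this prompt.
    candidates = {"secure": 0.5, "fast": 0.3, "scalable": 0.2}
    # Temperature reshapes the distribution; sampling adds the randomness
    # that makes repeated calls return slightly different answers.
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

print(classic_model(12_000))                   # always "fraud"
print(generative_next_word("Our network is"))  # may differ run to run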
So getting slightly different answers with the same sets of data, even if asked in rapid succession.
And again, I'm going back very basically, Eric, so that our listeners have the full understanding.
It would seem at face value this isn't a good thing.
Why is it?
Why is it a good thing?
Or is it really not a good thing?
Please.
Yeah, no, it is a good thing.
It's really amazing.
It started with the transformer models that came out of Google.
But really, it's unexpected results.
So, again, the generative AI is based on the
concept of a large language model, right? And so a large language model is trained in a certain
way in order to be able to provide, you know, results that are, you know, at an initial first
glance, surprising. So these are the basis of chat bots and other things that basically
your listeners are actually experimenting with, you know, with chat GPT and, you know, Gemini
from Google, Anthropic Claude. These are all large language models that exist in the environment.
Millions and millions of dollars and lots of GPU cycles have been spent to train them.
And it's a very interesting training process, but the reason why they're useful is that basically you can ask it in natural language questions, and it will give you a very well-formed response.
One way to think about how they operate is that they are a regurgitation engine, right?
So if you train it on a whole bunch of books, if you train it on a whole, excuse me, a whole bunch of documents, PDFs, you know, images, these models have trillions of parameters.
It's hard to believe how many parameters are in place inside these large language models.
But what it really is doing at some basic level is it's predicting the next word in a sentence,
and it keeps going. You know, based on the direction it's going, basically, it can give you a very coherent
response, in English in most cases, but now doing pretty well in other languages, a response that basically
makes sense to humans. So you can interact with it like it actually is a person. You can ask it a
question and you can get a coherent response. And that's nothing like the original classic AI.
Classic AI would say, does it fit this parameter?
Does it fit this profile?
As it's been trained, the answer is yes or no.
And you can use it.
It's very useful.
So we still use ML for a lot of useful things.
But generative AI is fundamentally a new way of interacting with computers
that actually allows us to interact with natural language
and get natural language responses.
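To make the "predicting the next word" idea concrete, here is a toy autoregressive loop, my own illustration rather than anything from the guest: a tiny lookup table stands in for the trillions of trained parameters, and the loop repeatedly samples a continuation and feeds it back in as context.

import random

# Hypothetical word-to-next-word probabilities standing in for a trained model.
NEXT_WORD = {
    "the":      {"model": 0.6, "prompt": 0.4},
    "model":    {"predicts": 0.7, "responds": 0.3},
    "predicts": {"the": 0.5, "words": 0.5},
    "responds": {"coherently": 1.0},
}

def generate(seed: str, max_words: int = 6) -> str:
    words = [seed]
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if not dist:  # no learned continuation, stop generating
            break
        # Sample the next word in proportion to its probability, then repeat.
        # Real LLMs condition on the whole sequence, not just the last word.
        nxt = random.choices(list(dist), weights=list(dist.values()), k=1)[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))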
That actually makes sense.
Okay.
You said it was a regurgitation model.
Regurgitation does not necessarily indicate analysis, but I'm assuming that given infinite
variables, infinite inputs, infinite processing power and infinite time, the end result in some
form or fashion would be the equivalent of human analysis. You know, what we're really doing
when we do human analysis is just taking the facts in front of us and putting them together
to determine, you know, the pieces that might be missing. Would you agree with that statement
or am I oversimplifying?
I don't think you're oversimplifying.
That is actually kind of how it works in the sense of the reason it's useful is because of its training.
So the reality is that of all the different ways an LLM could respond,
and it takes an amazing amount of processing power and a whole lot of skill
to train a language model, a large language model, in a way that makes it useful.
And so that's why all these companies are
getting these amazing valuations and they're, you know, getting amazing offers for, you know,
people that are great in this field, you know, millions and millions of dollars to go work for
one of the big three or the big six in this industry. But what I would tell you is that
the training process is basically, you know, how the language model learns, right? But is it
reasoning? You know, I would say it's getting better at reasoning and getting closer as an
approximation to reasoning, but it's really performing vector math.
And at the end of the day, it's not reasoning in the way that humans consider reasoning,
but it approximates it with ever better clarity, if that makes any sense.
And that makes perfect sense to me.
I guess my concern is, and we've seen this happen, we saw this happen with outsourcing,
offshoring, wireless, cloud, et cetera.
The expression I use is ready fire aim.
You know, we want to pull the trigger and say, okay, we're going to use it for this,
and then figure out whether or not it makes sense or not.
And I don't mind that from an innovative standpoint, but it does exacerbate risk in many business environments.
You know, talk to me regarding, and I mentioned them earlier, you know, we've had cases where AI has
exhibited bias against certain categories of individuals as it's providing responses, some favorable,
some unfavorable.
We've had cases of hallucinating within the environment.
Talk to me about those things, how they happen, and what can be done to prevent that,
if anything?
Yeah, no, it's a great question.
So absolutely, that's part of responsible AI, is figuring out how to make sure this AI,
when you expose it to actual customers, doesn't offend them.
It doesn't do things that are basically against the law or against what you would want it to do from a reputational standpoint for your company.
And so what we have found that we had to do is actually build a system.
Think of it as a firewall.
You know, given your listeners and how their backgrounds, I would think most of them understand the concept of a network firewall.
But frankly, in an AI sense, particularly generative AI, you know, we actually built
one of the first ones in the industry, an actual AI firewall.
So to get from one side to the other, if you think about it this way.
So I put a prompt in, essentially, it has to go through a series of ML modules,
which is where we started.
We had 13 individual models.
One was an anti-bias module.
So any prompt from a user that is likely to result in a biased response,
basically it would get flagged.
And then we would either rewrite it on behalf
of the customer, or we would basically block it and say we can't answer that question because it's obvious that you're seeking a biased response. But we also gate it on the way back, so think of it as a two-way firewall, in which the completion, as it's called, coming back basically also cannot be biased. And so we check for that, and then we basically continually retrain these modules. Initially it was a bunch of ML modules that we trained to be very good at detecting
risks like prompt injection, fairness, accuracy.
These are all things that we had modules for.
But then you have to worry about weird things like emojis.
Hard to believe, but emojis basically can get the LLM to respond in very unpredictable ways.
And so basically, same thing with code: code detection.
You can actually make the model hallucinate if you pass Python code
or other code as part of your prompt.
So we use code detection.
But essentially, to your point, right,
companies that are trying to put this into their environment
need to have the ability to make sure that it is safe.
And out of the box, they are not.
So I would say that the initial training
that all these LLMs go through
is intended to be safe,
but we have found in many cases that we have to supplement
what the actual LLM manufacturer or trainer has done
with actual code on our environment
that basically makes it a whole lot safer.
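A simplified sketch, not from the episode and not the actual product, of the two-way gating described above: a prompt must pass inbound checks before it reaches the model, and the completion is checked again on the way back. Real deployments use trained ML classifiers for each risk; the regex heuristics and function names here are placeholder assumptions.

import re

# Toy stand-ins for trained detection modules (bias, prompt injection, code).
def looks_like_prompt_injection(text: str) -> bool:
    return bool(re.search(r"ignore (all|previous) instructions", text, re.I))

def looks_like_code(text: str) -> bool:
    return bool(re.search(r"(def |import |SELECT )", text))

def looks_biased(text: str) -> bool:
    return "which group is worse" in text.lower()  # placeholder heuristic

INBOUND_CHECKS = [looks_like_prompt_injection, looks_like_code, looks_biased]
OUTBOUND_CHECKS = [looks_biased]

def guarded_completion(prompt: str, call_llm) -> str:
    # Inbound gate: inspect (or rewrite) the prompt before the LLM sees it.
    if any(check(prompt) for check in INBOUND_CHECKS):
        return "Blocked: we can't answer that question."
    completion = call_llm(prompt)
    # Outbound gate: the completion also cannot violate policy.
    if any(check(completion) for check in OUTBOUND_CHECKS):
        return "Blocked: the response could not be returned safely."
    return completion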
Okay, so understanding what you're saying
and that makes perfect sense to me,
but also understanding that I'm talking to someone
who was on staff at a Fortune 400 company
with several hundred people on security staff
and a fairly robust and sizable security budget.
What does someone not of your size and scope do to put in reasonable levels of control
as they're looking at bringing in AI within their environment?
It's a good and bad scenario.
I would say there's a whole bunch of startups as well as now acquired startups
that are now in larger companies that offer this as a service.
And so when we looked around two years ago, two plus years ago now, we found that basically
what we needed to be safe in this space didn't exist.
So we ended up having to build it ourselves.
But I would say smaller companies, companies that don't have the same kind of sizable security
budget and staff, have the ability to consume these services on a per-seat or per-application
basis that is not completely out of reach for companies with more modest resources.
And so, you know, there's a new startup every day in this space, literally, or many, more than that,
on a daily basis. A lot of them are coming out of Israel, but also in other parts of the world.
Silicon Valley has its own share. And so I would encourage companies and small proprietorships
that basically want to consume it to consider the risks, you know, of making it available
and to put reasonable, you know, capabilities in front of that, that allow them
to operate in a safe manner.
Okay, we talked about a couple of risks, Eric, through this conversation.
But for a company or a small shop or whomever that is approaching the "we want to deploy AI in some sort of logical fashion" stage, et cetera.
What are the top three or four things from a risk standpoint that they need to be aware of?
Well, the best thing that we have found for anyone to do, large or small,
is to basically use it for what it's good at and then not use it for what it's not good at.
That would be my top one.
The second one is to basically constrain it.
In other words, unbounded chatbots are
not considered very useful. They're much more likely to come back with off-topic responses.
It's similar to what you see when you query Google today. If you query Google today, it comes
up with a Gemini AI response, as well as the things that you're used to seeing, which is the
links to potential page rank answers to your query. What Google is doing with its Gemini LLM is basically
saying, hey, you know, you prompted it the way you've always prompted it, which is search for this, right? You put in a topic and it gives you a bunch of things to go choose and click on. But the AI thing is actually giving you a sample of what it can do for you. In other words, it's basically saying, you know, let me summarize the best things from the links below in a way that's very consumable. And so a lot of people thought that, you know, AI, or OpenAI in particular, was going to kill Google's
search business. I read an article that says their search business is better than ever,
but partially because they have figured out how to profit from it in both ways,
which is people still want the search, but they also love the AI summary at the top, right?
You can discount it, but it's getting better and better.
And I find it's amazingly useful.
So I think that's two of the top things.
The third one is basically constraining
the prompt, in the sense of the ability to say an unbounded prompt is not useful. Let me just
capture the things that I actually want the user to interact with and then only have those components,
right? And so if you're in the air-conditioning business, you give a little bit of information
on your outgoing phone call that says, you know, thank you for calling, and then basically
ask the person to identify three things
that would be useful for us to know, that will actually optimize for our scheduling software and those kinds of things.
And so, you know, maybe that was more elaboration of number two, but the third one is basically look for tools to enhance, you know, on an as-needed basis, to protect against hallucination and some of the other problems that come with the use of the LLM.
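As an illustration of "constraining the prompt", here is a short sketch, not from the episode, of a scoped chatbot request: a narrow system prompt plus a cheap topic pre-filter keeps an air-conditioning booking assistant from acting as an unbounded chatbot. The topic list and message format are assumptions for the example.

SYSTEM_PROMPT = (
    "You are a booking assistant for an air-conditioning business. "
    "Only answer questions about scheduling, pricing, or service area. "
    "If asked about anything else, say you cannot help with that."
)

ALLOWED_TOPICS = ("schedule", "scheduling", "price", "pricing", "service area")

def build_request(user_message: str) -> list[dict]:
    # Cheap pre-filter in front of the model; the system prompt above is
    # the second layer that keeps the conversation on topic.
    if not any(topic in user_message.lower() for topic in ALLOWED_TOPICS):
        user_message = "The caller asked something off-topic: " + user_message
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]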
Have you ever imagined how you'd redesign and secure your network infrastructure if you could start from scratch?
What if you could build the hardware, firmware, and software with a vision of frictionless integration, resilience, and scalability?
What if you could turn complexity into simplicity?
Forget about constant patching, streamline the number of vendors you use, reduce those ever-expanding costs, and instead spend
your time focusing on helping your business and customers thrive.
Meet Meter, the company building full-stack zero-trust networks from the ground up
with security at the core, at the edge, and everywhere in between.
Meter designs, deploys, and manages everything an enterprise needs for fast, reliable,
and secure connectivity.
They eliminate the hidden costs and maintenance burdens, patching risks, and reduce the
inefficiencies of traditional infrastructure.
From wired, wireless, and cellular to routing, switching, firewalls, DNS security, and VPN,
every layer is integrated, segmented, and continuously protected through a single unified platform.
And because Meter provides networking as a service, enterprises avoid heavy capital expenses
and unpredictable upgrade cycles.
meter even buys back your old infrastructure to make switching that much easier.
Go to meter.com slash CISOP today to learn more about the future of secure networking and book your demo.
That's M-E-T-E-R dot com slash C-I-S-O-P.
So let's put back on your
recovering CISO hat and think about, you know, any new technology injected into an environment
presents some level of cyber risk. Some of these, we probably already know, but from a cyber
standpoint, what are the concerns that we have regarding, let's start with unbounded AI
operating within an enterprise environment? Well, I mean, the biggest concern that we had,
and I think anyone will have is data loss, right?
Basically, you can leak your data into the model.
And so one of the first things we did was by contract,
have our own standalone instance of these LLM models.
So we paid extra and executed contracts to be able to use one that was dedicated to us.
It was not multi-tenant.
If you read the actual legal language around these models that are available on the
internet, you know, they basically say we reserve the right to take all your data and use it
to train our models. And they do that in a constrained way because it can be bad. And so they,
they pick and choose. But for a company or even an individual that pays for their own subscription
to, you know, ChatGPT, you know, individually, you can get assurances that your data will not
leak in that way. But to your point, you know, there's ways that basically
just interacting with the model can allow data leakage.
So there's a thing called prompt injection,
the ability to put things into the prompt
that will cause the actual LLM to go off the rails.
You can get it to provide a biased response.
You can get it to provide a profane response.
You can get it to start to do things
that explain some of the private information
that's in the model, like the model weights.
These are all things that basically the model creator is trying to prevent.
But in many cases, you have to take steps to make sure that those things don't happen.
And so the AI firewall that we put in place,
it was really to guard against the main risk, which is leaking your data in ways that you should not.
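A minimal sketch, not from the episode, of one basic safeguard against this data-leakage risk: redact obvious sensitive values before a prompt ever leaves the organization for a shared, third-party model. The regex patterns are simplistic stand-ins for real data-loss-prevention classifiers.

import re

# Simplistic patterns standing in for real DLP classifiers.
PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
    "CARD":  r"\b(?:\d[ -]?){13,16}\b",
}

def redact(prompt: str) -> str:
    # Strip recognizable sensitive values before sending the prompt out.
    for label, pattern in PATTERNS.items():
        prompt = re.sub(pattern, f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Refund card 4111 1111 1111 1111 for bob@example.com"))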
Two questions I wanted to ask you, but I'm going to start with going about five degrees off.
I've heard, we've all heard, regarding some of the
things AI can and can't do.
Let's talk code.
I've heard a statement actually made by someone in your company as well that AI prompted appropriately
can potentially code about 80% as good as an entry-level software engineer.
Agree or disagree on a personal level?
I think I agree.
Or no comment if you need no comment.
No, no, no, it isn't a no comment.
No, I think it's actually quite good for doing prototypes.
I think you can get all the way done with an actual prototype,
but can it actually produce code that's in compliance with your coding standards?
Can it not have the errors that basically an entry-level engineer would make or often does make?
I think it's a little bit overblown to basically say it's going to replace all of our entry-level engineers,
because one way of thinking about that is that if you don't have entry-level,
how are you going to get the next level up?
Fair.
So the reality is that the death knell for the software coding industry, I think, is a bit premature.
But is it going to change the requirements for those entry-level engineers?
And here's your hypothetical.
Yeah.
Right now, I pay that entry-level engineer to code.
Do I now need to have that entry-level engineer who understands the critical thinking necessary
to review the code that is spit out via an AI engine,
to determine whether it meets standards and put in that last 20%,
which is a slightly different skill set
than the one I see entry-level engineers being recruited on today.
What are your thoughts?
No, it's actually a very big concern for a company
that does software development as our main output.
And so what I would tell you is that, you know,
one, I don't think entry-level engineers are obsolete.
I think the ones that know how to use
the tools well will succeed, where those that don't know how to use the tools well will not
succeed. But the big thing was we wanted to make sure the engineers were accountable for whatever
code that they deliver. And so to your point, lazy engineers, you know, produce lazy code.
And so, you know, the whole idea of coding using this method, we had to train people to basically
say, this is your code. You are responsible for whatever you check in. If you borrowed it from
some other place, you better know how it's operating and be able to explain it, right? And then we have
the four-eyes rule or the two-people rule. Anything that gets merged into the environment basically
has to be peer-reviewed. But these days, it's gone even further, Kim, right? The Code Whisperer,
you know, and some of the other ones out there, like Windsurf and Cursor.
These are what they call vibe coding tools, which allow people that don't even have a coding background to basically use natural language to say, I need a routine that does the following things.
Yeah.
And it can get you very close to that.
And those companies are having amazing valuations because of the utility of that.
But again, I think the way it's going to work is that you can get close with that.
but it's something that basically will have to be vetted.
And the one thing that they don't talk about is vibe debugging.
So vibe coding works, you know, and it gets you to the 80%,
but you have to understand the 80%,
and then you have to finish it with the other 20% for it to be useful.
Yeah, yeah, that makes sense to me.
So let me shift and ask you to now put on, you know,
your lawyer hat for a moment.
what are you seeing in the regulatory landscape and the legal framework right now or the legal fabric?
And let's start U.S. base for the legal fabric.
If you're seeing anything significant overseas, we'd love to hear it as well.
Because AI has created new and interesting challenges, which have started to show up in terms of some legal challenges within the environment,
and I expect it to continue.
What are you seeing in the environment now
and what do you predict will come in the future?
Well, in lieu of the government,
the U.S. government, basically having a unified voice on this,
which given the current state of our union
and Congress in general, I think, is unlikely.
You have individual states that are leading in this space,
similar to CCPA, CPRA, you know,
California was out in front on the,
privacy side.
Colorado has a, I would call it flawed, law that is coming into force, and they had a chance
to fix it.
They didn't, but they know it needs to be fixed.
Talk to me, if you're willing to deep dive and if you can't, I understand, why is it flawed?
Just because it's ambiguous.
And so a company, a company like ours that is trying to evaluate this, when we operate in all 50
states and certainly around the world, we try and take a lowest-common-denominator approach to
regulations. And so we pick the most restrictive interpretation when we can and code to that, if you will,
so therefore we would meet all the lesser ones. But New York, Massachusetts, and Colorado have all
kind of split off and done different things in that space. The one that's closest to being in
force, which will happen, I think, early next year,
is Colorado. And so it affects some of our money lending, you know, in the commercial space and
some of those kinds of things. And we think we have a way of dealing with it, the ambiguity in the
law. But we're also trying to predict where it's going to go. But what I would tell you is that,
you know, from a regulation standpoint, like it always is in the cybersecurity space as well
as privacy, you know, in general, legislation tends to lag the actual industry and where it's
going. And so we keep a strong eye out on where things are going in that space and then we
code to it. To your point on Europe, Europe has the AI Act, right? And so they've kind of said,
you know, if it's in the following high risk areas, you're not allowed to do it at all. Or there's
a whole bunch of regulation that will come with it. And then here's slightly lower risk and it has
these, you know, semi-relaxed requirements. And then the ones that basically aren't really
of a concern basically are allowed to go into force without a whole lot of review from the
regulators.
If I were to summarize a lot of what you're talking about within the environment and put
on my old CISO hat, a lot of this falls under the concept of governance within the environment.
Talk to me about the challenges in terms of, you know, starting up the AI governance,
having the, I mean, you mentioned it yourself, Eric, to quote Liam Neeson in Taken,
you have a unique set of skills that most people don't have or have within their teams within the
environment. So, you know, let's talk about the challenges of starting up AI governance.
Let's talk about the requirements to start up AI governance and, you know, some of the things you've
seen, some of the things that went well, some of the things that didn't go well within the
environment. Talk to me.
No, that's, that's actually a great question. What I would, what I get from talking to CISO peers is that
they wish that they hadn't, you know, basically waited to create a governance program until after the
horse was already out of the barn within their business units. Yeah, the thing that they most are, you know,
and they even use the J word, jealous of, of what we have done, is basically that we started with
a risk-based approach. Um, I was asked to write the risk paper, you know, two and a half years ago for
the company. And so we identified all of these risks. And then they gave me the charter to go
build something to protect against each of those risks. And so we hired a technical team. We
brought them in. We've hired experts out of universities that are PhDs in this space. And we
took advantage of all that expertise to create an environment plus a single path. And so the
biggest thing that they wish they had was a single path that all their business units were
forced to use that basically provides transparency, visibility, observability in the environment,
as well as the security protection. So if you use, we call it a paved road. If you use what we
have announced as GenOS, we also have a product that we call internally Gen SRF or security
risk and fraud. And that's where all of those individual ML models
have started, but what I would tell you is that they most wish they had established a risk-based
approach and then have the will, the business and internal politics will, to basically say,
this is the one way for people to make these experiences available to their consumers.
All right. I think we're going to have to leave it there. Eric, I really appreciate you taking the time to lay this out,
for me and for our listeners as well.
And again, thanks as always, man,
and I'm looking forward to catching up with you real soon.
Thank you for this opportunity, Kim,
and look forward to seeing you soon.
And that's a wrap for today's episode.
Thanks so much for tuning in
and for your support as N2K Pro subscribers.
Your continued support enables us to keep making shows like this one,
and we couldn't do it without you.
If you enjoyed today's conversation and are interested in learning more,
please visit the CISO Perspectives page to read our accompanying blog post,
which provides you with additional resources and analysis on today's topic.
There's a link in the show notes.
This episode was edited by Ethan Cook,
with content strategy provided by Myon Plout, produced by Liz Stokes,
executive produced by Jennifer Eiben,
and mixing sound design and original music by Elliot Peltzman.
I'm Kim Jones. See you next episode.
Securing and managing enterprise networks shouldn't mean juggling vendors,
patching hardware, or managing endless complexity.
Meter builds full-stack, zero-trust networks from the ground up,
secure by design, and automatically kept up to date.
Every layer, from wired and wireless to firewalls, DNS security, and VPN
is integrated, segmented, and continuously protected through one unified platform.
With Meter, security is built in, not bolted on.
Learn more and book your demo at meter.com slash CISOP.
That's METER.com slash CISOP.
And we thank Meter for their support in unlocking this N2K Pro episode for all Cyberwire listeners.
