CyberWire Daily - Security remediation automation. [CSO Perspectives]
Episode Date: September 30, 2024

Rick Howard, N2K CyberWire's Chief Analyst and Senior Fellow, turns over hosting responsibilities to Rick Doten, the VP of Information Security at Centene and one of the original contributors to the N2K CyberWire Hash Table. He makes the case to invigorate the automation first principle cybersecurity strategy. In this case, he is specifically addressing remediation automation.

References:
Staff, n.d. National Pie Championships [Website]. American Pie Council.
Rick Doten. Rick's Cybersecurity Videos [YouTube Channel]. YouTube.
Joe, 2020. The Unbearable Frequency of PewPew Maps [Explainer]. Stranded on Pylos.
Aanchal Gupta, 2022. Celebrating 20 Years of Trustworthy Computing [Explainer]. Microsoft Security Blog.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
DeleteMe's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for DeleteMe.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash N2K and use promo code N2K at checkout. The only way to get 20% off is to go to joindeleteme.com slash N2K and enter code
N2K at checkout. That's joindeleteme.com slash N2K, code N2K.
Hey, everybody.
Welcome back to Season 15 of the CSO Perspectives podcast.
This is Episode 3, where we turn the microphone over to some of our regulars
who visit us here at the N2K Cyber Wire Hash Table.
You all know that I have a stable of friends and colleagues who graciously come on the show to provide some clarity about the issues we are trying to understand.
That's the official reason we have them on the show.
In truth, though, I bring them on to hip-check me back into reality when I go on some of my crazier rants. We've been doing it that way for almost four years now. And it occurred to me
that these regular visitors to the Hash Table were some of the smartest and most well-respected
thought leaders in the business. And in a podcast called CSO Perspectives, wouldn't it be interesting
and thought-provoking to turn the mic over to them for an entire show? We might call it Other CSO Perspectives.
So, that's what we did.
Over the break, the interns have been helping these Hash Table contributors get their thoughts together for an entire episode of this podcast.
So, hold on to your butts.
This is going to be fun. But, but, but...
My name is Rick Howard,
and I'm broadcasting from the N2K CyberWire's
secret Sanctum Sanctorum studios,
located underwater somewhere along the Patapsco River
near Baltimore Harbor, Maryland, in the good old U.S. of A.
And you're listening to CSO Perspectives,
my podcast about the ideas, strategies, and technologies
that senior security executives wrestle with on a daily basis.
Rick Doten and I have been friends forever, and he is a man of many talents: bartender, yoga instructor, boxer, YouTube host, rock climber, and foodie. In his past life,
he judged the annual National Pie Championships, where amateur, professional, and commercial pie bakers compete in their categories for the best pies in the country.
How great is that?
And he is also a world-class cybersecurity mind.
He's so smart that we've had him on the show 14 times. In his current gig, he is the VP of Information Security at Centene, ranked 22nd in the Fortune
500 list this year.
He advises a boatload of cybersecurity startups and knows practically everybody who is anybody
in the InfoSec profession.
He's a big deal.
For this show, he's taking on the cybersecurity first principle strategy of automation to specifically talk about security remediation automation.
And at the end of the show, when he gets done, I'll come on and ask him a few questions about it.
Here's Rick Doten.
Thanks for that great introduction, Rick.
Wow, that's a trip down memory lane.
Hey, everybody.
My name's Rick Doten,
and I'm so happy to be talking about this topic today.
And that's because today's infrastructures
are so complex and dynamic
that if we're still trying to rely on humans
for configuration updates and patches
and vulnerability remediations,
then we're never going to get ahead.
Remediation is a journey, not a destination.
And in cloud workloads,
there's never just hundreds of vulnerabilities
or configuration changes.
There are thousands or tens of thousands.
We need to be able to automate this to scale remediation without impacting the organization, and that will take a combination of solid governance process and supporting technology. Notice I'm specifically leaving the people out, because those are who we're supporting. And essentially, governance is of the people, by the people, right? There are several new
tools coming out that support automated remediation workflows, and some leverage AI to determine what
to fix, how to fix, and others that will automatically remediate.
But those can only be effective, without potential negative impact, when there's a process in place that has automated QA and testing gates at scale. And yes, I understand that I'm advocating for this topic in the wake of CrowdStrike, but I'll talk about that event later. Our main challenge is that we have no shortage of tools that find problems,
whether vulnerability scanning, posture management, application code scanning,
asset inventories, attack surface management, all those things.
We have lots of things that tell us we need to fix things.
But the problem is the security team doesn't usually fix things.
It's the IT department that fixes things.
And frankly, IT often resents security teams for continuously tossing reports of what they need to fix over the fence.
That resentment is also due to giving them more work to parse the report and figure out what they need to do.
Or these reports don't align with the change control process IT uses.
So as a result, IT gets overwhelmed.
These reports usually have little context about what the organizational risk is,
often leveraging only the scanning tool's severity ratings.
So a severe finding in a publicly facing system
where the vulnerability has a known exploit
is very different than a severe finding
in a development system in a test lab
deep inside the organization. Some vulnerability tools can help by creating tickets for them, but
I know one case where tickets were actually just put into the security team's ServiceNow instance,
while the IT team used a different platform. So the IT team was forced to create accounts in that
new system, or script something out to pull what they needed to do. Again, adding more work into the remediation process.
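To make that contrast between a scanner's severity rating and actual organizational risk concrete, here's a rough Python sketch of context-aware scoring. It isn't from the episode or any particular tool; the field names and weights are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    scanner_severity: float   # e.g., a CVSS-style 0-10 rating from the tool
    internet_facing: bool     # is the asset reachable from outside?
    known_exploit: bool       # is a public exploit known to exist?
    environment: str          # "prod", "test", or "dev"

def risk_score(f: Finding) -> float:
    """Blend the scanner's severity with exposure and environment context."""
    score = f.scanner_severity
    score *= 2.0 if f.internet_facing else 1.0                  # exposure matters
    score *= 1.5 if f.known_exploit else 1.0                    # exploitability matters
    score *= 0.3 if f.environment in ("test", "dev") else 1.0   # lab systems rank lower
    return score

# Same "severe" rating from the tool, very different organizational risk:
lab = Finding("dev-lab-box", 9.0, internet_facing=False, known_exploit=False, environment="dev")
edge = Finding("public-web-01", 9.0, internet_facing=True, known_exploit=True, environment="prod")
print(round(risk_score(lab), 1), round(risk_score(edge), 1))    # 2.7 27.0
```

The point is just that exposure and exploitability swing the priority far more than the raw rating does.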
Because on the IT side,
once you're given the findings, that's the start.
They still need to research what the impact is,
prioritize, determine which team is responsible,
assign a ticket to their platform,
which may vary depending on if, for instance,
it's a code change or an infrastructure change,
and then find or create the remediation.
This step may involve finding or downloading a patch or researching a code
or some other configuration change.
Then they need to determine if there are dependencies that this fix might break.
And then finally, they actually test the fix and push it out at scale.
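As a rough illustration of those IT-side steps, here's a minimal Python sketch that models them as an explicit, ordered pipeline. The step names, the RemediationItem type, and the "platform-eng" team are made-up placeholders, not any vendor's workflow API.

```python
from dataclasses import dataclass, field

@dataclass
class RemediationItem:
    finding_id: str
    owner_team: str = ""
    ticket_id: str = ""
    status: str = "new"
    history: list[str] = field(default_factory=list)

def assign_team(item: RemediationItem) -> None:
    # In practice this would look up ownership by asset, platform, or repo.
    item.owner_team = "platform-eng"

def open_ticket(item: RemediationItem) -> None:
    # In practice this would call the owning team's own ticketing platform,
    # not the security team's instance.
    item.ticket_id = f"IT-{item.finding_id}"

def mark(step_name: str):
    # Placeholder for steps that are research or testing activities rather
    # than state changes: impact research, dependency review, testing, etc.
    def step(item: RemediationItem) -> None:
        item.history.append(step_name)
    return step

PIPELINE = [
    mark("research impact"),
    mark("prioritize against other work"),
    assign_team,
    open_ticket,
    mark("find or build the fix"),
    mark("check for dependencies the fix might break"),
    mark("test the fix"),
    mark("deploy at scale"),
]

def run(item: RemediationItem) -> RemediationItem:
    for step in PIPELINE:
        step(item)
    item.status = "remediated"
    return item

done = run(RemediationItem("CVE-2024-0001"))
print(done.ticket_id, done.status)   # IT-CVE-2024-0001 remediated
```

In a real shop, each of those placeholder steps is where the actual work and the change control process live; the value of writing the pipeline down is that it can then be measured and automated piece by piece.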
There are new remediation workflow tools that support prioritization,
normalization, and deduplication of findings to route them to the appropriate team,
and even create tickets to assign to specific people. You can do all that today with SOAR tools,
the Security Orchestration, Automation, and Response tools, but that's only if there's a process and
workflow to support that automation and you've already implemented it. And while
these are all great and bring tremendous improvement, they only get us so far.
Another step within this remediation workflow process is the risk assessment, deciding whether to
accept or mitigate the risk. Not much can be transferred unless there is a compensating control
on another platform. So unfortunately, many of these risks are accepted because they can't practically be remediated without impact to the
systems, whether due to compatibility requirements on the applications, resource
requirements, performance or latency limits, or stability reasons. And often the problem is just
a basic lack of resources. There aren't enough people to fix all the things. So they only focus on the most
important remediations. And hopefully the process was effective enough in prioritizing the right
ones in the first place. 20 years ago, I ran ethical hacking teams and we did both internal
and external network testing. But after a while, customers would just opt out of the internal
testing because, as a customer told me, they didn't want to know about the internal findings, because if it was documented, they would be on the hook to eventually fix it.
So they just accepted the risk based on it being inside the firewall. When we were doing testing on
just a handful or a couple dozen externally facing systems, that would only produce a dozen or so
findings. But against hundreds or thousands of workstations, servers, and network devices,
that would generate thousands of findings that they didn't have the resources or the time to fix.
And at that time, there wasn't a mature process to prioritize. We just all relied on the severity levels from the tools, like I mentioned before. We also then had to argue with the IT department
when we were reviewing the findings,
convince them they were actually real issues to fix,
because many of the IT shops were used to their security teams using scanning tools
to produce a whole bunch of false positives.
So doing vulnerability testing or pen testing by hand is very different
than just using automated tools and letting them produce a report.
This is because there's a human in the loop to verify these are real findings. Back then, we had a hierarchy of testing. At the bottom
were those automated vulnerability scanning tools that automatically generated reports.
The next level was vulnerability testing, which used the automated tools, but then the findings
were manually verified by a human and evidence was collected that they existed. And then the top level is penetration testing, or ethical hacking,
where, after the vulnerability testing and the human verification, the tester would then exploit
the findings to see how far they could get. Pen testing is similar to red teaming, but red teaming
is where you are mimicking a specific adversary's capabilities against a specific target.
So to convince the IT department this isn't a false positive, we would include screenshots of success as evidence.
But even then, for some, that wasn't enough because they would actually insist that it couldn't be true because they already patched that system.
Or they actually literally accused us of photoshopping the picture to make them look bad.
So in those cases, we would have to drop a file on the system that read,
we were here, and that was always fun.
After a few minutes of protest from the IT team that that FTP vulnerability wasn't possible to exploit externally,
they'd have to look in the directory and see our drop file that proves they were wrong.
We also would run into policy roadblocks, or loopholes, really. I remember we
did an external pen test of a state government network, I won't say which, even if it was 20
years ago. After we gave them a report, they created a remediation plan, which we reviewed
and approved. Then the next year, as per their policy, we tested them again. But we found the exact same findings.
It was like nothing changed.
When we asked about this in the outbrief, they indicated that their policy was to create a remediation plan, not actually fix things.
So they just did what was required because they didn't have the resources to actually fix it.
I was in shock. Now I'm not surprised by anything.
So in subsequent years, when scoping pen tests, I would always ask: is it your intent to actually fix these, or just create a remediation
plan? So that was a lesson learned. I was always amazed how many people answered that they just
had to test it and create the plan, not actually fix it. I got to experience this firsthand 10
years ago when I was a CISO of a mid-sized multinational company. We had a tiny security team and a small IT team of less
than eight people. Most of our process was ad hoc, and I spent most of my time convincing
them they needed to fix things, but I could see they only had one or two people with the expertise to
do it on the different platforms. And some of our findings took remediation research to come
up with a fix and then script how to fix it, and they didn't
have the expertise to do any of that. So I had one of my guys do it: get the
details, create the script, package it, and run it for them. Otherwise we'd never get
anything fixed. Boy, I wish SOAR had been invented back then.
So how do we automate? I'm fortunate enough to talk to more vendors and startups than
most people. So I get to see new categories of tools develop and how different startups approach
different problems. First, as I mentioned, there are the workflow support tools. These will crowdsource
findings from the scanners, whether it be the network, the applications, the posture management
tools, bug bounty groups, DLP data, etc.,
deduplicate and group them together, determine who's responsible for remediation,
what workflow process they use, develop a priority based on business impact,
not the CVE severity rating, and then create a ticket in the appropriate ticketing system.
Some tools also will batch them, so as not to send all 100 findings at once,
but queue them up to push only like 20 a week to throttle if needed. Others will include steps
for a person to take, and still others will read the tickets to find similar issues to highlight
that this has been fixed elsewhere before, and this is how it was done. And because these tools have visibility into both sides,
the ticketing system and the testing system,
if they see a finding again,
they can just increment the open ticket where it was first seen
instead of creating a new ticket.
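Here's a minimal Python sketch of that dedupe-and-increment behavior, together with the "only push about 20 a week" throttling mentioned above. The in-memory ticket store and field names are stand-ins for illustration; a real tool would talk to the owning team's own ticketing platform.

```python
import hashlib

WEEKLY_BATCH_LIMIT = 20                  # throttle: how many new tickets per batch
open_tickets: dict[str, dict] = {}       # fingerprint -> ticket record (in-memory stand-in)

def fingerprint(finding: dict) -> str:
    """Stable identity for a finding across scans: asset plus rule/CVE id."""
    key = f"{finding['asset']}|{finding['rule_id']}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def ingest(findings: list[dict]) -> list[dict]:
    """Dedupe incoming findings against open tickets and throttle new work."""
    new_this_batch: list[dict] = []
    for f in findings:
        fp = fingerprint(f)
        if fp in open_tickets:
            open_tickets[fp]["times_seen"] += 1          # already known: bump the open ticket
        elif len(new_this_batch) < WEEKLY_BATCH_LIMIT:   # room left in this batch
            open_tickets[fp] = {"finding": f, "times_seen": 1}
            new_this_batch.append(f)
        # anything over the limit simply waits for a later batch
    return new_this_batch

batch = ingest([{"asset": "web-01", "rule_id": "CVE-2024-1234"},
                {"asset": "web-01", "rule_id": "CVE-2024-1234"}])     # same finding seen twice
print(len(batch), open_tickets[fingerprint(batch[0])]["times_seen"])  # 1 2
```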
That interface can then track the remediation status
based on what teams, platforms, or business areas are being reviewed,
sending reminders to them and measuring the teams
based on SLAs. These give a central picture of the process. The next level of tooling is the
remediation automation tools. Now, I'm not talking about SOAR, where the humans actually script the
work based on previous experience, but ones that generate the automation themselves through AI,
be it specific code updates, configuration scripts, or maybe
compensating controls if a patch isn't out yet. They can be configured to create a ticket with
a button to fix the problem or just fix it and create a ticket saying the problem was remediated
and then have a button to back it out if there was a problem with it.
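As a rough sketch of that "fix it, but keep a button to back it out" pattern, here's some illustrative Python. The remediation actions and the health check are placeholders, not any vendor's remediation API; the shape to notice is that every automated change carries its own rollback and a verification gate.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Remediation:
    description: str
    apply: Callable[[], None]     # pushes the fix
    rollback: Callable[[], None]  # backs the fix out

def run_with_guardrail(r: Remediation, health_check: Callable[[], bool]) -> str:
    """Apply a fix, verify the system is still healthy, and back out if not."""
    r.apply()
    if health_check():
        return f"applied: {r.description}"
    r.rollback()                  # automated back-out when the gate fails
    return f"rolled back: {r.description}"

state = {"config": "old"}
fix = Remediation(
    description="tighten TLS config on web tier",
    apply=lambda: state.update(config="new"),
    rollback=lambda: state.update(config="old"),
)
# Simulate a failing post-change check, e.g. a canary host that stops responding.
print(run_with_guardrail(fix, health_check=lambda: False))   # rolled back: ...
print(state["config"])                                       # old
```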
This is where teams start to get nervous, though. They're always very scared
of automated updates to our systems, because if it goes bad, it can go really bad, just like the
CrowdStrike incident in July of 2024. Before talking about that incident, let me tell you a
story from almost 25 years ago. When I was a pen tester, we used to invite our customers in while we did the testing, because back then our competitors working in the large accounting firms...
And that's our show. Well, part of it. There's actually a whole lot more, and if I do say so
myself, it's pretty great. So here's the deal. We need your help so we can keep producing the
insights that make you smarter and keep you a step ahead in the rapidly changing world of cybersecurity.
If you want the full show, head on over to the cyberwire.com slash pro and sign up for an account.
That's the cyberwire, all one word, dot com slash pro. For less than a dollar a day,
you can help us keep the lights and the mics on
and the insights flowing. Plus, you get a whole bunch of other great stuff like ad-free podcasts,
my favorite, exclusive content, newsletters, and personal level-up resources like practice tests.
With N2K Pro, you get to help me and our team put food on the table for our families,
and you also get to be smarter and more informed than any of your friends. I'd say that's a win-win. So head on over to
thecyberwire.com slash pro and sign up today for less than a dollar a day. Now, if that's more than
you can muster, that is totally fine. Shoot an email to pro at n2k.com and we'll figure something out. I'd love to see you over here at N2K Pro.
One last thing.
Here at N2K, we have a wonderful team of talented people doing insanely great things to make me and the show sound good.
And I think it's only appropriate you know who they are.
I'm Liz Stokes. I'm N2K CyberWire's Associate Producer.
I'm Trey Hester, Audio Editor and Sound Engineer.
I'm Elliot Peltzman, Executive Director of Sound and Vision.
I'm Jennifer Eiben, Executive Producer.
I'm Brandon Karpf, Executive Editor.
I'm Simone Petrella, the President of N2K.
I'm Peter Kilpe, the CEO and Publisher at N2K.
And I'm Rick Howard.
Thanks for your support,
everybody.
And thanks for listening.
Your business needs
AI solutions that are not only ambitious, but also practical and adaptable. AI agents connect, prepare, and automate your data workflows, helping you gain insights,
receive alerts, and act with ease through guided apps tailored to your role. Data is hard. Domo
is easy. Learn more at ai.domo.com. That's ai.domo.com.