Screaming in the Cloud - An Open-Source Mindset in Cloud Security with Alex Lawrence
Episode Date: November 16, 2023
Alex Lawrence, Field CISO at Sysdig, joins Corey on Screaming in the Cloud to discuss how he went from studying bioluminescence and mycology to working in tech, and his stance on why open source is the future of cloud security. Alex draws an interesting parallel between the creative culture at companies like Pixar and the iterative and collaborative culture of open-source software development, and explains why iteration speed is crucial in cloud security. Corey and Alex also discuss the pros and cons of having so many specialized tools that tackle specific functions in cloud security, and the different postures companies take towards their cloud security practices.

About Alex
Alex Lawrence is a Field CISO at Sysdig. Alex has an extensive history working in the datacenter as well as with the world of DevOps. Prior to moving into a solutions role, Alex spent a majority of his time working in the world of OSS on identity, authentication, user management, and security. Alex's educational background has nothing to do with his day-to-day career; however, if you'd like to have a spirited conversation on bioluminescence or fungus, he'd be happy to oblige.

Links Referenced:
Sysdig: https://sysdig.com/
sysdig.com/opensource: https://sysdig.com/opensource
falco.org: https://falco.org
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
This promoted guest episode is brought to us by our friends over at Sysdig.
And they have brought to me Alexander Lawrence, who's a principal security architect over at Sysdig.
Alexander, thank you for joining me. Hey, thanks for having me, Corey.
So we all have fascinating origin stories, invariably. You talk to someone,
no one in tech emerged fully formed from the forehead of some god. Most of us wound up starting
off doing this as a hobby late at night, sitting in the dark, rarely emerging.
You, on the other hand, studied mycology. So watching the rest of us sit in the dark and
growing mushrooms was basically how you started, is my understanding of your origin story.
Accurate, not accurate at all, or something in between?
Yeah, decently accurate. So I was in school during the wonderful tech bubble burst, right?
High school era.
And I always told everybody, there's no way I'm going to go into technology.
There's tons of people out there looking for a job.
Why would I do that?
And let's face it, everybody expected me to.
So being an angsty teenager, I couldn't have that.
So I went into college looking into whatever I thought was interesting.
And it turned out I had a predilection to go towards
fungus and plants. Then you realize some of them glow and that wound up being too bright for you.
So, all right, we're done with this, time to move into tech. Strangely enough, my thesis,
my capstone was on the co-evolution of bioluminescence across aquatic and terrestrial
organisms. And so did a lot of focused work on specifically bioluminescing fungus and
bioluminescing fish like Photoblepharon palpebratus and things like that. When I talk to people who are trying to figure out,
okay, I don't like what's going on in my career. I want to do something different.
And their assumption is, oh, I have to start over at square one. It's no, find the job that's
halfway between what you're doing now and what you want to be doing and make lateral moves rather
than starting over five years in or whatnot.
But I have to wonder, how on earth did you go from A to B in this context?
Yeah, so I had always done tech. My first job really was in tech at the school districts that
I went to in high school. And so I went into college doing tech. I volunteered at the ELCA
and other organizations doing tech.
And so it basically funded my college career. And by the time I finished up through grad school,
I realized my life was going to be writing papers so that other people could do the research that I
was coming up with. And I thought that sounded like a pretty miserable life. And so it became
a hobby. And the thing I had done throughout my entire college career was technology.
And so that became my new career and vocation.
So I was kind of doing both and then ended up landing in tech for the job market.
And you've effectively moved through the industry to the point where you're now in security architecture over at Sysdig, which when I first saw Sysdig launch many years ago, it was, this is an interesting tool. I could see observability stories. I can see understanding what's going on at a deep level. I
like that as a learning tool, frankly. And it makes sense with the benefit of hindsight that,
oh yeah, I suppose it does make some sense that there are security implications thereof.
But one of the things that you've said that I really want to dig into, and that
I'm honestly in full support of because it'll irritate just the absolute worst kinds of people,
is one of the core beliefs that you espouse: that security, when it comes to cloud, is inherently
open source based, or at least open source derived. I don't want to misstate your position on this.
How do you view it?
Yeah, yeah. So basically, the stance I have here is that the future of security in cloud is open source. And the reason I say that is that it's a bunch of open standards that have basically
produced a lot of the technology that we're using in that stack, right? Your web servers, your
automation tooling, all of your different components are built on open standards,
and people are looking to other open tools to augment those things. And the reality is,
that the security environment that we're in is changing drastically in the cloud,
as opposed to what it was like in the on-premises world. On-prem was great. It still is great. A lot
of folks still use it and thrive on it. But as we look at the way software is built
and the way we interface with infrastructure, the cloud has changed that dramatically. Basically,
things are a lot faster than they used to be. The model we have to use in order to make sure
our security is good has dramatically changed, right? And it all comes down to speed and how
quickly things evolve. I tend to take a position that one single brain, one entity, so to speak,
can't keep up with that rapid evolution of things. Like a good example is Log4J, right?
When Log4J hit this last year, that was a pretty broad attack that affected a lot of people.
You saw open tooling out there like Falco and others. They had a policy to detect and help
triage that
within a couple of hours of it hitting the internet. Other proprietary tooling took much
longer than two hours. Part of me wonders what the root cause behind that delay is, because
it's not that the engineers working at these companies are somehow worse than folks in the
open communities, in some cases the same people. It feels like it's almost corporate process ossification of, okay, we built a thing. Now we need to make sure it goes through branding
and legal and marketing, and we need to bring in 16 other teams to make this work. Whereas in the
open source world, it feels like there's much more of a, I push the deploy button and it's up. The end.
There is no step two. Yeah. So there is certainly a certain element of that. And I think
it's just the way different paradigms work. There's a fantastic book out there called Creativity Inc.
And it's basically a book about how Pixar manages itself, right? How do they deal with creating
movies? How do they deal with doing what they do well? And really what it comes down to is
fostering a culture of creativity. And that typically
revolves around being able to fail fast, take risks, see if it sticks, see if it works. And
it's not that corporate entities don't do that. They certainly do. But again, if you think about
the way the open source world works, people are submitting, you know, PRs, pull requests, they're
putting out different solutions, different fixes to problems. And the ones that end up solving it
the best are often the ones that end up coming to the top, right? And so it's just the way you iterate is much more
akin to that kind of creativity-based mindset than I think you get out of traditional organizations
and corporations. There's also, I think, I don't know if this is necessarily the exact point,
but it feels like it's at least aligned with it, where there was for a long time,
by which I mean pretty much 40 years at this point, the debate between open disclosure and
telling people of things that you have found in vendors' products versus closed disclosure. You
only want, or whatever the term is, where you tell the vendor, give them time to fix it, and it gets
out the door. But we've seen again and again and again where researchers find something, report it, and then it sits there in some cases for years.
But then when it goes public and the company looks bad as a result, they scramble to fix it.
I wish it were not this way, but it seems that in some cases,
public shaming is the only thing that works to get companies to secure their stuff.
Yeah, and I don't know if it's public shaming per se that does it, or it's just priorities,
or it's just, you know, however it might go.
There's always been this notion of, okay, we found a breach, let's disclose appropriately,
you know, between two entities, give time to remediate.
Because there is a potential risk that if you disclose publicly, that it can be abused
and used in very malicious ways, and we certainly don't want that.
But there also is a certain level of onus once the disclosure happens privately that we got to go and take care of those things. And so it's a balancing act. I don't know what the
right solution is. I mean, if I did, I think everybody would benefit from things like that,
but we just don't know the proper answer. The workflow is complex. It is difficult. And I
think doing our due diligence to make sure
that we disclose appropriately is the right path to go down. When we get those disclosures,
we need to take them seriously is what it comes down to. What I find interesting is your premise
that the future of cloud security is open source. I could make the strong argument that today we
definitely have an open source culture around cloud security and need to.
But you're talking about that shifting along the fourth dimension.
What's the change? What do you see evolving?
Yeah, I think for me, it's about the collaboration.
I think there are segments of industries that communicate with each other very, very well.
And I think there's others who do a decent job, you know, behind closed doors. And I think there's others, again, that don't communicate at all.
So all of my background predominantly has been in higher ed, K-12, academia. And I find that a lot
of those organizations do an extremely good job of partnering together, working together to move
towards kind of a greater good, a greater goal. An example of that would be a group out in the Pacific Northwest called NWACC,
the Northwest Academic Computing Consortium.
And so it's every university in the Northwest coming together to have CIO summits,
to have security summits, to trade knowledge, to work together, basically,
to have a better overall security posture.
And they do it pretty much out in the open, collaborating with each other,
even though they are also direct competitors, right?
They all want the same students.
It's a little bit of a different way of thinking, and they've been doing it for years.
And I'm finding that to be a trend that's happening more and more outside of just academia.
And so when I say the future is open, if you think about the tooling academia typically uses,
it is very open source oriented.
It is very collaborative.
There's specifications on things like eduPerson to be able to go and define what a user looks like.
There's things like, you know, CAS and Shibboleth to do account authentication and things like that.
They all collaborate on tooling in that regard.
We're seeing more of that in the commercial space as well.
And so when I say the future of security in cloud is open source, it's models like this that I think are becoming more and more
effective, right? It's not just the larger entities talking to each other. It's everybody talking with
each other, everybody collaborating with each other and having an overall better security
posture. The reality is that the folks we're defending ourselves against, they already are
communicating. They already are using that model to work together to take down who they view as their targets, us, right?
We need to do the same to be able to keep up.
We need to be able to have those conversations openly, work together openly. The counterargument, of course, is that bad actors are then able to see what defenders are looking at and how they're approaching it, and in some cases, move faster than they can, or in other cases, effectively, wind up polluting the conversation by claiming to be good actors when they're not.
And there's so many different ways that this can manifest.
It feels
like fear is always the thing that stops people from going down this path. But there is some
instance of validity to that, I would imagine. Yeah, no, and I think that certainly is true,
right? People are afraid to let go, quote unquote, the keys to their kingdom, their security posture,
their things like that. And it makes sense, right? There's certain things that you would want to
not necessarily talk about openly,
like specifically, you know,
what Diffie-Hellman key exchange
you're using or something like that.
But there are ways
to have these conversations
about risks and posture and tooling
and, you know, ways you approach it
that help everybody else out, right?
If someone finds a particularly novel way
to do a detection with some sort of piece of tooling, they probably should be sharing that, right? Let's not keep it to ourselves.
Traditionally, just because you know the tool doesn't necessarily mean that you're going to
have a way in. Certainly, you know, it can give you a path or a vector to go after. But if we can
at least have open standards about how we implement and how we can go about some of these
different concepts, we can all gain from that, so to speak.
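(For illustration, here's the kind of detection that's worth publishing. Real Falco rules are YAML with their own condition language, so this Python sketch only mimics the shape of the idea; the event fields here are hypothetical.)

```python
# A toy, Falco-inspired detection expressed as a small predicate.
# The event dictionary and its field names are made up for illustration;
# a real rule would run inside an engine like Falco against syscall events.

def shell_in_container(event: dict) -> bool:
    """Flag an interactive shell spawned inside a container --
    a classic detection many teams have independently reinvented,
    and exactly the sort of thing worth sharing upstream."""
    return (
        event.get("container_id") not in (None, "host")
        and event.get("proc_name") in {"bash", "sh", "zsh"}
        and event.get("is_tty", False)
    )

sample = {"container_id": "3b6f9a", "proc_name": "bash", "is_tty": True}
if shell_in_container(sample):
    print("ALERT: terminal shell spawned in a container")
```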
Part of me wonders if the existing things that the large companies are collaborating on
lead to a culture that specifically pushes back against this. A classic example from
my misspent youth is that an awful lot of the anti-abuse departments at these large companies
are in constant communication. Because if you work at Microsoft or Google or Amazon,
your adversary, as you see it in the trust and safety group is not those other companies,
it's bad actors attempting to commit fraud. So when you start seeing particular bad actors
emerging from certain parts of the network, sharing that makes everything better.
Because there's an understanding there that it's not, oh, Microsoft has bad security this
week, or Google will wind up approving fraudulent accounts that start spamming everyone.
Because the takeaway from customers is not that this one company is bad.
It's, ooh, the cloud isn't safe.
We shouldn't use cloud.
And that leads to worse
outcomes for basically everyone. But they're also, one of the most carefully guarded secrets
at all these companies is how they do fraud prevention and spam detection. Because if
adversaries find that out, working around them becomes a heck of a lot easier. I don't know,
for example, how AWS determines whether a massive account overage on
a free tier account is considered to be a bad actor or someone who made a legitimate mistake.
I can guess, but the actual signal that they use is something that they would never in a
million years tell me. They probably won't even tell each other specifics of that.
Certainly. I'm not advocating that they let all of the details out per se, but I think it would be good to be able to have more of a open posture in terms of
like, you know, what tooling do they use? How do they accomplish that feat? Like, are they looking
at a particular metric? How do they basically handle that posture going forward? Like, what
could I do to replicate a similar concept? I don't need to know all the details, but it would be nice
if they embrace, you know, open tooling, like, say, a Trivy or a Falco or whatever the thing is, right, that they're using to do this process, and
then contribute back to that project to make it better for everybody. When you kind of keep that
stuff closed source, that's when you start running into that issue where, you know, they have that
quote unquote advantage that other folks aren't getting. Maybe there's something we can do better
in the community. And if we can all be better, it's better for everybody.
There's a constant customer pain in the fact that every cloud provider,
for example, has its own security perspective, the way that identity is managed, the way that security boundaries exist,
the way that telemetry from these things winds up getting represented,
where a number of companies that are looking at doing things that have to work across cloud for a variety of reasons some good some not so good
have decided that okay we're just going to basically treat all these providers as
more or less dumb pipes and dumb infrastructure great we're just going to run kubernetes on all
these things and then once it's inside of our cluster then we'll build our own security overlay
around all of these things.
They shouldn't have to do that.
There should be a unified set of approaches to these things.
At least, I wish there were.
Yeah, and I think that's where you see a lot of the open standards evolving.
A lot of the different CNCF projects out there are basically built on that concept.
Like, okay, we've got Kubernetes.
We've got a particular pipeline.
We've got a particular type of implementation of a security mesh or whatever it might be. And so there's a lot of projects built around how do
we standardize those things and make them work cross-functionally regardless of where they're
running. It's actually one of the things I quite like about Kubernetes. It makes it be a little
more abstract for the developers or the infrastructure folks. At one point in time,
you had your on-premises stuff and you built your stuff towards how your on-prem looked. Then you went to the cloud and you started building your
stuff to look like what that cloud looked like. And then another cloud showed up and you had to
go use that one, had to go refactor your application and now work in that cloud.
Kubernetes has basically become like this gigantic API ball to interface with the clouds.
And you don't have to build an application four different ways anymore. You can build it one way
and it can work on-prem.
It can work in Google, Azure, IBM, Oracle,
you know, whoever, Amazon, whatever it needs to be.
And then that also enables us
to have a standard set of tools.
So we can use things like, you know, Rego,
or we can use things like Falco,
or we can use things that allow us to build tooling
to secure those things the same way everywhere we go.
And the benefit of most of
those tools is that they're also configured via some level of codification. And so we can have
a repository that contains our posture, apply that posture to that cluster, apply it to the
other cluster in the other environment. It allows us to automate these things, go quicker, build the
posture at the very beginning along with that application.
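(As a minimal sketch of that repository-driven posture, assuming a repo of manifests, NetworkPolicies, admission constraints, Falco rules, and so on, and kubeconfig contexts whose names here are hypothetical:)

```python
import subprocess

# Hypothetical kubeconfig context names, one per environment.
CLUSTERS = ["prod-aws", "prod-gcp", "on-prem"]

def apply_posture(context: str, path: str = "posture/") -> None:
    """Apply the same codified security posture from one repo
    to one cluster, regardless of which cloud it runs in."""
    subprocess.run(
        ["kubectl", "--context", context, "apply", "--recursive", "-f", path],
        check=True,
    )

for ctx in CLUSTERS:
    apply_posture(ctx)
```

One of the problems I feel as a customer is that so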
many of these companies have a model for interacting with security issues that's frankly obnoxious.
I am exhausted by the amount of chest thumping you'll see on keynote stages, all of the theme,
we're the best at security. And whenever a vulnerability researcher reports
something of a wide variety of different levels of severity, it always feels like the first concern
from the company is not fix the issue, but rather control the messaging around it. Whenever there's
an issue, it's very clear that they will lean on people to rephrase things, not use certain words.
It's, I don't know if the words used to describe this cross-tenant vulnerability are the biggest
problem you should be focusing on right now.
Yes, I understand that you can walk and chew gum at the same time as a big company, but
it almost feels like the researchers are first screaming into a void, and then they're finally
getting attention, but from all the people they don't want to get the attention from, it feels like this is not a welcoming environment
for folks to report these things in good faith. Yeah, it's not. And I don't know what the solution
is to that particular problem. I have opinions about why that exists. I won't go into those here,
but it's cumbersome. It's difficult. I don't envy a lot of
those research organizations. They're fantastic people coming up with, you know, great findings.
They find really interesting stuff that comes out. But when you have to report and do that due
diligence, that portion is not that fun. And then doing the, you know, the fallout component, right?
Okay, now we have this thing we have to report. We have to go do something to fix it. You're right.
I mean, people do often get really spun up on the verbiage or the implications and not just go fix the
problem. And so again, if you have ways to mitigate that are more standards-based, that aren't
specific to a particular cloud, like you can use an open source tool to mitigate, that can be quite
the advantage. One of the challenges that I see across a wide swath of tooling and approaches to
it have been that
when I was trying to get some stuff
to analyze CloudTrail logs in my own environment,
I was really facing a bimodal distribution of options.
On one end of the spectrum,
it's a bunch of crappy stuff or good stuff,
hard to say, but it's all coming off of GitHub,
open source, build it yourself, et cetera, good luck.
And that's okay, awesome, but there's business value here,
and I'm thrilled to pay experts to make this problem go away.
The other end of the spectrum is commercial security tooling.
And it is almost impossible, in my experience,
to find anything that costs less than $1,000 a month
to start providing insight from a security perspective.
Now, I understand the market forces that drive this.
Truly, I do, and I'm sympathetic to them.
It is just as easy to sell $50,000 worth of software as it is $5,000 worth to an awful lot of
companies.
So yeah, go where the money is.
But it also means that at the small end of the market, as hobbyists, as startups just
getting started, there is a price barrier to engaging in the quote-unquote proper
way to do security. So the posture suffers. We'll bolt security on later when it becomes important
is the philosophy, and we've all seen how well that plays out in the fullness of time.
How do you square that circle? I think the answer has to be open source improving to the point where
it's not just random scripts, but renowned projects. Correct. Yeah, no, I'd agree with that. And so we're kind of in this interesting phase. So if
you think about like raw Linux applications, right? Linux has always had the tenet that you build
an application to do one thing, and it does that one thing really, really well. And then you
ended up with this thing like the Cacti monitoring stack. And so you ended up having like
600 tools you strung together to get this one monitoring function done. We're kind of in a
similar spot in a lot of ways right now in the open source security world,
where like if you want to do scanning, you can do like Clair or you can do Trivy, or you have a couple different choices, right?
If you want to do posture, you've got things like kube-bench that are out there.
If you want to go do runtime security stuff, you've got something like Falco.
So you've got all these tools you string together, right, to give you all of these different components. And if you want, you can
build it yourself and you can run it yourself and it can be very fun and effective. But at some point
in your life, you probably don't want to be doing the care and feeding of that child you built, right? It's
18 years later now and you want to get back to having a life. And so you end up buying a
tool, right? That's why Gartner made this whole CNAPP category, right? It's this humongous category
of products that are putting all of these different components together into one
gigantic package. And the whole goal there is just to make lives a little bit easier because running
all the tools yourself, it's fun. I love it. I did it myself for a long time. But eventually,
you want to try to work on some other stuff too.
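(To make the tool-stringing concrete, a minimal sketch of driving one of those pieces, Trivy, from Python. It assumes the trivy CLI is installed and uses its JSON output; the image name is just an example.)

```python
import json
import subprocess

def scan_image(image: str) -> list:
    """Scan a container image with Trivy and return HIGH/CRITICAL findings."""
    out = subprocess.run(
        ["trivy", "image", "--format", "json", "--severity", "HIGH,CRITICAL", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    findings = []
    for result in report.get("Results", []):
        findings.extend(result.get("Vulnerabilities") or [])
    return findings

vulns = scan_image("nginx:latest")  # example image
print(f"{len(vulns)} HIGH/CRITICAL vulnerabilities found")
```

kube-bench, Falco, and the rest get glued on in much the same way, which is exactly the strung-together feeling described above.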
At one point, I wound up running the numbers of all of the first party security offerings that
AWS offered. And for most use cases of significant scale, the cost for those security services was
more than the cost of the theoretical breach that they'd be guarding against. And I think that
there's a very dangerous incentive that arises when you start turning security observability into your
own platform as a profit center. Because it's, well, we could make a lot of money if we don't
actually fix the root issue and just sell tools to address and mitigate some of it. Not that I
think that's the intentional direction that these companies are taking these things in. I don't want
to ascribe malice to them. But if you can feel that start to be the trend
that some decisions get pushed in.
Yeah, I mean, everything comes down to data, right?
It has to be stored somewhere, processed somewhere,
analyzed somewhere that always has a cost with it.
And so that's always this notion
of the shared security model, right?
We have to have someone have ownership over that data.
And most of the time, that's the end user, right?
It's their data, it's their responsibility.
And so these offerings become things that they have that you can tie into to work within the ecosystem, work with their infrastructure to get that value out of your data,
right? You know, where is the security model going? Where do I have issues? Where do I miss
configurations? But again, someone has to pay for that processing time. And so that ends up having
a pretty extreme cost to it. And so it ends up being a hard problem
to solve. And it gets even harder if you're multi-cloud, right? You can't necessarily use
the tooling of AWS inside of Azure or inside of Google. And other products are trying to do that,
right? They're trying to be able to let you integrate their security center with other
clouds as well. And it's kind of created this really interesting dichotomy where you almost
have frenemies, right? Where you've got a big Azure customer who's also a big AWS customer. Well, they want to go use Defender
on all of their infrastructure, and Microsoft is trying to do their best to allow you to do that.
Conversely, not all clouds operate in that same capacity. And you're correct. They all come at
extremely different costs. They have different price models. They have different ways of going
about it. And it becomes really difficult to figure out what is the best path forward. Generally,
my stance is anything is better than nothing, right? So if your only choice is using Defender
to do all your stuff and it costs you an arm and a leg, unfortunate, but great. At least you got
something. If the path is, you know, go use this random open source thing, great, go do that.
Early on when I was at Sysdig about five years ago, my big message was, you know, I don't care
what you do, at least scan your containers. If you're doing nothing else in life, use Clare, scan the darn
things, don't do nothing. That's not really a problem these days, thankfully, but now we're
more to a world where it's like, well, okay, you've got your containers, you've got your
applications running in production, you've scanned them, that's great, but you're doing nothing
at runtime. You're doing nothing in your posture world, right? Do something about it.
So maybe that is buy the enterprise tool
from the cloud you're working in.
Buy it from some other vendor.
Use the open source tool.
Do something.
Thankfully, we live in a world
where there are plenty of open tools out there
we can adopt and leverage.
You used the example of CloudTrail earlier.
I don't know if you saw it,
but there was a really, really cool talk
at SharkFest last year from Gerald Combs,
where they leveraged Wireshark to be able to read CloudTrail logs, which I thought was awesome.
That feels more than a little bit ridiculous, just because it's, I mean, I guess you could extract the JSON object across the wire,
then reassemble it, but yeah, I did think on that one.
Yeah, so it's actually really cool.
They took the plugins from Falco that exist,
and they rewired Wireshark to leverage those plugins
to read the JSON data from the CloudTrail,
and then wired it into the Wireshark interface
to be able to do a visual inspection of CloudTrail logs.
So just like you could do like a follow this IP
with a PCAP, you could do the same concept
inside of your CloudTrail log.
So if you look up LogRay,
you'll find it on the internet out there.
You'll see demos of Gerald showing it off.
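(The "follow this IP" idea also translates to a few lines of Python if you'd rather script it than use a UI. This sketch assumes the standard CloudTrail delivery format, a gzipped JSON file with a top-level Records array; the file path is made up.)

```python
import gzip
import json
from collections import defaultdict

def events_by_ip(path: str) -> dict:
    """Group CloudTrail events by caller IP -- a rough analogue of
    Wireshark's 'follow this conversation', but for an API log."""
    with gzip.open(path, "rt") as f:
        records = json.load(f)["Records"]
    by_ip = defaultdict(list)
    for record in records:
        by_ip[record.get("sourceIPAddress", "unknown")].append(record["eventName"])
    return by_ip

for ip, calls in events_by_ip("trail/2023/11/16/log.json.gz").items():
    print(ip, len(calls), sorted(set(calls))[:5])
```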
It was a pretty darn cool way to use a visualization,
let's be honest, most security professionals
already know how to use,
in a more modern infrastructure. One last topic that I want to go into with you before we call this an episode is something that's been bugging me more and more over the years. And it annoyed
me a lot when I had to deal with this stuff as a SOC 2 control owner, and it's gotten exponentially worse every time I've had to deal with it ever since. And that is the seeming view of compliance and security as being one and the same, to the
point where in one of my accounts that I secured rather well, I thought, I installed Security Hub
and finally jumped through all those hoops and paid the taxes and the rest, and then waited 24
hours to gather some data, then 24 hours to gather more. Awesome. Applied the AWS-approved foundational
security benchmark to it, and it started shrieking its bloody head off about all of the things that
were insecure and not configured properly. One of them, okay, great. It complained that the block
all S3 public access setting was not turned on for the account. So I turned it on.
Great.
Now it's still complaining that I have not gone through
and also enabled the block public access setting
on each and every S3 bucket within it.
That is not improving your security posture
in any meaningful way.
That is box checking so that someone in a compliance role
can check that off and move on to the next thing
on the clipboard.
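(For what it's worth, both of those knobs are single API calls. A minimal boto3 sketch, with a placeholder account ID and bucket name, of the account-wide setting described above and the per-bucket one Security Hub keeps flagging:)

```python
import boto3

ACCOUNT_ID = "123456789012"   # placeholder
BUCKET = "my-example-bucket"  # placeholder

BLOCK_ALL = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# The account-wide switch.
boto3.client("s3control").put_public_access_block(
    AccountId=ACCOUNT_ID,
    PublicAccessBlockConfiguration=BLOCK_ALL,
)

# The per-bucket equivalent that the benchmark also wants set.
boto3.client("s3").put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration=BLOCK_ALL,
)
```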
Now, originally they started off being well-intentioned, but the result is I'm besieged
by these things that don't actually matter. And that means I'm not going to have time to focus
on the things that actually do. Please tell me I'm wrong on some of this. I really need to hear that.
I can't. Unfortunately, I agree with you that a lot of that seems erroneous, but let's be honest,
auditors have a job for a reason. Oh, I'm not besmirching the role of the auditor, far from it.
The problem I run into is that it's the humongous report that dumps out, here's the 700 things to go
fix in your environment, as opposed to, here's the five things you can do right now that will
meaningfully improve your security posture. Yeah. And so I think that's
a place we see a lot of vendors moving. And I think that is the right path forward because we
are in a world where we generate reports that are miles and miles long. We throw them over a wall
to somebody and that person says, are you crazy? Like, you want me to go do what with my time?
Like, no, I can't. No, this is way too much. And so if we can narrow these things down to what matters the most today,
and then what can we get rid of tomorrow,
that makes life better for everybody.
There are certainly ways to accomplish that
across a lot of different dimensions,
be that vulnerability management
or configuration management stuff, runtime stuff.
And that is certainly the way we should approach it.
Unfortunately, not all frameworks
allow us to look
at it that way. I mean, even AWS's thing here is yelling at me for a number of services not having
encryption at rest turned on, like CloudTrail logs or SNS topics. It's, okay, let's be very clear what
that is defending against. Someone stealing drives out of a data center and taking them off to view
the data. Is that something that I need to worry about in a public cloud provider context?
Not unless I'm the CIA or something pretty close to that.
I mean, if you can get my data out of an AWS data center and survive,
congratulations, I kind of feel like you've earned it at this point.
But that obscures things I need to be doing that I'm not.
Back in the day, I had
a customer who used to have storage arrays, and their storage array logins were the default
login that the arrays came with. They never changed it. You just logged in with admin
and no password. But I was like, you know, you should probably fix that. And he sent a message
back saying, yeah, you know, maybe I should. But my feeling is that if it got that far into my infrastructure where they can get to that interface, I'm already screwed.
So it doesn't really matter to me if I set that admin password or not.
There is a defense in depth argument to be made.
I'm not disputing that.
But the Cisco world is melting down right now because of a bunch of very severe vulnerabilities that have been disclosed.
But everything to exploit these
things always requires, well, you need access to the management interface. Back when I was a network
administrator at Chapman University in 2006, even then I knew, well, we certainly don't want to put
the management interfaces on the same VLAN that's passing traffic. So is it good that there's an
unpatched vulnerability? No. But Shodan, the security vulnerability search engine,
shows over 80,000 instances that are affected on the public internet.
It would never have occurred to me to put the management interface
of important network gear on the public internet.
That just is, I don't understand that.
Yeah.
So on some level, I think the lesson here is that there's always someone
who has something
else to focus on at a given moment.
And it's a spectrum.
No one is fully secure.
But ideally, you don't want to be the lowest of low-hanging fruit.
Right, right.
I mean, if you were fully secure, you'd just turn it off.
But unfortunately, we can't do that.
We have to have it be accessible because that's our jobs.
And so if we're having it be accessible, we've got to do the best we can.
And I think that is a good point, right? Not being the worst should be your goal at the very, very least.
Doing bare minimums, looking at those checks, deciding if they're relevant for you or not.
Just because it says the configuration's required, you know, is it required in your use case? Is it
required for your requirements? Like, you know, are you a FedRAMP customer? Okay, yeah, it's
probably a requirement because, you know, it's FedRAMP. They're going to tell you you got to do
it. But is it your dev environment?
Is your demo stuff?
You know, where does it exist, right?
There's certain areas
where it makes sense to deal with it
and certain areas where it makes sense
to take care of it.
I really want to thank you
for taking the time
to talk me through your thoughts on all this.
If people want to learn more,
where's the best place for them to find you?
Yeah, so they could either go
to sysdig.com slash open source.
Bunch of open source resources there. They could go to Falco.org, read about the stuff on that site as well. Lots of different ways to kind of go and get yourself educated on stuff in the space.
And we will, of course, put links to that into the show notes. Thank you so much for being so generous with your time. I appreciate it.
Yeah, thanks for having me. I appreciate it. Alexander Lawrence, Principal Security Architect at Sysdig. I'm cloud economist Corey Quinn,
and this episode has been brought to us by our friends also at Sysdig. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've
hated this podcast, please leave a five-star review in your podcast platform of choice,
along with an
insulting comment that I will then read later when I pick it off the wire using Wireshark.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need the
Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.