Screaming in the Cloud - Battling Back Against Data Breaches with Maya Levine
Episode Date: August 27, 2024

Data breaches can throw countless lives into disarray. With massive leaks and compromises happening on what feels like a daily basis, what can be done to protect people and services? On this episode, Sysdig Product Manager Maya Levine joins us for a discussion on the current state of affairs in the world of cybersecurity. Why do these attacks keep happening? Are they becoming too frequent? What can we do to prevent them? Maya has all the answers as well as tips to help keep you and your organization safe.

Show Highlights:
(0:00) Intro
(0:37) Sysdig sponsor read
(0:58) Product management at Sysdig
(2:09) Are cyber attacks becoming more frequent in the cloud?
(5:58) Urgency (or lack thereof) while under attack
(10:37) Motives and methods in modern data breaches
(15:57) Sysdig sponsor read
(16:20) The cost (and necessity) of audit logging
(18:46) "If breach is inevitable, what can people do?"
(22:36) Maya's "IAM Confused" talk
(25:40) Stopping attacks before they spiral out of control
(32:32) Where you can find more from Maya and Sysdig

About Maya Levine:
Maya Levine is a Product Manager for Sysdig. Previously she worked at Check Point Software Technologies as a Security Engineer and later a Technical Marketing Engineer, focusing on cloud security. Her earnest and concise communication style connects with both technical and business audiences. She has presented at many industry conferences, including AWS re:Invent and AnsibleFest. She has also been regularly interviewed on television news channels, in written publications, and on podcasts about cybersecurity.

Links:
Maya's LinkedIn: https://www.linkedin.com/in/maya-levine/
Sysdig: https://sysdig.com/

Sponsor:
Sysdig: https://sysdig.com/
Transcript
The challenge is that all of these individual logs by themselves can often be, you know,
just typical cloud operations. But if you can add some kind of logic on top of it, where
it's looking at, okay, now this is atypical, you know,
List S3 buckets, that's a normal call, but 100 of them, not so much.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
This promoted guest episode is brought to us by our friends at Sysdig,
where Maya Levine is a product manager.
Maya, thank you for joining me.
Thanks so much for having me, Corey.
Sysdig secures cloud innovation with the power of runtime insights.
From prevention to defense, Sysdig prioritizes the risks that matter most.
Secure every second with Sysdig. Learn more at Sysdig, S-Y-S-D-I-G dot com. Our thanks as well
to Sysdig for sponsoring this ridiculous podcast. So let's start at the very top. Product management
means an awful lot of things to an awful lot of different companies. Where do you start and stop? For me, product management is all about understanding: what are your pain points?
When I'm thinking of customers, what is hard for them?
What are the problems that they really need help solving?
And obviously, Sysdig is looking at that from a cloud and container security point of view.
And what have you found?
What is the painful part about, I guess, cloud security
other than, to be unflattering,
all of it?
Yeah, I was going to say,
where do I start?
Yeah, it's like,
it seems like a target-rich environment.
There's a lot of challenges.
One thing that has come up a lot
in this past year
is just how quickly attackers
are able to execute their attacks.
And we found that the average cloud attack
takes about 10 minutes.
And when we think about that,
it becomes painfully obvious
that we need some kind of cloud detection and response
or CDR as this industry is so fond of their acronyms.
We do love our acronyms dearly.
It's one of the things we won't get away from.
Yes, too many.
Do you find the speed of attack increasing
is in part, I guess, due to the fact that it is cloud,
where there's a consistent set of standards
of how things are deployed?
If you have an S3 bucket,
getting access to that looks an awful lot the same
as getting access to another company's S3 bucket.
Whereas back in the days of data centers,
when everyone ran their own bespoke little unicorn, it took a lot more time and was
less automation-friendly, I guess is probably the best way to frame that. Or do you think there's
something else to it? At first, maybe we saw that cloud attacks were harder for attackers because
the complexity of cloud systems, I think, is harder to understand. But what we've seen in
recent times is that attackers are becoming more well-versed
in cloud-native technologies,
and they're embracing the same things that we are
when we are adopting cloud, right?
The ease of deployment and automation services
and all of those things are being utilized
and utilized well in different cloud breaches.
What I find interesting in some of the areas that you personally have been focusing on has been
around the idea of identity. One of my frequent talking points that I trot out from time to time
is that the internet long ago took a collective vote and decided that our email inboxes were the
cornerstone of our online identities. Get access to someone's inbox and for most purposes, you can
become them on the
internet. How has that manifested in some more infrastructure-centric way when it comes
to cloud? Because an awful lot of folks are talking about identity these days. How does that,
what is the, I guess the impact of that on CDR as you put it? The impact is that credentials are
usually what we're seeing as the initial access point for attackers.
And I think that we can't always control how attackers make their way into our environment.
Where CDR is helpful is that we can be notified once they're in there and respond effectively.
But going back to the identities and the credentials, I always like to
make the analogy of a key
left under the mat.
If I'm a robber and I'm scoping out homes, that's the first place I'm going to look is
these known places where people are leaving their keys.
And when it comes to secrets harvesting, attackers know where to look.
And this can be things like serverless function code or IaC software.
These files often contain credentials
or secrets or other sensitive information, and they're often overlooked.
Yeah, I've found a number of things doing suspicious work, some of which were in production
products, which is never terrific, where they would just scan the local environment of your
dev machine and find any credentials that were being used to access AWS, in this case,
and then just silently send them up to the company's server so they could take action on your behalf.
Now, doing that with permission is one thing, but when people were surprised to discover this,
those products never really had the traction they did ever again and haven't been heard from since.
There's a certain approach to, I guess, consent of customers, but there's also the sense as well
of looking at most laptop environments or
desktop environments that I've worked in. Over time, I start to see sloppiness where I've stashed
credentials in dot files all over the place because that's where they have to be. There's
no better way until a lot of systems start adopting one. And I guess I like to ideally
hope that people clean up their hygiene once things start moving into production a little
bit more. But the constant drumbeat of breaches seems to suggest otherwise. Yeah. I mean, SaaS applications are also a huge
attack surface. We see credentials being left everywhere from repos to AD to Slack. I really
do think that attackers are better than we are at secrets management. And the reason why is the
motivation is different here. For them, getting
access to a really privileged long-term credential can be their golden ticket to get lots of money
and execute their attack successfully. And for defenders, this usually isn't the highest priority
item for them. I have to give customers some credit here because whenever I'm puttering around
and building code badly, as a general rule, my options are I can either embed the credential temporarily, quote unquote, in the code, or I can go half an hour
out of my way and bring in extra libraries and do the responsible handling of credentials or whatnot.
When I don't even know if the thing is going to work or not, the temptation to take the shortcut
is terrific. Getting to a point of being able to do that and stop doing that requires significant
scaffolding on the development experience side that I can't necessarily blame people who are
under time pressure to bypass. That's the push and pull between development and security that we
often see, right? It's between the quickness that we can deploy amazing features to our customers versus kind of securing and locking things down
and making sure that it's done in a safe way.
And I don't think one or the other is correct, right?
There needs to be a balance between the two.
Think of something like employing
a secrets management system.
This just reduces your likelihood of credential leaks.
If you can keep your keys and your credentials
in a centralized location and provide an API to dynamically retrieve them, it'll reduce the
likelihood that your credentials are going to be inadvertently left in files.
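Maya's centralized-store pattern can be sketched in a few lines. This is a minimal illustration, not Sysdig's product: the in-memory class below just stands in for a managed service like AWS Secrets Manager or HashiCorp Vault, and all secret names and values are made up.

```python
# Sketch of the pattern Maya describes: keep credentials in one central
# store and fetch them through an API at runtime, so nothing lands in
# dotfiles or repos. A real deployment would call AWS Secrets Manager
# or Vault; this in-memory store is a stand-in for illustration.
import json

class CentralSecretStore:
    """Stand-in for a managed secrets service."""
    def __init__(self):
        self._secrets = {}

    def put_secret(self, secret_id: str, value: dict) -> None:
        self._secrets[secret_id] = json.dumps(value)

    def get_secret_value(self, secret_id: str) -> dict:
        # With boto3 this would be roughly:
        #   boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
        return json.loads(self._secrets[secret_id])

store = CentralSecretStore()
store.put_secret("prod/db", {"username": "app", "password": "s3cret"})

# Application code asks for the credential when it needs it,
# instead of embedding it in source or config files.
creds = store.get_secret_value("prod/db")
print(creds["username"])  # prints "app"
```

The point of the indirection is that rotating the credential becomes a single update in the store rather than a hunt through every dotfile and repo.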
And on some level, I keep waiting for some IDE to come up with a simple drop-in replacement for
this that's transparent, does all the correct things on the backend, but developers don't have
to worry about it. Instead, it seems like developers aren't worrying about it. These things make their way into
production and that's where what you do seems to come into play.
Yeah. And again, there's two aspects that I think security practitioners should be thinking about.
The first is on the prevention side. This is where the identity hygiene and the misconfiguration part
is going to be helpful.
You know, how can I lock things down and harden and just prevent attacks from happening in the first place?
But it's not complete without the runtime security side.
And this runtime security piece should be able to detect in real time. Because of how quickly we're seeing cloud attacks happening, it's not enough
to be notified about weird, malicious behavior an hour after it happened. If an attack takes
10 minutes to execute, you can see why that's too late. It's always been, if you're kind of
catching these things relatively quickly, there's always the big question then of how long has it
been going on? What have they had access to? What in the environment can I trust versus
what should be considered compromised? And I guess every time I've been looped in on the early stages
of a breach, there's been this massive confused fog over everything where no one really knows
what's going on. People are making wild hypotheses and throwing them around to see what sticks.
And people in some cases are overreacting, misinformation runs rampant,
et cetera, et cetera. It always tends to feel like regardless of how mature the processes are,
there's a bit of a sudden surprise fire drill. Everyone's running around screaming for help.
Totally. And I honestly recommend actually doing fire drills, you know, trying to put your
systems under stress and seeing how things happen in a simulated way.
And so when things really do happen,
at least you've kind of practiced it a couple of times.
But yeah, I think that it is a challenge.
It's hard to understand the scope.
And that's where part of what you need
in a cloud detection and response solution
is actually the ability to
see things after they've crossed a detection boundary. So what do I mean by that?
Sysdig's threat research team has actually observed attacks where the threat actors
moved laterally from an AWS environment to an EC2 compute instance to an on-premise server.
And typically when we think of lateral movement, we think of going from one account to another.
And the reason why this type of lateral movement is a challenge for defenders is because once
an attacker moves from AWS into EC2, CloudTrail no longer provides any information about what
the attacker is doing. To see what the attacker is doing on the EC2, you need
security at
runtime on that workload. And so
the kind of challenge
here is that you need to be able to
see these logs, see these
detections occurring from all
of these different areas and have
kind of a solution, ideally, that can
tie these things together and show, you know, this is what happened in CloudTrail, and then this is what happened on
runtime on the compute. And the challenge is being able to correlate those actions and being able to
kind of see and paint the picture of the attack. Do you find that, I guess, modern cloud technologies,
Kubernetes being one example of this, are leading to novel forms of breaches, or is it effectively still the same things we used to see in the old days of three-tier web
apps running on pets instead of cattle? We can't say it's the same. I think it depends
on what you're talking about. Attackers' goals are usually financial. That's not going to change,
unless you're talking about espionage, which is a whole different thing. But most attackers are after your money
or after money in general.
And they're just looking,
they're just finding new ways to get money.
In the cloud, it's more often going to be,
you know, crypto miners,
or we're still seeing ransomware happening in the cloud
as well as on-premise.
So I think, yeah, the motives are the same,
but the techniques are different.
Even along those lines, you folks recently had a blog post coining the term LLMjacking,
where people would grab credentials for something like OpenAI or whatnot, and then use those
credentials either for malicious purposes or to not have to pay for it themselves. Can you give
any color on what that looks like? It would not have occurred to me that that would be something
that people would use directly since, I guess I'm stuck in the old world where, yeah, I'm going to
basically capture access to a bunch of compute resources and use them to mine cryptocurrency,
which is super economical in someone else's account. But I hadn't made the leap yet to using
LLMs directly as a revenue generator. Yeah, it was an interesting attack. So we saw them gain access
with stolen credentials,
again, the initial access point,
and start targeting specific LLMs
that were hosted by cloud providers,
like Anthropic's Claude.
And so they're using scripts
to check credentials
against a bunch of different AI services
like OpenAI and Bedrock.
And they're checking the capabilities and quotas of these
stolen credentials without triggering any alarms and using kind of a reverse proxy to
basically manage and then sell access to these compromised LLM accounts.
Once these companies wind up being compromised on the LLM key, how long does the credential remain good for?
Because it always struck me as, oh, you start getting access to something and you blow the
bill to the stratosphere. People find out relatively quickly and turn it off. Ideally,
that gets noticed within minutes or hours; in practice, days or a week or two. The way you're
talking about this sounds almost like it's a more persistent, longer term breach.
You know, anyone who discovers this and shuts it down, they kind of eliminate that for themselves.
But yeah, we were seeing that the attackers would disrupt legitimate LLM usage by maximizing
quota limits and changing things in the settings, so maybe it doesn't trigger as many alarms
as usual.
There was a potential for significant financial damage here,
something like $46,000 a day.
So that's something that hopefully your billing would notify you about for sure.
But even just one day,
that can be a lot of money for an organization.
Weird confession here.
I started noticing that with an OpenAI key of mine
two months ago. I was using it for a system to generate internal placeholder text. And it went from
costing me about 10 or 15 bucks a month to a series of charges where it was
costing me basically $8 to $10 every three days. It was increased usage. And I never bothered to
track it down and figure out, okay, is this just
due to a weird logic bug or I'm seeing a lot more throughput through that system or something
external doing it? Because it's an OpenAI key. There's no auditability that I'm able to discover
to figure out, all right, what are those queries? What is being used? What time of day? And correlate
that. So I shrugged, I wound up turning the key off and didn't have to worry about it again.
But it does make me wonder now that you're saying that.
With any new technology and anything new that's kind of buzzing in the industry,
we can expect attackers to also be into the buzz. And so especially for AI, security for AI,
I say starts with just a visibility aspect. Where do you have AI packages?
Where do you have AI deployed in your environment?
Are there any shadow deployments by developers
who spun it up somewhere that you aren't aware of?
And then layer on top of that,
what risk do these workloads or wherever have, right?
Just typical risk.
Are they misconfigured in some egregious way?
The only place these credentials lived
was inside of AWS Secrets Manager.
So if that got popped,
we all have different things to worry about.
But so I'm wondering now, okay,
where else would they have been picked up?
Now, of course, thinking it through a bit more logically,
and the reason I generally dismissed it
was that if this had been a breach,
I have the sneaking suspicion
that seven bucks every three days
would not have been the sign of
compromise. It also is directionally in line, at least with the right order of magnitude for
traditional usage, which is why I was mostly okay letting it slide. But now I really am starting to
wonder. I may have to dig into the CloudTrail logs. I mean, it's cool that you have these
CloudTrail logs to look at, right? Some kind of audit and logging system to be able to go back.
And that's the problem with cloud.
Often you're dealing with resources that may not even exist anymore, right?
They're very dynamic.
And so you need to have some kind of ability to look back and try to understand what happened.
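As a hedged sketch of that look-back: with live CloudTrail you would query the `lookup_events` API or run Athena over the logs delivered to S3. Assuming the records are already in hand as parsed JSON, filtering by access key is straightforward; the key IDs and events below are invented.

```python
# Sketch: digging back through exported CloudTrail records to see what
# a specific access key actually did, even after the resource that used
# it is gone. Record fields follow CloudTrail's JSON schema.
import json

def events_for_access_key(records: list[dict], access_key: str) -> list[dict]:
    """Keep only events performed with the given access key ID."""
    return [
        r for r in records
        if r.get("userIdentity", {}).get("accessKeyId") == access_key
    ]

records = [
    {"eventName": "InvokeModel", "eventTime": "2024-05-01T02:13:00Z",
     "userIdentity": {"accessKeyId": "AKIAEXAMPLE1"}},
    {"eventName": "ListBuckets", "eventTime": "2024-05-01T09:00:00Z",
     "userIdentity": {"accessKeyId": "AKIAEXAMPLE2"}},
]
suspect = events_for_access_key(records, "AKIAEXAMPLE1")
print([e["eventName"] for e in suspect])  # → ['InvokeModel']
```

From there you can answer the "what, when, from where" questions Corey couldn't answer for his OpenAI key, because the audit trail outlives the resource.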
In the cloud, every second counts.
Sysdig stops cloud attacks in real time by instantly detecting changes in risk with runtime insights and open
source Falco. We correlate signals across workloads, identities, and services to uncover
hidden attack paths and prioritize the risks that matter most. Discover more at sysdig.com.
Audit logging is incredibly expensive at scale, but not having it can be even more expensive.
It's one of those areas where it's a complete waste of money until suddenly it's very much not.
And that instance pays for everything you've ever spent and then more on it.
But it's sometimes difficult to get leaders on board with thinking through these responsibilities.
That's part of the reason, I believe, that CISO is a C-level position.
It's not just some director of security somewhere.
It has to be fought for on some level in boardrooms.
A hundred percent.
And I think there can be an element of prioritization here.
We witnessed an attack earlier this year
where we saw an attacker got access into the environment
and then they checked to see if an S3 bucket
had versioning enabled,
which basically allows, you know,
customers to easily restore data
if something happened to it on that S3 bucket.
And it was enabled, so they disabled it
and then deleted all the data,
exfiltrated it, the whole ransomware bit.
And so having CloudTrail on data events,
like exfiltrating data,
that can be really, really expensive.
But maybe it's worth it, prioritizing it for certain data storage where you know you have really critical data being stored, customer data, HIPAA data, whatever it is.
And then ideally, defense in depth: we have an SCP that disables the removal of a versioning policy, etc.
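The SCP Corey alludes to might look something like the sketch below, assembled in Python for readability. One caveat: SCPs can't inspect the requested versioning status, so this denies any change to bucket versioning except from a hypothetical break-glass role; the role name is an assumption.

```python
# Sketch of the defense-in-depth idea: a service control policy (SCP)
# attached via AWS Organizations that denies s3:PutBucketVersioning,
# so a compromised credential can't quietly suspend versioning before
# deleting data. The BreakGlassAdmin role is illustrative.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyVersioningChanges",
        "Effect": "Deny",
        "Action": "s3:PutBucketVersioning",
        "Resource": "*",
        # Carve-out so one designated role can still manage versioning.
        "Condition": {
            "StringNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/BreakGlassAdmin"
            }
        },
    }],
}
print(json.dumps(scp, indent=2))
```

Because SCPs apply even to account administrators, the attacker in Maya's story would have hit an explicit deny at the "disable versioning" step.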
There are ways to do this, but they're all very obvious
only after the fact, when you really should have thought about them beforehand.
Because it seems to me that most people's understanding
of attack vectors has not matured
to the level of understanding that,
no, no, no, not only are some of these attackers
as good at cloud as you are,
many of them are markedly better.
Yeah, I mean, it depends on the company, right?
Some companies are completely born in cloud, cloud native. They live and breathe that. And a lot of companies are coming to us with
old mindsets, right? There's no longer the same perimeter to secure. It's different. It's
different in the cloud. And it is really difficult actually to understand the complexity of all of
the systems involved, right? We're asking people
to be so in-depth with so many different technologies and tools and systems.
And that's challenging. That's hard. I'm not one here to say, oh, you know, humans, we suck,
and we just were really bad at this. And I think humans are the weakest link,
but that's very understandable because I think what we're asking of people is difficult. We did not evolve in this environment. These are advancements of the
last couple hundred years. And a quote I heard from SwiftOnSecurity on Twitter that I really
love and has resonated has been, of all the levers available to you, human nature is not one of them.
You will not be able to change it. And that is the realistic
truth of it. I get very tired of security awareness trainings that basically castigate
employees for things of, if you click the wrong link in an email, you could destroy the company.
Well, frankly, that sounds like a failure on the part of corporate security. If one person clicking
a link brings everything down, maybe focus on the other side of that equation because, spoiler, we all click the wrong link sometimes. That, again, is human nature. The attacker only has to
get lucky once; defenders have to win every time. There's an asymmetry here.
So not to sound like a Debbie Downer here, but I have to ask then,
so if breach is inevitable, what can people do? Before I answer that question,
I will say that it is proven that education and those annoying trainings actually do have an effect. They reduce the risk of people clicking on phishing links or doing those kinds of things
significantly. There was some Forbes study I saw showing that it really does reduce that likelihood. Again, that chance is not
eliminated entirely. Very fair. I am an overly negative cynic on that. I'm not trying to dunk
on people who make mistakes. We all make mistakes. I make them in triplicate, generally speaking.
But yes, I definitely accept what you're saying. Part of why these trainings are annoying is that
they're not coming at it from the perspective of, hey, this is so understandable to make this mistake that
you clicked on a link that looked exactly like the font and button and copy and everything that
this company usually has. It is an understandable mistake to make. In fact, we're seeing things that
are even more understandable. There was a breach where somebody kind of stole MFA credentials to,
I think it was Uber's VPN, and they kept trying to log in repeatedly and then contacted that
person on WhatsApp and pretended to be IT support and convinced them to reset their credentials.
And all I'm saying is that if somebody can be tricked into clicking a phishing email link, they can almost
certainly be tricked into accepting a notification from their employer's own MFA. And overconfidence
is absolutely one of those problems. Like, oh, I'm far too smart to ever ever click a link like
that. Really? You've never gotten a push notification? Sounded important at two in the
morning when you're getting home from the bar? Really? You're at your best then? What's your
secret? I'd like to be functional then as
well. But yeah, there's the idea of flooding MFA prompts, the click-here-to-accept notifications through Duo. Historically,
attackers would just start spamming them until eventually people got annoyed and hit the OK button, and then
they were in. It's easier to believe that your IT team has misconfigured something that's spamming
you instead of, no, this is an actual attack. There is an awareness issue.
A hundred percent. And I think, and also, I mean, we talked earlier about developers
leaving their credentials in all sorts of locations. That's also an awareness issue,
right? I don't think many of them are thinking, hey, this is like me going out
of town and leaving the key under the mat for weeks and weeks, right? I don't think they're
thinking about it in those terms, and maybe just framing it that way can be helpful in reducing it.
But at the same time, I firmly believe that we should expect to be breached.
We can't control how attackers get in always, right?
There's zero day threats that we learn about all the time.
And so that's where the runtime piece comes in, right?
You want to prevent, you want to harden, you want to do everything you can to make it hard
for attackers to take advantage of your systems.
But you also need to operate under the assumption
that things can go wrong
and what do you do when that happens?
I like carrying things through one step further
to a level of turning things back around again.
For example, when AWS releases a new service,
sometimes what I like to do
is very carefully bound an IAM policy to just that service and then leak some credentials with
that policy attached onto GitHub. Then people will, of course, subvert it almost instantly,
get the thing to mine cryptocurrency. Great. Now I can kill the credentials. I can stop whatever
Bitcoin mining thing it is and then figure it's mostly configured for the purposes that I need it
and off to production it goes. I'm mostly kidding here, but there's an uncomfortable element of truth to it.
It appears that I'm not the only person who finds a lot of these things confusing.
You've given a talk several times now called IAM Confused, which, talk about titles that
really resonate with me. My God, I am the confused deputy. Please, tell me about your talk.
Yeah, so one of the things I run at Sysdig
is our identity security solution.
And I was kind of baffled
at how many organizations I saw
that recognize the importance of zero trust
and least-permissive access,
but hadn't prioritized it as a major initiative;
other things kind of took higher precedence.
So I created this talk where I walked through
eight real-life examples of breaches
that occurred in the past year or two
that utilize identities in some way to achieve their goal.
Just to highlight the fact that almost every single breach
that you hear about on the news took advantage of mismanaged identities or secrets or permissions in some way.
Yeah. Increasingly, people hear about these things. Oh, there was a hack. Generally,
it feels like they come from the two big categories. One is the provider themselves
wound up leaking data somewhere, or it's the credential stuffing approach where people wound
up using the same password everywhere with poor password hygiene. Increasingly, though, it starts to seem like
quote unquote hacking is mostly taking advantage of either social engineering or else weak
credential management, sometimes one combined with the other. Is that materially changing in
cloud? We're seeing more of that, less of that, or is that always the way of things?
It is. What we're seeing as the most common initial vector,
the initial access point for attackers,
is getting their hands on some kind of credentials,
whether that's buying them on the dark web
from a previous breach that got kind of leaked,
taking advantage of password reuse
or finding those kind of long-term credentials
that are stored somewhere.
There's many, many different ways
that attackers can gain that initial access.
But once they're in,
there's all sorts of crazy stuff
that they can and do do.
But it's really about,
I think for me,
highlighting that, again,
you can't control always how they get in.
But if you are not adhering to least permissive, what you're doing is kind of giving them a gift to go and do whatever they want to do and escalate however they want to escalate and move to wherever they want to move.
And that should scare you.
That should be scary.
It's one of those unfortunate areas where there just isn't a lot of, I guess, wiggle room.
It's unfortunate because you can't fix human nature.
There's also an inherent trust built into society as a whole.
Bruce Schneier wound up writing a book about that a while back.
The embedded trust that societies need to survive and thrive.
People don't generally take a paranoid baseline
of assuming every person talking to them
is out with ill intent.
So I do have a lot of empathy for these things.
I'm also increasingly of the mind
that a strong defense isn't enough.
Assume you're going to get breached,
you need to be able to detect
and respond to that rapidly.
Otherwise it leads to the M&M security model.
Once you're through the thick candy shell,
everything inside is gooey and chocolatey and you can basically have a field day in there
until everything comes crumbling down around you. I used to be fairly dismissive of detection and response
approaches as opposed to prevention. But now it's, yeah,
the reality is, assume people will get in. How do you mitigate, detect, and stop that damage quickly?
Yeah. And I know that you've talked to some other people about this 555 benchmark that Sysdig is
talking about. And this is really, I think, driving this point home. If cloud attacks are
happening in 10 minutes or less, how can we match that speed, right? So you do need to detect,
we're saying, within five seconds of that signal occurring, whether this is a log from an
application or the cloud, a system call, something from network traffic. It needs to be real time so that
you can correlate and triage in around five minutes and initiate some kind of response in another five minutes.
The idea of being able to detect things in five minutes is tricky, especially when things like
CloudTrail can often take 20 minutes to wind up announcing, oh, hey, this thing changed.
To be fair to that team,
Their responsiveness has improved over the years,
but there's still no guarantee.
There's no public SLA around when events will appear,
or when you can count on them being there for things like that.
So it's a bit of a best effort at the moment.
That's true.
I will say though on the,
so on the correlating and triage piece,
this is another place where
good identity things can help you. Because if you can stitch actions that attacker took
between identities and between different environments together with a common thread,
which is the identity, that helps you to paint a picture of that. I think more important on
what you just said on the response side, how do you respond in five minutes?
Many of these need to be automated.
You're not going to manually be able to respond with the speed of attacker's automation.
So there are auto response actions that are available in the cloud.
And some actions do need to be manual, obviously.
You know, I wouldn't automate an action that could take down
your entire website. I would escalate that up to the right person to see if that's kind of what's
needed. But whatever you can automate, that is what's going to help you to respond at the correct
speed. And you just alluded to something as well. It's been the bane of my existence, uneven levels
of audit coverage. Okay, we know someone got into the VPN and then connected from there to a bastion host and did some things. We're
not entirely sure what. We can analyze traffic patterns and see that there was some encrypted
traffic going to the database, some to a web server, but we don't really know what it was
or what it contains. So that's a big hole in our understanding. It's tricky to get holistic
coverage of everything you'll need from an audit perspective, but it absolutely needs to be done. Yeah. And I think that's one of the
biggest challenges for us security vendors, right? It's a pain point that we're trying to solve for
is how can we help you find that needle in the haystack? You're drowning in noise, right? It's
great that we have all these logs and all these audit things.
But the problem now is that we've become kind of inundated and overwhelmed with them.
And sometimes the indicator of compromise that's within those logs gets buried in all of them.
There are no good answers that I can see here.
I wish there were, but I really don't see a better path forward that is, I guess,
clear enough. Something will get there, but it still seems like there's so much work to be done. At the moment,
being aware of what happens, accepting that computers are going to be able to react faster than humans can,
and having a plan seem to be the baseline. Do the work. Everything else sort of extends downstream from that.
The good news is that the techniques that attackers use
aren't that different across a bunch of different breaches.
For example, most attackers,
once they make it into your environment,
will do some reconnaissance.
They'll enumerate, they'll try to find out
what can I access with my current
credentials? What other credentials can I get access to? And so they're making calls like
list S3 buckets, list this, list that, right? All of these are calls that are pretty
typical in normal cloud day-to-day operations. However, what's not typical is seeing
a hundred of them happen within a minute.
That should be an indication that there's malicious activity potentially here.
And that's kind of what we're trying to do.
We're trying to add extra logic on top of it because the challenge is that all of these
individual logs by themselves can often be, you know, just typical cloud operations.
But if you can add some kind of logic on top of it, where it's looking at, okay, now this is atypical, you know, List S3 buckets, that's a normal call, but a hundred of them, not
so much.
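The burst heuristic described here can be sketched in a few lines. This is a hypothetical illustration, not Sysdig's implementation: the event schema, the List/Describe/Get prefixes, and the 100-calls-per-60-seconds threshold are assumptions drawn from the conversation.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # sliding window, per the "within a minute" example
THRESHOLD = 100       # burst size, per the "a hundred of them" example

def detect_enumeration_bursts(events):
    """events: iterable of dicts with 'identity', 'event_name', and
    'timestamp' (seconds), assumed sorted by timestamp. Yields
    (identity, timestamp) whenever an identity crosses the threshold."""
    windows = defaultdict(deque)  # identity -> timestamps of recent calls
    for e in events:
        # Each call is individually benign; only the volume is suspicious.
        if not e["event_name"].startswith(("List", "Describe", "Get")):
            continue
        q = windows[e["identity"]]
        q.append(e["timestamp"])
        # Drop calls that have fallen out of the sliding window.
        while q and e["timestamp"] - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= THRESHOLD:
            yield (e["identity"], e["timestamp"])
            q.clear()  # reset so we alert once per burst, not per call
```

A real detection pipeline would feed this from an audit log stream such as CloudTrail and tune the threshold per identity, but the shape of the logic is the same: individually normal calls, abnormal in aggregate.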
And okay, that's one of our DevOps people.
They're presumed trusted.
Sure.
But when their activity begins to look very different from what it normally does, and possibly with a correlation of, okay, their tooling tends to work through Python, so why are they suddenly making requests from Golang?
That's a little odd.
You start to be able to identify heuristically that something fishy might be afoot.
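That user-agent heuristic can also be sketched simply: build a baseline of which client toolchains each identity normally uses, then flag calls from a toolchain never seen before for that identity. The field names below are illustrative, not a real CloudTrail or Sysdig schema.

```python
def unexpected_user_agents(baseline, events):
    """baseline: dict mapping identity -> set of user-agent families seen
    historically. events: iterable of dicts with 'identity' and
    'user_agent_family'. Returns events whose agent family is new for
    that identity, e.g. a Python-only identity suddenly using a Go SDK."""
    flagged = []
    for e in events:
        known = baseline.get(e["identity"], set())
        if e["user_agent_family"] not in known:
            flagged.append(e)
    return flagged
```

In practice the baseline would be learned from historical audit logs, and a single new user agent is a weak signal on its own; it becomes interesting when correlated with other anomalies, like the enumeration bursts discussed above.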
It seems like such a great ideal, but getting there is the hard part.
It is, but we are getting there, I think.
You know, I think security and attackers,
that's always going to be this cat and mouse game, right?
Where threats innovate in some way
and then we innovate to kind of defend against it.
And there's always going to be that push and pull.
But I think we can embrace these kinds of automation, this logic, and machine learning in a way that is actually effective for security.
I don't think that AI is the silver bullet that everyone says, oh, we can just run it and then that's it.
We don't have to do our jobs anymore. But I do think that there are use cases where AI can help us.
And cutting through the noise and finding these abnormalities is one of those.
I really want to thank you for taking the time to speak with me today.
If people want to learn more, where's the best place for them to go?
They can visit sysdig.com or reach out to me directly on LinkedIn.
And perfect.
We'll put links to both of those in the show notes.
Thank you so much for taking the time to speak with me today. I really do appreciate it. Thank you, Corey.
Maya Levine, Product Manager at Sysdig. I'm cloud economist Corey Quinn, and this is Screaming in
the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform
of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast
platform of choice, along with an angry, insulting comment
that no doubt will not be attributed to you
because that platform provider
wound up getting breached
due to poor credential management.