Screaming in the Cloud - The Current State of Cloud Security with Crystal Morin
Episode Date: March 18, 2025

Sysdig's 2025 Cloud-Native and Security Usage Report is hot off the presses, and Corey has questions. On this episode, he's joined by Crystal Morin, a Cybersecurity Strategist at Sysdig, to break down the trends of the past year. They discuss Sysdig's approach to detecting and responding to security threats and the success the company has seen with the rollout of Sysdig Sage (an AI product that Corey thinks is actually useful). They also chat about what's driving a spike in machine identities, practical hygiene in cloud environments, and the crucial importance of automated responses to maintain robust security in the face of increasingly sophisticated cyber threats.

Show Highlights
(0:00) Intro
(0:39) Sysdig sponsor read
(2:22) Explaining Sysdig's 5/5/5 Benchmark
(4:06) What does Sysdig's work entail?
(10:03) Cloud security trends that have changed over the last year
(14:30) Sysdig sponsor read
(15:16) How Sysdig is using AI in its security products
(19:09) How many users are adopting AI tools like Sysdig Sage
(25:51) The reality behind the recent spike of machine identities in security
(29:24) Handling the scaling of machine identities
(35:37) Where you can find Sysdig's 2025 Cloud-Native and Security Usage Report

About Crystal Morin
Crystal Morin is a Cybersecurity Strategist with more than 10 years of experience in threat analysis and research. Crystal started her career as both a Cryptologic Language Analyst and Intelligence Analyst in the United States Air Force and as a contractor for Booz Allen Hamilton, where she helped develop and evolve their cyber threat intelligence community and threat-hunting capabilities. In 2022, Crystal joined Sysdig as a Threat Research Engineer on the Sysdig Threat Research Team, where she worked to discover and analyze cyber threat actors taking advantage of the cloud. Today, Crystal bridges the gap between business and security through cloud-focused content for leaders and practitioners alike.
Crystal's thought leadership has been foundational for pieces such as the "2024 Cloud-Native Security and Usage Report" and "Cloud vs. On-Premises: Unraveling the Mystery of the Dwell Time Disparity," among others.

Links
Sysdig's 2025 Cloud-Native and Security Usage Report: https://sysdig.com/2025-cloud-native-security-and-usage-report/
Sysdig on LinkedIn: https://www.linkedin.com/company/sysdig/
Crystal's LinkedIn: https://www.linkedin.com/in/crystal-morin/

Sponsor
Sysdig: https://sysdig.com/
Transcript
The way that I like to compare Sysdig Sage to like my previous work is the genius guy that used to sit behind me when I didn't know something and I could turn around and swivel my chair and be like, hey, Dave, what does this mean?
And point to my screen and then he would wheel his chair over and be like, da da da da da, and explain it to me. That's what Sysdig Sage is to me, like it's my Dave. But I don't have to bother anybody to get the answer.
["Spring in the Cloud"]
Welcome to Screaming in the Cloud.
I'm Corey Quinn, and in a remarkably refreshing
change of pace, I'm gonna start off by doing the ad read
on this one live instead of splicing it in later, because this episode is brought to
us by our friends at Sysdig.
Sysdig helps companies secure and advance innovation in the cloud, because building
in the cloud enables businesses to accelerate time to market, which is of course important
to those folks.
But the cloud has introduced a world where it only takes 10 minutes to initiate an attack.
It would take less time if the cloud providers would get off of it, but so far, kind of fortunately,
they haven't.
But what that means is that security teams have to protect the business without slowing
it down.
So how do they identify and prioritize the real risks?
Well, that's where Sysdig comes in.
Sysdig is a complete CNAPP, that's C-N-A-P-P, which probably means something to security
people, and it uses AI, because of course it does, to help security teams prioritize
and stop the threats that matter most now.
To learn more, you can visit sysdig.com or you can listen to me talk to Crystal Morin.
Crystal's a cybersecurity strategist at Sysdig.
Crystal, thank you for joining me.
Thank you for having me, Corey.
I'm excited to be here.
I assume you don't disagree meaningfully with any of the ad read that I just did, given
that it's your folks that sent it to me.
But if you want to argue, I am thrilled to wind up, oh boy, it's drama time.
No, not at all.
That's exactly what we do. We like to help protect organizations from the attackers that want to get all of the
good stuff in your organization and use it against you.
Your data, your customer information, your money, all of that stuff.
We want to help protect you from them.
I've spoken with some of your colleagues in previous years
whenever your usage report comes out, which like clockwork, it
just has again.
And I was taken by the 5/5/5 benchmark that you folks
talked about last time. The important part that really
resonated with me was that from breach to things starting to be
exploited is roughly five minutes; that was the top part of it.
And I forget what the trailing fives are, which probably indicates that I'm not the only one.
So could you please refresh me on what the 5/5/5 benchmark is?
Yes. So 5/5/5 stands for five seconds to detect, five minutes to investigate, or correlate
and triage what's going on, and five minutes to
respond. So that equals ten minutes, or ten minutes plus that extra five seconds,
so ten minutes in total. Ten minutes-ish, because I haven't synced with an NTP server lately.
Exactly. Wonderful. The takeaway that I took with it is that you have to be able
to respond rapidly because most people are not going to answer a page in less than 10 minutes.
Therefore, computers basically have to do a lot of the auto-remediation for you.
The obvious challenge with that is computers making random decisions and turning off production
is usually frowned upon.
You, at least, are on the security side, where it's more defensible.
I help companies fix AWS bills, so when we take something down,
suddenly we're not allowed to save money ever again. So it's
a little bit lower stakes than in your case. There's a very good argument to,
no, no, instead of having a breach, we did turn everything off,
but I'm told there is some middle ground between those two approaches.
Yes, there absolutely is. Do you want to dig into it?
I do. I'd love to learn a little bit more about what it is Sysdig does,
other than, to be very frank, create a report that I find compelling and gives me interesting things to comment on.
But under the hood, what is it you folks actually do?
OK, so this is what we do.
I can dig into this 5/5/5 and what we actually found in the report
and what is going in to how we stop attackers
in less than 10 minutes. Because that kind of will give you an idea of what's going on under the hood
in those few minutes. Shall we? Indeed, please lead on. Okay, so it starts in less than five seconds.
So what we found to begin with is in less than five seconds
when something happens in a customer's environment,
there's triggers going off,
an attacker is starting to go into their environment, right?
That triggers an alert.
Something's happening in their environment.
But there are several hops that have to happen
for that alert to go from their environment to their inbox or
Slack channel or whatever it may be, wherever their SOC analyst gets that alert, right? That
they're like, oh no, something's happening, and they need to actually, you know, look
at their computer screen and turn around and go do something about it. Right? So there's a lot of things that need to happen in the little computer network
for them to actually respond.
And let's be clear, this also assumes a 24-7 SOC
or, alternately, attackers who are polite enough only to operate during business hours.
Yes. So within milliseconds,
there are several different hops that that alert takes to go
from event happening to receiving that Slack alert on your computer screen.
And that takes less than five seconds.
So pretty much near real time, you're being alerted that something is happening right
then and there.
From that point, you have the alert on your computer screen.
Now you need to figure out what exactly that means.
Okay, I see that something's happening.
What exactly is going on?
Let's try to put some information together.
So that next five, we say,
you need to figure this out in five minutes.
In the data in our report, we actually found that on average,
our customers are able to correlate and investigate
an incident in three and a half minutes.
So well within that five minutes,
I mean, what we tell them they should be doing in five minutes.
So with some of our functions and automated responses
and being able to correlate information from the dashboards that we provide them. Information about identities that are involved in the
incident, the containers that might be involved, things that are happening in their cloud environment.
Putting all of that information together in one place, being able to visualize the attack chain, right? Where did the attacker enter?
Where are they moving to?
Being able to see perhaps where your crown jewels are located. So
where might the attacker want to go?
Being able to see exactly what that looks like and not having to guess or piece it together in your head.
Visualizing it, you're able to make a deduction and move on to the response part of that much, much faster.
Folks talk about the fog of war a lot, but I'm more accustomed, since I'm sort of a peaceful type,
to the fog of production, where you sort of have these things in your head, but trying to
to say, okay, I'm seeing this event
on this system, where does this fall on the diagram?
There's mental overhead involved in unpacking that.
And this all, of course, presupposes
that you're not using the good old days
of AWS user accounts with IAM,
where it can take a decent part of that three
and a half minutes to dig out the authenticator,
just to log into the console in the first place.
Yes, and we actually, this isn't just me talking.
We've spoken to some of our customers as well.
And those quotes are in the report.
They've said, too, that they have been able to do this.
And this is usage data that I looked at to write this report.
So I'm putting together averages.
This is a correlation of what they're doing
with our platform. This isn't survey data, folks just kind of guessing and putting together
their best efforts, where they used to take weeks or days to try to investigate a report
or investigate an incident prior. Some of them have said it's between 10 and 15 minutes
to be able to look and see what actually is going on
when they receive an alert from us.
So that was really exciting when I was able to find that
when writing this report.
And then if we're done with that,
I can move on to response.
But back in the early days,
I just want to point out that when CloudTrail first launched,
it took upwards of 20 minutes for a CloudTrail event to show up showing that something had
happened in an AWS account.
That team's done amazing work and gotten that down to within a second or two most of the
time, which is awesome.
But back in those days, it didn't exist.
So it's pretty clear that what Sysdig does operates by looking at the workload directly
and not waiting for some of these slow provider mechanisms to take their time and work through
whatever systems they have. I've used other dashboards in the past too, and I would do some
proactive threat hunting and you write a script and you're trying to go and look to see if
something happened and it could take hours to be able to get results back.
So you have to try to change and manipulate that
to get results, and still it could take an entire day
to go and see if what you want to look for
is actually happening.
It's impossible to get anything done that way.
So, yeah.
So, as you mentioned, getting back to the report, the 2025
Cloud-Native Security and Usage Report, I always look at these things from two perspectives. First,
it's a good read for folks who have not seen one of these things before, which I tend to assume is
the majority of folks, just because there's always more people that haven't read a thing than have
read a thing. But I also like to look at it from
the perspective of what has changed year over year in these things to identify broader trends.
Which direction do you want to attack this from? Um, so, like I said, I can move on to that last part of
the five because there is actually some trend information that has evolved over the last year
for that part. Okay, so with automated response, that's what I
looked at in particular. I looked at container drift, right? So when you have a container that
you start with and it changes from, say, you know, the golden image, what it's supposed to be,
when it's in production, it kind of changes from what it was.
That can happen maliciously because someone enters it, or developers could be changing the
container while it's in production, and that's okay too. So you can turn on an alert for container
drift and you can be alerted that something's going on while the container is in production.
We have options for automated response for container drift and a couple other things
as well for malware and crypto miners and things like that too.
But for container drift, you can pause, stop or kill the container.
If you kill a container and you don't have a mature system, right, you have developers,
for example, who like to go and make changes, and you have automated response to kill a
container set up, that could cause some issues, right?
You could have operations stop. So you definitely need to have mature security
practices in place to be able to have these kind of automated responses.
People do like to forget, given modern system stability, that containers are designed to
be ephemeral.
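To make that tradeoff concrete, here is a minimal sketch of the kind of guard-railed drift-response policy being described. The event fields, action names, and maturity flag are all hypothetical illustrations for this discussion, not Sysdig's actual API:

```python
# Hypothetical sketch of a guard-railed container-drift response policy.
# Event shape, action names, and the maturity flag are illustrative only.

def choose_drift_response(event: dict, mature_practices: bool) -> str:
    """Pick a response ("alert", "pause", "stop", or "kill") for a drift event.

    Teams without mature deploy practices (e.g. developers patching containers
    in place) stay at alert-only, so automation never halts operations.
    """
    if not event.get("drift_detected"):
        return "alert"  # nothing actionable; surface for triage anyway
    if not mature_practices:
        return "alert"  # let humans decide; killing pods could stop operations
    if event.get("malware") or event.get("crypto_miner"):
        return "kill"   # clearly malicious drift: remove the container
    if event.get("in_production"):
        return "pause"  # suspicious but ambiguous: freeze it for investigation
    return "stop"       # non-production drift: stop it and rebuild from image

if __name__ == "__main__":
    alert = {"drift_detected": True, "crypto_miner": True}
    print(choose_drift_response(alert, mature_practices=True))   # kill
    print(choose_drift_response(alert, mature_practices=False))  # alert
```

The point of the `mature_practices` gate is exactly the caveat above: the same event yields an automated kill only when the organization has agreed, across teams, that containers are rebuilt rather than patched in place.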
Yes. So last year, there was a very small number of organizations that we saw using container
drift automated responses.
I believe it was about 4%.
This year that has actually tripled.
It's still small.
It's about 11% of organizations.
A majority of organizations are still just getting alerts, so they do get alerted to container drift,
but there's now 11% of organizations
who are using automated response actions
for that kind of thing.
And we did also see an increase in the number
of organizations who are writing automated response actions
for malware and crypto miners and things like that as well. So that's really exciting to see because it's not just us.
There's tons of vendors and evangelists and thought leaders
and things like that who are telling you, you have to automate response.
Right. There's playbooks and SOAR.
Everything has to be automated for incident response.
The attacks are largely automated.
It's the only way that works.
People, this is not the 80s anymore
where people are sitting there at their keyboards
thinking really hard around what they're going to do next.
They have automated tooling that in many cases,
let's be honest here, is more robust and well-built
than the production environments they're attacking with it.
And it can be scary too.
I understand that, right?
Because you don't want to break anything.
But as long as you're communicating between teams
and you all understand you're on the same page,
you can absolutely do these things.
And in the report too, we talk about the variety
of different ways that you can go about
automatically responding to some of these incidents
and kind of building and manipulating
your own responses and tailoring them to what you want.
It's not just a set and forget kind of thing.
You can make it what you want it to be.
So that kind of helps too.
So hopefully we'll see more of that next year.
There's also another finding I want to get into around the prevalence of AI
because, you know, it's 2025, we're legally obligated to talk about AI things.
But first, that's right, it's time for me to talk to you
about the company you work for again.
Sysdig is sponsoring this episode.
What is a Sysdig?
Well, they help companies secure
and advance innovation in the cloud,
because building in the cloud enables businesses
to accelerate that all-important time to market.
And yet the cloud has introduced a world where it only takes, as we've just said,
10 minutes to initiate an attack.
Therefore, security teams have got to protect the business
without slowing it down and becoming the department of no of yesteryear.
So how do they identify and prioritize the real risks?
You guessed it: Sysdig, which is a complete CNAPP that uses AI,
which we'll get to in a second,
to help security teams prioritize
and stop the threats that matter most now.
Learn more at sysdig.com.
Now, let's talk about AI in particular, Crystal,
because I want to figure out what is real and what is hype.
All right, well, we can get the smaller part of it
out of the way and then we get to the security part
because that's the really exciting part of this.
The hype, implementing AI, that's the hype, right?
And everybody's using AI, that's what we hear in the news.
How many organizations are actually using AI for security?
Sysdig has an AI tool for security.
It's a GenAI security assistant
that we have integrated into our platform
that you can use to help correlate investigations
and things like that. It's really cool.
It's called Sysdig Sage.
Fascinating.
We have, as of the end of last year,
after four months of general availability,
45% of our customers have begun using Sysdig Sage.
75% of them are DevOps folks.
So like I said, they're using it just to speed up.
Most of them like grab their cup of coffee
and ask Sysdig Sage, hey, like what happened last night
that I need to be on top of this morning
while I start my day?
That's what they use it for.
Like what's going on in this container?
What's going on in this environment with this identity.
That sounds legitimately useful,
something that you might, dare I say,
not have to shove onto people.
Everyone's talking about how AI is the next thing.
Well, okay, but if it's half as amazing
as people like to say, I mean, not to abuse a metaphor here,
but I have a four-year-old daughter
who's extraordinarily sugar-motivated.
I don't have to shove ice cream down her throat.
It's a pull rather than a push when it comes to that sort of stuff. With
genuinely useful AI, yes, it exists, that is the model. People seek it out.
They use it like this proactively. I don't know if other folks who are
listening to this have met DevOps folks before, but having been one myself for
many years, you can't get me to unwillingly do anything before a cup of
coffee in the morning.
So the timing sequence of that speaks volumes.
Yeah. So, maybe you'll appreciate this: the way that I like to compare Sysdig Sage to, like, my previous work is the genius guy
that used to sit behind me when I didn't know something and I could turn around and swivel my
chair and be like, hey, Dave,
what does this mean? And point to my screen and then he would wheel his chair over and be like,
da da da da da da da da da and explain it all to me. I'd be like, cool, thanks.
That's what Sysdig Sage is to me. Like it's my Dave, but I don't have to bother anybody to get
the answer. To continue the Dave metaphor, I've worked with several Daves who, when they didn't know
the answer, were terrified to lose face and begin making things up, which the hallucination
problem is challenging.
But in this case, it almost feels like that is not the bad pattern when it comes to security.
"Well, I made up an attack that didn't exist": yes, it's annoying you had an impromptu fire drill. But someone actually being in the midst of an attack while the tool reports "nope, all quiet on the Western Front" is not
a terrific message to be sending out there. How do you split that difference?
Yeah, it's just, it's my senior security analyst is what I consider it to be. It's not trying to just put out fires.
It's just trying to help me figure out
what something is, period.
It's the assistant rather than the replacement.
It's the categorization.
Here's the things you probably want to look at first
versus, oh, you want to know what happened last night.
Here's eight gigabytes of logs
that it dumps onto your screen.
Terrific.
Yeah, let me get right on that.
So that's that.
And other exciting news.
Yes, the usage story of AI among the customer base.
Right.
So we looked at not browser-based AI or anything like that, but the number of AI and machine learning packages in workloads
that are running in a customer environment and found 500% growth of the number of packages
in running workloads.
And I went through all of them.
There were a lot.
There's some tables in the report
that broke out all of the names, types of GenAI
and machine learning, just all the different kinds of names
that we saw.
There's some absolutely massive growth
for like OpenAI, TensorFlow, Transformers.
So you can kind of see those numbers in the report
of what that 500% growth actually looks like.
The GenAI packages specifically,
because again, a lot of it is actually
machine learning packages, but GenAI alone doubled.
The number of GenAI packages running in workloads doubled,
which again kind of aligns with, you know,
almost 50% using Sysdig Sage.
GenAI packages doubled over the last year.
So that kind of makes sense as far as growth goes.
Amidst all of this growth of AI
and the introduction of AI in our customer environments,
I did find something very soothing
that made us very happy.
Public exposure of these workloads with AI, right?
So exposure to the internet,
attackers constantly are scanning the internet for, you know, IPs,
websites, workloads, whatever that are exposed so they can look for misconfigurations, vulnerabilities,
whatever that will let them into your environment so they can wreak havoc, right?
That's the easiest way that they can get in unless they have credentials or something
like that.
So they're always scanning the internet.
I get those all the time.
Even you see it in the security researcher side,
I recently got something in the email
about a dangling sub domain that I had
where it was assigned to an elastic IP
that had since been released.
And someone wrote this very long write-up
talking about the danger and asking for tips via PayPal.
And it sounded incredibly well researched and professional, except for the small minor problem
that it was targeting one of my test domains,
which I'm not expecting anyone to necessarily know
the purpose of a given domain,
but maybe finding a security exploit
in the shitposting.monster domain
might not be the high value target
that you think you just found.
So there's a lot of using AI for these things to create noise.
Seeing people create value with it is much more interesting to me.
Yes. So, public exposure.
In April of 2024, we saw public exposure of workloads with AI at 34%.
So 34% of workloads containing AI, which potentially have sensitive information, right?
Because people are feeding sensitive information, data, whatever, into AI, GenAI potentially.
34% of those packages were publicly exposed.
By the end of the year, that was down to less than 13%. So there was a 38% reduction in the
number of workloads with AI publicly exposed to the internet. So 500% growth in workloads with AI
and 38% reduction in public exposure.
So there's a massive use of AI,
but there is an obvious prioritization of the security.
That correlation is huge for us.
That's really, really exciting to see
that even though everybody is looking forward to
and trying to use GenAI, AI, whatever they may be trying to do, that
they are trying to keep security at the top of mind. There's also another graph in the
report as well. I looked at public exposure and then broke it down further into do those
packages have critical and high vulnerabilities?
Are those in use?
Are they in production?
Because those are the kinds of things that our attackers are going to look for.
So can I get to that publicly exposed workload?
Are there vulnerabilities that I can take advantage of?
And is it running in production?
All of those kinds of things that are layered. There's almost nothing there.
It was like less than one percent.
So the security of AI is definitely a high priority,
which made me very, very happy to see.
I wonder if that's a natural outgrowth of what I consider to be many companies
over indexing on the value of their data as a competitive advantage.
I understand the reasons behind it, truly I do,
but at the same time, I'm not convinced
that even if you were to get the complete code base
of a large competitor,
is that a meaningful gain for you necessarily,
especially at large scale
where everyone implements things differently.
I think it's a little bit less clear, but companies are remarkably concerned about it.
Is this lockdown that you're seeing of AI workloads in response to that concern?
I don't know. It could be. I hope so.
Honestly, I'll take the win wherever I can get it.
Exactly. Like I said, the positivity makes me happy.
The prioritization of security is good.
The fact that people are thinking of it is a good thing.
This was a part of the report that made me happy.
Now I want to talk about a part of the report that made me sad because it struck a little
close to home.
Specifically the wild proliferation of machine identities,
which to me, in my mind,
please correct me if I'm wrong on this,
is things like instance roles or execution roles
within the AWS context,
things designed for automated systems
as opposed to human beings logging in for things.
Applications, API calls, really, I mean,
it could be anything that's not a human that
is connected to your cloud environment.
I'm the only user in one of my AWS accounts.
It has 400 roles in it, most of which were created by AWS automated managed service things.
So are those properly scoped?
I don't know.
If there's something important in there, it is buried under a pile of other things.
Well thank you for validating my findings in the report.
I really appreciate that statistic.
No, thank you for confirming my own biases and suspicions with actual data.
It's great.
Yay, the confirmation bias thing.
We all cherry-pick things from reports to talk about that resonate.
This is one where there's a wild rise in the number of machine identities.
I'm a big fan of casting shade on these things.
I want to pull up the actual numbers on this so I can do it more effectively. But you compared even
among different cloud providers where the number one by I think a couple orders of magnitude
was Azure. Because apparently, as a user maneuvers through various Microsoft properties,
every action they do counts as a different user, presumably billed by the seat. But it was wild. 67 times more users,
according to the report. First we looked at human users and we looked at the three
major providers, Azure, Google, and AWS. And this is again just human users alone: GCP and AWS had 100 to 200 human users
on average for those organizations,
and Azure had over 7,000.
And I looked at those numbers and I was like,
well, that doesn't make sense.
That seems like a little bit of an outlier.
So I went to some of our engineers and I was like,
can you help me make sense of this?
Why does Azure have 67 times more users than the other two?
Shouldn't those probably be about the same?
So we dug into it and then we realized that
for organizations that use Azure,
every time a human user logs into a Microsoft service,
like you log into OneNote and PowerPoint and Excel and Word,
for every service you log into,
it counts you as another user.
So you have 100 employees,
but you log into seven different services.
Now you are eight users and that's how you get to 7,000.
So that makes managing human users in Azure very complicated,
and why they're counted that way, I'm not quite sure.
I don't know if it makes it more complex
to manage them.
Our customers who manage Azure accounts know this
and understand it.
I did speak to one who gets it.
He knew it.
He helped me understand it with one of our engineers.
So yeah, that's just the way it's counted. It's just really strange. So if you didn't know that,
now you know. I have to assume this is based on legacy account structures. Every company has
these. My personal favorite example of that is when you log into the Google Cloud console
and watch your address bar as it steps through various places, it bounces out and then back
to accounts.youtube.com as a part of the way you log into your company's, maybe a bank's,
infrastructure provider systems.
Because, you know, the video site where kids say horrible things in the comments is absolutely
something you want in your critical path.
I digress.
All account management, all user management is horrible.
But what is the answer to this between the individual users
having massive proliferation, the machine identities scaling
at similar massive rates?
How do you wrap your head around this?
One of the things that we found, and this is actually a silver
lining, and we'll get to
machine identities, we haven't even gotten there yet, but we did find a wonderful statistic
of maturity for managing human identities.
And there were 15%, 15% of organizations had zero human users in their environments, which
was a good thing.
It means that they're using a third party SSO provider.
So rather than logging directly into your cloud environment,
you're logging in via a third party.
So that additional layer of security,
like Okta, for example, right?
That's probably the best and most well-known example.
So again, instead of an attacker having direct access,
being able to log directly into your cloud account
and having again, 100, 200 or 7,000 options
directly into your cloud account, there are none.
They have to go through that third party to get in.
And then they wind up getting a role dispensed, which inherently is time bound, as opposed to these permanent credentials you're going to have in a backup somewhere that gets discovered three
years later and then used to exploit you. Yes. So we found 15% of organizations
did have a little layer of maturity and no cloud human users in their environments.
Did you measure that in previous years?
No, I have not. So the last two years we looked at excessive permissions and those were really, really bad numbers the last two years.
So this year was a different approach looking at the human and machine identities.
So I'm hoping next year I'll probably look at these same numbers again
and see if we can find some new trends next year.
I certainly hope so.
Now, getting back to the machine identity piece.
Okay, so the human users, those numbers were weird.
Machine identities: I found 40,000 times more machine identities than humans in an organization. Forty thousand times more. There was one organization in particular, I don't remember how many users there were, but the machine identities, the service accounts: there were 1.6 million machine identities in their environment.
Were they creating a new one every time a container would spin up and then just never deleting it? Or were they using your identity system, akin to my DNS, as a database? Lord knows I've done stranger things.
Poor provisioning. So what we think happened is that they're
just being poorly provisioned, a majority of these have no
access. So they're probably very low risk. They don't have any assigned permissions,
right? So when we think of racking and stacking high to low risk priorities, these would fall
pretty low risk because there's no permissions. If an attacker tried to get into one of these
identities, it would be a little more complicated than say others who do have permissions.
Maybe they try to enumerate the identities and see what has permissions. And this is defense through... it's not even security through obscurity.
It's security through "what the hell are these people doing?"
You know you have a strange approach to things when an attacker breaks in, fixes your environment
and then leaves you AWS credits to have a better attempt the next time.
So they shouldn't be there.
You could remove a lot of these identities, but when you have other concerns, high critical
vulnerabilities in your production environment, those are higher priority than these non-provisioned
machine identities that are sitting around, right?
I mean, you have to prioritize your risks.
These aren't a risk, but it's still not good.
You just got an enormous pile of hay
for the needles to hide in.
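The triage being described here, ranking identities by whether they have any permissions and whether they are actually used, can be sketched roughly like this. The record shape and field names are hypothetical; a real inventory would come from the cloud provider's IAM APIs:

```python
# Hypothetical sketch: triage machine identities for cleanup.
# The record shape is illustrative; a real inventory would come
# from a cloud provider's IAM APIs.

def triage_identities(identities):
    """Split machine identities into removal candidates vs. a review queue.

    Identities with no permissions are low risk but pure haystack:
    flag them for removal. Identities with permissions but no recent
    use deserve a closer look before deletion.
    """
    remove, review = [], []
    for ident in identities:
        if not ident.get("permissions"):
            remove.append(ident["name"])   # no access assigned: safe to delete
        elif not ident.get("used_recently"):
            review.append(ident["name"])   # privileged but idle: audit first
    return remove, review

if __name__ == "__main__":
    inventory = [
        {"name": "ci-runner", "permissions": ["s3:GetObject"], "used_recently": True},
        {"name": "stale-job", "permissions": ["iam:*"], "used_recently": False},
        {"name": "orphan-1", "permissions": []},
    ]
    print(triage_identities(inventory))  # (['orphan-1'], ['stale-job'])
```

This mirrors the prioritization in the conversation: unprovisioned identities are not the highest risk, but clearing them shrinks the haystack, while idle-but-privileged identities are the ones worth a human's attention.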
So, like 40,000 times more machine identities
does seem a little outrageous.
So, I did some data manipulation to try to, I don't know,
make it seem more palatable, I guess.
And I took out some of these outliers,
like the 1.6 million machine identities
and the Azure users who had 7,000 human identities.
So I ended up taking out 11% of organizations and the numbers were,
like I said, a little more digestible. There were about 150 users to 5,000 machine identities,
which is about a 35 times difference. So again, I mean, that in my mind makes more sense. 150 employees, still 5300 machine identities does not sound like
something I want to manage. 150 employees sounds like a real business though.
It really does, but you need a department to handle that.
That's still not good. Human identities are being provisioned well. Machine identities are still a very high risk.
They're not being provisioned well. They need to be taken care of. If they're not being used,
they can go away. That needs to be the next priority. There's a lot of good things in this
report. There are some other things that we need to work on too
that are in here.
But if there's one thing that I could highlight,
it's that we need to focus on these non-human identities
because they're definitely an issue.
If people want to get a copy of this report for themselves,
where's the best place for them to do so?
They can go to sysdig.com.
There's some banners there for you.
There's a press release.
But yeah, it'll be pretty easy to find if you go there.
Our LinkedIn, our website; you can go to my LinkedIn page.
I've got it there too.
Come find me or go to our website.
It'll be really easy for you to find.
No problem.
And all of this will, of course, be in the show notes.
Crystal, thank you so much for taking this time to speak with me today.
I appreciate it.
Thanks for having me. That was a lot of fun.
Crystal Morin, cybersecurity strategist at Sysdig.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five star review on your podcast
platform of choice.
Whereas if you've hated this podcast, please leave a five star review on your podcast platform
of choice, along with an angry comment from one of your 40,000 user accounts.