Screaming in the Cloud - Raising Awareness on Cloud-Native Threats with Michael Clark
Episode Date: October 13, 2022

About Michael: Michael is the Director of Threat Research at Sysdig, managing a team of experts tasked with discovering and defending against novel security threats. Michael has more than 20 years of industry experience in many different roles, including incident response, threat intelligence, offensive security research, and software development at companies like Rapid7, ThreatQuotient, and Mantech. Prior to joining Sysdig, Michael worked as a Gartner analyst, advising enterprise clients on security operations topics.

Links Referenced:
Sysdig: https://sysdig.com/
"2022 Sysdig Cloud-Native Threat Report": https://sysdig.com/threatreport
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is brought to us in part by our friends at Datadog.
Datadog's a SaaS monitoring and security platform that enables full stack observability for developers,
IT operations, security, and business teams in the cloud age. Datadog's platform,
along with 500 plus vendor integrations, allows you to correlate metrics, traces, logs, and security
signals across your applications, infrastructure, and third-party services in a single pane of glass.
Combine these with drag-and-drop dashboards and machine learning-based alerts to help teams troubleshoot and collaborate more effectively, prevent downtime, and enhance performance and reliability.
Try Datadog in your environment today with a free 14-day trial and get a complimentary t-shirt when you install the agent.
To learn more, visit datadoghq.com slash screaming in the cloud to get started. That's www.datadoghq.com
slash screaming in the cloud. Welcome to Screaming in the Cloud. I'm Corey Quinn.
Something interesting about this particular promoted guest episode that is brought to us
by our friends at Sysdig is that when they reached out to set this up, one of the first
things out of their mouth was, we don't want to sell anything, which is novel. And I said,
tell me more because I was also slightly skeptical, but based upon the conversations that I've had and
what I've seen, they were being honest. So my guest today, surprising as though it may be,
is Mike Clark, director of threat research at Sysdig. Mike, how are you doing?
I'm doing great. Thanks for having me. How are you doing?
Not dead yet, so we take what we can get sometimes. You folks have just come out with the 2022
Sysdig Cloud Native Threat Report, which is, on the one hand, it feels like it's kind of a wordy
title. On the other, it actually encompasses everything that it is, and you need every single word of that report.
At a very high level, what is that thing? Sure. So this is our first threat report we've ever done,
and it's kind of a rite of passage, I think, for any security company in the space, you have to have a threat report. And the cloud-native part,
Sysdig specializes in cloud and containers.
So we really wanted to focus in on those areas
when we were making this threat report,
which talks about some of the common threats
and attacks we were seeing over the past year.
And we just wanted to let people know what they are
and how they might protect themselves. One thing that I've found about a variety of threat reports is that
they tend to excel at living in the fear, uncertainty, and doubt space. And invariably,
they paint a very dire picture of the internet about to come cascading down. And then at the
end, there's always the, but there's hope. Click here to set up a meeting with us. It's basically a very thinly veiled cover
around what is fundamentally a fear, uncertainty, and doubt-driven marketing strategy. And then it
tries to turn into a sales pitch. This does absolutely none of that. So I have to ask,
did you set out to intentionally make something
that added value in that way and have contributed to the body of knowledge? Or is it because it's
your inaugural report? You didn't realize you were supposed to turn it into a terrible sales pitch.
We definitely went into that on purpose. There's a lot of ways to fix things,
especially these days with all the different technologies. So we can easily talk
about the solutions without going into specific products. And that's kind of the way we went about
it. There's a lot of ways to fix each of the things we mentioned in the report. And hopefully,
the person reading it finds a good way to do it. I'd like to unpack a fair bit of what's in the
report. And let's be clear, I don't intend to read this report into a microphone.
That is generally not a great way of conveying information that I've found.
But I want to highlight a few things that leapt out to me that I find interesting.
Before I do that, I'm curious to know, most people who write reports, especially ones of this quality,
are not sitting there cogitating in their
office by themselves, and they set pen to paper and emerge four days later with the finished
treatise. There's a team involved. There's more than one person that weighs in. Who was behind this?
Yeah, it was a pretty big team effort across several departments, but mostly it came to the
Sysdig threat research team. It's about 10 people
right now. It's grown quite a bit through the past year. And it's made up of all sorts of
backgrounds and expertise. So we have machine learning people, data scientists, data engineers,
former pen testers and red team, a lot of blue team people, people from the NSA, people from other government agencies as well.
And we're also a global research team. So we have people in Europe and North America working on all
of this. So we try to get perspectives on how these threats are viewed by multiple areas,
not just Silicon Valley, and express fixes that appeal to them too.
Your executive summary on this report starts off with a cloud adversary analysis of Team TNT.
And my initial throwaway joke on that was going to be, oh, when you start off talking about any
entity that isn't you folks, they must have gotten the platinum sponsorship package. But then I read the
rest of that paragraph and I realized that, wait a minute, this is actually interesting and germane
to something that I see an awful lot. Specifically, they are, and please correct me if I'm wrong on
any of this, you are definitionally the expert, whereas I am obviously the peanut gallery. But
you talk about Team TNT as being a threat actor that focuses on
targeting the cloud via crypto jacking, which is an offensive word for, okay, I've gotten access
to your cloud environment. What am I going to do with it? Mine, Bitcoin and other various
cryptocurrencies. Is that generally accurate, or have I missed the boat somewhere fierce on that,
which is entirely possible.
That's pretty accurate.
We also think it's just one person, actually.
And they are very prolific.
They've worked pretty hard to get that platinum support package because they are everywhere.
And even though it's one person, they can do a lot of damage, especially with all the
automation people can make now.
One person can appear like a
dozen. There was an old t-shirt that basically encompassed everything that was wrong with the
culture of the sysadmin world back in the noughts that said, go away or I will replace you with a
very small shell script. But on some level, you can get a surprising amount of work done on computers
just with things like for loops and whatnot.
What I found interesting was that you have put numbers and data behind something that I've always taken for granted and just implicitly assumed that everyone knew.
This is a common failure mode that we all have.
We all have blind spots where we assume the things that we spend our time on is easy.
And the stuff that other people are good at and you're not good at, those are the hard things. It has always been intuitively obvious to me as
a cloud economist that when you wind up spending $10,000 in cloud resources to mine cryptocurrency,
it does not generate $10,000 of cryptocurrency on the other end. In fact, a line I've been using for years is that
it's totally economical to mine Bitcoin in the cloud. The only trick is you have to do it in
someone else's account. And you've taken that joke and turned it into data. Something that you found
was that in one case that you were able to attribute $8,100 of cryptocurrency that were generated by stealing
$430,000 of cloud resources to do it. And oh my God, we now have a number and a ratio and I can
talk intelligently and sound four times smarter. So ignoring anything else in this entire report,
congratulations, you have successfully turned this into what is going to become a talking point of mine.
Value unlocked.
Good work.
Tell me more.
Oh, thank you.
Crypto mining is kind of like a virus in an old on-prem environment.
Normally, it's just cleaned up and never thought of again.
The antivirus software does its thing.
Life goes on.
And I think crypto miners are kind of treated like that.
Oh, there's a miner, let's rebuild the instance or bring a new container online or something like that.
So it's often considered a nuisance rather than a serious threat.
It also doesn't have the dangerous ransomware connotation to it.
So a lot of people generally just
think of it as a nuisance, as I said. So what we wanted to show was it's not really a nuisance,
and it can cost you a lot of money if you don't take it seriously.
And what we found was for every dollar that they make, it costs you $53. And, you know, as you mentioned,
it really puts into view
of what it could cost you
by not taking it seriously.
And that number can scale very quickly,
just like your cloud environment
can scale very quickly.
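The arithmetic behind that headline ratio is easy to sketch from the figures quoted earlier in the conversation, roughly $430,000 of stolen cloud compute producing roughly $8,100 of cryptocurrency:

```python
# Back-of-the-envelope check on the report's cost ratio, using the figures
# quoted above: ~$430,000 of stolen cloud compute produced ~$8,100 of
# cryptocurrency for the attacker.
victim_cost = 430_000   # what the compromised account was billed, USD
attacker_gain = 8_100   # what the miner actually earned, USD

cost_per_dollar_mined = victim_cost / attacker_gain
print(f"Every $1 mined costs the victim about ${cost_per_dollar_mined:.0f}")
# → about $53, matching the "for every dollar they make, it costs you $53" figure
```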
They say this cloud scales infinitely,
and that is not true.
First, I've tried it; it didn't work.
Secondly, it scales, but there is an
inherent limit, which is your budget on some level. I promise they can add hard drives to S3 faster
than you can stuff data into it. I've checked. One thing that I've seen recently was, speaking of S3,
I had someone reach out in what I will charitably refer to as a blind panic because they were using AWS to do something.
Their bill was roughly $4 a month in S3 charges.
Very reasonable.
That carries you surprisingly far.
And then they had a credential leak and they had a threat actor spin up all the Lambda functions in all of the regions.
And it went from $4 a month to $60,000 a day.
And it wasn't caught for six days.
And then AWS, as they tend to do, very straight face, says, yeah, we would like our $360,000,
please.
At which point people start panicking.
Because a lot of the people who experience this are not themselves sophisticated customers. They're students. They're learning how this stuff works.
And when I'm paying $4 a month for something, it is logical and intuitive for me to think that,
well, if I wind up being sloppy with their credentials, they could run that bill up to
possibly $25 a month and that wouldn't be great. So I should keep an eye on it. Yeah, you dropped a whole bunch
of zeros off the end of that. Here you go. And as AWS spins up more and more regions, and as they
spin up more and more services, the ability to exploit this becomes greater and greater.
This problem is not getting better. It is only getting worse by a lot.
Oh, yeah, absolutely. And I feel really bad for those students who do have that happen to them.
I've heard on occasion that the cloud providers will forgive some debts, but there's no guarantee of that happening from breaches.
And the more that breaches happen, the less likely they are going to forgive it because they still have to pay for it.
Someone's paying for it in the end. And if you don't improve and fix your environment
and it keeps happening, one day they're just going to stick you with the bill.
To my understanding, they've always done the right thing. When I've highlighted something to them,
I don't have intimate visibility into it. And of course, they have a threat model themselves of,
OK, I'm going to spin up a bunch of stuff, mine cryptocurrency for a month,
cry and scream and pretend I got hacked because fraud is very much a thing. There is a financial
incentive attached to this. And they mostly seem to get it right. But the danger that I see for
the cloud provider is not that they're going to stop being nice and giving money away, but
assume you're a student
who just winds up getting
more than your entire college tuition
as a surprise bill for this month
from a cloud provider.
Even assuming at the end of that,
everything gets wiped
and you don't owe anything.
I don't know about you,
but I'd never use that cloud provider again
because I've just gotten the firsthand lesson
in exactly what those risks are.
It's bad for the brand.
Yeah, it really does scare people off of that.
Now, some cloud providers try to offer
more proactive protections against this,
try to shut down instances really quick.
And, you know, you can take advantage of limits
and other things,
but they don't make that really easy to do.
And setting those up is critical for everybody.
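As one concrete shape for the kind of spending guardrail being described, here is a minimal sketch of a monthly cost budget in the form that boto3's `budgets.create_budget(AccountId=..., Budget=...)` call expects. The budget name and limit are illustrative assumptions, not anything from the conversation, and actually creating it requires credentials; only building the definition is shown:

```python
# A minimal sketch (assumed name/limit) of a spend guardrail: an AWS Budgets
# definition in the dict shape that boto3's budgets.create_budget expects.
# Only the definition is built here; no API call is made.
def monthly_cost_budget(limit_usd: int, name: str = "runaway-spend-alert") -> dict:
    return {
        "BudgetName": name,
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }

budget = monthly_cost_budget(25)  # alert well before $60,000-a-day territory
```

Pairing a budget like this with a notification subscriber is what turns a six-day, $360,000 surprise into a same-day email.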
The one cloud provider that I've seen get this right, of all things, has been Oracle Cloud, where they have an always free
tier until you affirmatively upgrade your account to chargeable, they will not charge you a penny.
And I have experimented with this extensively, and they're right. They will not charge you a
penny. They do have warnings plastered
on the site, as they should, that until you upgrade your account, do understand that if you exceed a
threshold, we will stop serving traffic. We will stop servicing your workload. And yeah, for a
student learner, that's absolutely what I want. For a big enterprise gearing up for a giant Super Bowl
commercial or whatnot, it's, yeah, I don't care what it costs. Just make sure you continue serving traffic. We don't get a redo
on this. And without understanding exactly which profile a given customer falls into,
whenever the cloud provider tries to make an assumption and a default in either direction,
they're wrong. Yeah, I'm surprised that Oracle Cloud of all clouds, that's good to hear that they actually have a free tier.
Now, we've seen attackers abuse free tiers quite a bit.
It all depends on how people set it up.
And this actually, it's a little outside of threat report,
but the CID, CD pipelines, and DevOps,
anywhere there's free compute,
attackers will try to get their miners in
because it's all about scale and not quality.
Well, that is something I'd be curious to know, because you talk about focusing specifically on
cloud and containers as a company, which puts you in a position to be authoritative on this.
That Lambda story that I mentioned about surprise, $60,000 a day in crypto mining,
what struck me about that and caught me by surprise was not what I think would catch most people who didn't swim in this world by surprise of you can spend that much.
In my case, what I'm wondering about is, well, hang on a minute.
I did an article a year or two ago, 17 ways to run containers on AWS and listed 17 AWS services that you could use to run containers. And a few months later, I wrote
another article called 17 More Ways to Run Containers on AWS. And people thought I was
belaboring the point and making a silly joke. And on some level, of course I was. But I was also
highlighting very clearly that every one of those containers running in a service could be mining
cryptocurrency. So if you get access to
someone else's AWS account, when you see those breaches happen, are people using just the one
or two services they have things ready to go for, or are they proliferating as many containers as
they can through every service that borderline supports it? From what we've seen, they usually
just go after compute, like EC2, for example.
As it's the most well understood, it gets the job done. It's very easy to use and then get your
miner set up. So if they happen to compromise your credentials, versus the other method the crypto miners or cryptojackers use, which is exploitation, then they'll try to spread through whatever other EC2 instances they can and spin up as much as they can.
But the other interesting thing is if they get
into your system, maybe via an exploit
or some other misconfiguration,
they'll look for the instance metadata service as soon as they get in, to try to get your IAM credentials and see if they can leverage them to also spin up things through the API.
So they'll spin up one on the thing they compromise
and then actively look for other ways to get even more.
Restricting the permissions that anything has
in your cloud environment is important.
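The metadata-service step described here can be sketched like this. The endpoint and paths are AWS's documented instance metadata service (IMDS); the function only constructs the URLs an attacker would walk, it makes no network calls:

```python
# Sketch of the credential-theft pattern described above: from inside a
# compromised instance, an attacker walks the instance metadata service
# (IMDS) to pull the attached role's temporary credentials. These are the
# documented AWS IMDS paths; this code only builds the URLs.
IMDS = "http://169.254.169.254/latest/meta-data"

def credential_theft_path(role_name: str) -> list[str]:
    return [
        f"{IMDS}/iam/security-credentials/",             # 1. list attached role names
        f"{IMDS}/iam/security-credentials/{role_name}",  # 2. fetch that role's temporary keys
    ]
```

Requiring IMDSv2 (`aws ec2 modify-instance-metadata-options --http-tokens required`) makes the naive one-shot GET fail, and tightly scoped instance-role permissions limit what any stolen credentials can spin up.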
I mean, from my perspective,
if I were to have my account breached,
yes, they're going to cost me a giant pile of money, but I know the magic incantations to say
to AWS and worst case, everyone has a pet or something they don't want to see unfortunate
things happen to. So they'll waive my fee. That's fine. The bigger concern I've got in seriousness,
and I think most companies do, is the data. It is the access to things in the account. In my case, I have a number of my
clients' AWS bills, given that that is what they pay me to work on. And I'm not trying to undersell
the value of security here, but on the plus side that helps me sleep at night, that's only money.
There are data sets that are far more damaging and valuable about that. The worst sleep
I ever had in my career came during a very brief stint I had about 12 years ago when I was the
director of tech ops at Grindr, the dating site. At that scenario, if the data had been breached,
people could very well have died. They live in countries where that winds up not being something
that is allowed or their family now winds up shunning them and whatnot.
And it's that's the stuff that keeps me up at night compared to that is, well, you cost us some money and embarrassed a company.
It doesn't really rank on the same scale to me.
Yeah, I guess the interesting part is data requires a lot of work to do something with for a lot of attackers. It may be opportunistic and come across interesting data, but they need to do something with it.
There's a lot more risk once they start trying to sell the data.
Or like you said, if it turns into something very unfortunate, then there's a lot more
risk from law enforcement coming after them.
Whereas with crypto mining, there's very little risk from being
chased down by the authorities. Like you said, people, they rebuild things and ask AWS for credit
or whoever and move on with their lives. So that's one reason I think crypto mining is so popular
among threat actors right now. It's just a low risk compared to other ways of doing things. It feels like it's a nuisance. One thing that I was dreading when I got this copy of the report
was that there was going to be what I see so often, which is let's talk about ransomware in the cloud,
where people talk about encrypting data in S3 buckets and sneakily polluting the backups that
go into different accounts and how you're air gapping and the rest. And I don't see that in the wild. I see that in the fear-driven marketing from companies that have a thing that they say will fix that. Where I do see ransomware land is their corporate network, it is on-premises environments, it is servers perhaps running
in AWS, but they're being treated like servers would be on-prem, and that is what winds up
getting encrypted.
I just don't see the attacks that everyone is warning about.
But again, I am not primarily in the security space.
What do you see in that area?
You're absolutely right.
We don't see that at all either. It's certainly theoretically possible, and it may have happened, but there just doesn't seem to be that appetite to do that. Now, the reasoning, I'm not 100% sure why, but I think it's easier to make money with crypto mining. Even with the crypto markets the way they are, it's essentially free money,
no expenses on your part. So maybe they're not looking, because again, that requires more effort
to understand, especially if it's not targeted, what data is important. And then it's not exactly
the same method to do the attack. There's versioning, there are all these other hoops you
have to jump through to do an
extortion attack with buckets and things like that. Oh, it's high risk and it feels dirty too.
Whereas you just, I guess on some level psychologically, if you're just going to
spin up a bunch of coin mining somewhere and then some company finds it and turns it off,
whatever. You're not, as in some cases, shaking down a children's hospital. Like that's one of
those, great. I can't imagine how you deal with that as
a human being, but I guess it takes all types. This does get us to sort of the second tentpole
of the report that you've put together specifically around the idea of supply chain attacks against
containers. There have been such a tremendous number of think pieces, thought pieces, whatever
they're called these days, talking about a software bill
of materials and supply chain threats. Break it down for me. What are you seeing? Sure. So
containers are very fun because you can define things as code about what gets put on it.
And they become so popular that sharing sites have popped up, like Docker Hub and other public registries, where you can easily share your container.
It has everything built, set up, so other people can use it.
But attackers have kind of taken notice of this, too; wherever anything's easy, an attacker will be.
So we've seen a lot of malicious containers be uploaded to these systems. A lot of times they're just hoping for a developer or user to come along and use them because
Docker Hub does have the official designation.
So while they can try to pretend to be like Ubuntu, they won't be the official.
But instead, they may try to SEO theirs, and links and things like that, to entice people to use theirs instead. And then when they do, it's already preloaded with a miner or other malware.
So we see quite a bit of these containers in Docker Hub.
And they're disguised as many different popular packages.
They don't stand up to too much scrutiny, but enough that a casual looker, even at the Dockerfile, may not see it.
So yeah, we see a lot of that.
And embedded credentials is another big part that we see in these containers.
Well, that could be an organizational issue, like just a leaked credential, but you can put malicious credentials into Dockerfiles too, like, say, an SSH key so that if they start this up, the attacker can now just SSH in, or other API keys, or other AWS commands you can put in there. You can put really anything
in there, wherever you load it, it's going to run. So you have to be really careful.
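A crude static check for the embedded-credential problem just described might look like the sketch below. The patterns are the common, publicly known ones (PEM private-key headers, `AKIA…`-style AWS access key IDs), and a scan like this is no substitute for the runtime visibility discussed later in the conversation:

```python
import re

# Crude static scan (a sketch, not a product) for embedded credentials in a
# Dockerfile's text: flag lines containing private-key headers or strings
# matching the AWS access key ID format.
SUSPICIOUS = [
    re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
]

def flag_lines(dockerfile_text: str) -> list[tuple[int, str]]:
    """Return (line number, stripped line) for each suspicious line."""
    hits = []
    for lineno, line in enumerate(dockerfile_text.splitlines(), start=1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((lineno, line.strip()))
    return hits
```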
This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve and
occasionally create problems, but not when it's an on-call fire drill at four in the morning.
Software problems should drive innovation and collaboration, not stress and sleeplessness and threats of violence.
That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature flags let developers push code to production,
but hide that feature from customers
so that the developers can release their feature
when it's ready.
This practice allows for safe, fast,
and convenient software development.
You can seamlessly incorporate AppConfig feature flags
into your AWS or cloud environment
and ship your features with excitement, not
trepidation and fear. To get started, go to snark.cloud slash appconfig. That's snark.cloud
slash appconfig. Years ago, I gave a talk out in the conference circuit called Terrible Ideas in
Git that purported to teach people how Git worked
through hilarious examples of misadventure.
And the demos that I did on that were,
well, this was fun and great,
but it was really annoying resetting them
every time I gave the talk.
So I stuffed them all into a Docker image
and then pushed that up to Docker Hub.
Great, it was awesome.
I didn't publicize it and talk about it,
but I also just left it as an open repository there
because what are you going to do?
It's just a few directories in the root
that have very specific contrived scenarios
with Git set up and ready to go.
There's nothing sensitive there.
The thing is called terrible ideas.
And I just kept watching the download numbers
continue to increment week over week.
And I took it down because it's,
I don't know what people are going to do with that. Like you see something on there and says terrible ideas for all I know,
some bank is like, and that's what we're running in production now. So who knows? But the idea of
not that there was necessarily anything wrong with that, but the fact that there's this theoretical
possibility, someone could use that or put the wrong string in, if I give an example, and then wind up running something that is fairly compromisable in a serious environment, was just something I didn't
want to be a part of. And you see that again and again and again. This idea of what Docker unlocks
is amazing, but there's such a tremendous risk to it. I mean, I never understood 15 years ago
how you're going to go and spin up a Linux server on top of EC2 and just
grab a community AMI and use that. Yeah, I used to take provisioning hardware very seriously to
make sure that I wasn't inadvertently using something compromised. Here, it's like, oh,
just grab whatever seems plausible from the catalog and go ahead and run that.
But it feels like there's so much of that turtles all the way down. Yeah. And I mean, even if you look
at the Dockerfile, with all the dependencies that are the things you
download, it really gets to be difficult. So
I mean, to protect yourself, it really becomes about like, you know, you can do the static
scanning of it, looking for bad strings in it or bad
version numbers for vulnerabilities.
But it really comes down to runtime analysis.
So when you start the Docker container, you really need the tools to have visibility to
what's going on in the container.
That's the only real way to know if it's safe or not in the end, because you can't eyeball
it and really see all that.
And there could be a binary stored in one of the layers, too.
Those will get run, and things like that.
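A toy version of the runtime-detection idea, assuming nothing beyond standard Python: compare the processes observed in a running container against the handful of miner binaries that show up constantly in the wild (`xmrig` and friends). Real tools watch syscalls and behavior, not just names, but the shape is the same:

```python
# Toy sketch of runtime detection: the simplest form is comparing observed
# process names against known coin-miner binaries. Production tools inspect
# syscalls and behavior rather than relying on names alone.
KNOWN_MINERS = {"xmrig", "minerd", "xmr-stak", "cpuminer"}

def suspicious_processes(observed: list[str]) -> set[str]:
    """Return the subset of observed process names that match known miners."""
    return {name for name in observed if name.lower() in KNOWN_MINERS}

# e.g. a process list pulled from a container at runtime (illustrative)
procs = ["nginx", "postgres", "xmrig"]
```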
Hell is other people's workflows, as I'm sure everyone's experienced themselves.
But one of mine has always been that if I'm doing something as a proof of concept to build
it up on a developer box, and I do keep my developer environments for these sorts of
things isolated, I will absolutely go and grab something that is plausible looking from Docker Hub
as I go down that process.
But when it comes time to wind up
putting it into a production environment,
okay, now we're going to build our own resources.
Yeah, I'm sure the Postgres container
or whatever it is that you're using is probably fine,
but just so I can sleep at night,
I'm going to take the public Docker file they have
and I'm going to go ahead and build that myself. And just, I feel better about doing that rather than trusting
some rando user out there and whatever it is that they've put up there, which on the one hand feels
like a somewhat responsible thing to do. But on the other, it feels like I'm only fooling myself
because some rando putting things up there is kind of what the entire open source world is to a point. Yeah, that's very true. At
some point, you have to trust some product or some foundation to have done the right thing.
But what's also about containers is they're attacked and used for attacks, but they're also
used to conduct attacks quite a bit. And we saw a lot of that with the Russian-Ukrainian conflict this year.
Containers were released that were preloaded
with denial of service software
that automatically collected target lists
from, I think, GitHub they were hosted on.
So all a user to get involved had to do
was really just get the container and run it.
That's it.
And now they're participating
in this cyber war kind of activity.
And they could also use this to put on a botnet, or if they compromise an organization, they can
spin up all these instances with that Docker container on it. And now that company is implicated
in that cyber war. So they can also be used for evil. Yeah, this gets to the third point of your
report. Geopolitical conflict influences attacker behaviors.
Something that happened in the early days
of the Russian invasion
was that a bunch of open source maintainers
would wind up either disabling what their software did
or subverting it into something actively harmful
if it detected it was running in the Russian language
and or in a Russian time zone.
And I understand the desire
to do that. Truly, I do. I am no Russian apologist, let's be clear. But the counterpoint to that as
well is that, well, to make a reference I made earlier, Russia has children's hospitals too.
And you don't necessarily know the impact of fallout like that, not to mention that you have
completely made it untenable to use anything you're doing for a regulated industry or anyone else who gets caught
in that and discovers that that is now in their production environment, it really sets a lot of
stuff back. I've never been a believer in that particular form of vigilantism, for lack of a
better term. I'm not sure that I have a better answer, let's be clear. I always knew that on some level
the risk of opening that Pandora's box
were significant.
Even if you're doing it for
the right reasons, it still erodes trust.
Especially, it erodes trust
throughout open source.
Not just one project, because you'll start thinking,
oh, how many other projects might do this?
Wait, maybe those dirty hippies
did something in our,
like, I don't know.
Would they have let those people anywhere near this operating system Linux thing that we use?
I don't think they would have done that.
Red Hat seems trustworthy and reliable.
And it's, yo, someone needs to crack open
a history book on some level.
It's a sticky situation.
I do want to call out something here
that it might be easy to get the wrong idea
from the summary that we just gave. Very few things wind up raising my hackles quite like companies using
tragedy to wind up shilling whatever it is they're trying to sell. And I'll admit when I first got
this report and I saw, oh, you're talking about geopolitical conflict. Great. I'm not super proud
of this, but I was prepared to read you the riot act more or less when I inevitably got to that.
And I never did. And nothing in this entire report even hits in that direction.
Was it you never got to it or?
Oh, no, I read the whole thing. Let's be clear. You're not using that to sell things in the way
that I was afraid you were. And simultaneously, I want to just point that out because that is laudable.
At the same time, I am deeply and bitterly resentful that that even is laudable.
That should be the common state.
Capitalizing on tragedy is just not something that ever leaves any customer feeling good
about one of their vendors.
And you've stayed away from that.
I just want to call that out as doing the right thing.
Thank you.
It was actually a big topic about how we should broach this.
But we had a good data point: right after it started, there was a huge spike in denial-of-service installs.
And we have a bunch of data collection technology, honeypots and other things. And we saw, the day after, crypto mining started going down and denial-of-service installs started going up.
So it was just interesting how that community changed their behaviors, at least for a time, to participate in whatever you want to call the hacktivism. Over time, though, it kind of has come back to the norm where
maybe they've gotten bored or something or run out of funds, but they're starting crypto mining
again. But these events can cause big changes in the hacktivism community. And like I mentioned,
it's very easy to get involved. We saw over 150,000 downloads of those pre-canned denial-of-service containers. So it's definitely
something that a lot of people participated in. It's a truism that war drives innovation and
different ways of thinking about things. It's a driver of progress, which says something deeply
troubling about us. But it's also clear that it serves as a driver for change, even in this space,
where we start to see
different applications of things; we see different threat patterns start to emerge.
I mean, one thing I do want to call out here that I think often gets overlooked in the larger
ecosystem and industry as a whole is, well, no one's going to bother to hack my nonsense; I don't have
anything interesting for them to look at. It's, on some level, an awful lot of people running tools like this aren't sophisticated enough themselves to determine that. And combined with your first
point in the report as well, that, well, you have an AWS account, don't you? Congratulations. You
suddenly have enormous piles of money, from their perspective, sitting there relatively unguarded.
Yay! Security has now become everyone's problem once again.
Right.
And it's just easier now.
I mean, it was always everyone's problem,
but now it's even easier for attackers to leverage almost everybody.
Like before, you had to get something on your PC;
you had to download something.
Now, a search of GitHub
can find API keys,
and then that's it.
You know, things like that will make it game over or your account gets compromised and
big bills get run up.
And yeah, it's very easy for all that to happen.
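[As context for the point above: attackers automate exactly this kind of GitHub search. The sketch below is illustrative, not Sysdig's tooling; it scans text for the well-documented "AKIA" prefix that AWS access key IDs use. Real secret scanners such as git-secrets or trufflehog cover far more credential formats.]

```python
import re

# AWS access key IDs have a documented shape: the literal prefix "AKIA"
# followed by 16 uppercase alphanumeric characters. Attackers grep public
# repositories for this pattern; defenders can run the same check on their
# own code before pushing. Illustrative sketch only.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return any substrings of `text` that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(text)

# AWS's own documented example key, the kind that gets committed by accident:
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops, committed'
print(find_exposed_keys(sample))  # → ['AKIAIOSFODNN7EXAMPLE']
```

[Once a key like this lands in a public repo, the "game over" scenario Michael describes follows: the account is compromised and big bills get run up, often within minutes.]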
I do want to ask at some point, and I know you asked me not to do it, but I'm going to
do it anyway, because I have this sneaking suspicion that given that you've spent this much time on studying this problem space,
that you probably as a company have some answers around how to address the pain that lives in these
problems. What exactly at a high level is it that Sysdig does? How would you describe that in an
elevator without sabotaging the elevator for 45 minutes
to explain it in depth to someone?
So I would describe it as threat detection and response
for cloud containers and workloads in general
and all the other kind of acronyms for cloud,
like CSPM, CIEM.
They're inventing new and exciting acronyms all the time.
And honestly, at this point,
I want to have almost an acronym challenge of,
is this a cybersecurity acronym or is it an audio cable?
Which is it?
Because it winds up going down that path super easily.
I was at RSA walking the expo floor
and I had, I think, 15 different companies I counted
pitching XDR without a single one bothering to explain what that meant.
And OK, I guess it's just a thing we've all decided we need.
It feels like security people selling to security people on some level.
I was a Gartner analyst.
Yeah. Oh, that would do it then. Terrific.
So it's partially your fault then.
No, I was going to say, I don't know what it means either.
Yeah.
So I have no idea. I couldn't tell you.
I'm only half kidding when I say, in many cases, from the vendor perspective, it seems
like what it means is whatever it is they're trying to shoehorn the thing that they built
into filling.
It's kind of like observability.
Observability means what we've been doing for 10 years already, just repurposed to catch
the next hype wave.
Yeah.
The only thing I really understand is detection and response.
Those are very clear.
Detect things and respond to things.
So that's a lot of what we do.
It's got to beat the default detection mechanism
for an awful lot of companies
who in years past have found out
that they have gotten breached
from a headline in the New York Times.
It's always fun when that,
wait, what, what, that's odd.
What, how did we not know this was coming? When a third party tells you that you've been breached, it's never as
positive, not that it's a positive experience anyway, as discovering it yourself internally.
And this stuff is complicated. The entire space is fraught and it always feels like
no matter how far you go, you could always go further. But left to its inevitable conclusion,
you'll burn
through the entire company budget purely on security without advancing the other things
that company does. Yeah. It's a balance. It's tough because there's a lot to know in the security
discipline. So you have to balance how much you're spending with how much your people actually know
and can use the things you've spent money on. I really want to thank you for taking the time to go through the findings of
the report for me.
I had skimmed it before we spoke,
but talking to you about this in significantly more depth,
every time I start going to cite something from it,
I find myself coming away more impressed.
This is now actively going on my calendar to see what the 2023 version looks
like.
Congratulations.
You've gotten me hooked.
If people want to download a copy of the report
for themselves, where should they go to do that?
They can just go to sysdig.com slash threat report.
And thank you for having me.
It's a lot of fun.
No, thank you for coming.
Thanks for taking so much time to go through this.
And thanks for keeping it to the high road,
which I did not expect to discover
because no one ever seems to.
Thanks again for your time. I really
appreciate it. Thanks. Have a great day. Mike Clark, Director of Threat Research at Sysdig.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this
podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've
hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment pointing out that I didn't disclose the
biggest security risk of all to your AWS bill: an AWS solutions architect who's working on commission.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duck Bill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duck Bill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.