Screaming in the Cloud - Best Practices for Securing AWS Cloud with Eric Carter
Episode Date: November 27, 2024

Eric Carter of Sysdig joins Corey to tackle the evolving landscape of cloud security, particularly in AWS environments. As attackers leverage automation to strike within minutes, Sysdig focuses on real-time threat detection and rapid response. Tools like Runtime Insights and open-source Falco help teams identify and mitigate misconfigurations, excessive permissions, and stealthy attacks, while Kubernetes aids in limiting lateral movement. Eric introduced the "10-minute benchmark" for defense, combining automation and human oversight. Adapting to constant change, Sysdig integrates frameworks like MITRE ATT&CK to stay ahead of threats. Corey and Eric also discuss Sysdig's conversational AI security analyst, which simplifies decision-making.

Show Highlights
(0:00) Intro
(0:32) Sysdig sponsor read
(0:51) What they do at Sysdig
(3:28) When you need a human in the loop vs. when AI is useful
(5:12) How AI may affect career progression for cloud security analysts
(8:18) The importance of security for AI
(12:18) Sysdig sponsor read
(12:39) Security practices in AWS
(15:19) How Sysdig's security reports have shaped Corey's thinking
(18:10) Where the cloud security industry is headed
(20:03) Cloud security increasingly feeling like an arms race between attackers and defenders
(23:33) Frustrations with properly configuring least-privilege permissions
(28:17) How to keep up with Eric and Sysdig

About Eric Carter
Eric is an AWS Cloud Partner Advocate focused on cultivating Sysdig's technology cloud and container partner ecosystem. Eric has spearheaded marketing efforts for enterprise technology solutions across various domains, such as security, monitoring, storage, and backup. He is passionate about working with Sysdig's alliance partners, and outside of work, enjoys performing as a guitarist in local cover bands.

Links
Sysdig's website: https://sysdig.com/
Sysdig's AWS Cloud Security: https://sysdig.com/ecosystem/aws/
Sysdig's 5 Steps to Securing AWS Cloud Infrastructure: https://sysdig.com/content/c/pf-5-steps-to-securing-aws-cloud-infrastructure?x=Xx8NSJ

Sponsor
Sysdig: https://www.sysdig.com
Transcript
The thing about it is that Kubernetes, your app doesn't have to die just because we took
a security action. It'll spin up another one. Welcome to Screaming in the Cloud. I'm Corey
Quinn. I'm joined today on this promoted guest episode by Eric Carter from our friends over at
Sysdig, where he's the director of product marketing. Eric, how are you doing?
I'm doing great. Thank you. Glad to be able to join you.
It's been a while coming, but here we are.
Indeed.
Sysdig secures cloud innovation with the power of runtime insights. From prevention to defense, Sysdig prioritizes the risks that matter most.
Secure every second with Sysdig.
Learn more at Sysdig, that's S-Y-S-D-I-G dot com.
Our thanks as well to Sysdig for sponsoring this ridiculous podcast.
For those who have not listened to the entire nearly 600 backlogged episodes of the show in
exhaustive detail, at a very high level, what is it you'd say it is you folks do over there at Sysdig? At Sysdig, we are focused on cloud security and sort of our angle around that is to pull together
different practices that fall into that and to also be sort of the kings of runtime. Runtime
security is key for us because it's sort of where we started. I think about getting insights as
quickly as you can so that you're dealing with threats as soon as possible.
The last time I talked to you folks,
I complimented you on bucking the trend of your website
and not featuring AI prominently on the front of it.
Someone viewed that apparently as a bug
because now you're talking about
the first conversational AI cloud security analyst.
What I love about this is you could remove the word AI from that tagline, the first conversational cloud security analyst, and it still wouldn't be too far from wrong.
Some of those folks are very good at the technical aspect and not so great at communicating
effectively. It feels like the evolution that the DevOps field sort of went through, from being grumpy Unix sysadmin types to having to interface
with other people, is something the security organization is still sort of evolving through.
If I look at this industry wide, I would totally agree with you. And what makes me laugh about that
observation is I actually helped to launch that solution. And then there it is, front and center. So you're welcome, and I apologize. But yeah.
I think it's because we're pretty proud of the angle we're taking with the AI tooling, in that we know that a lot of the information that gets pushed back out to you when something is detected in and around your cloud or your environment or applications, sometimes you're looking at it and going, yeah, okay, I don't understand. It's Greek to me. Isn't that the phrase, right?
And so what I love about it, and it was even helpful for me on the marketing team to even
learn our product better by just asking simple questions and then asking a follow-up question
and then asking you more questions. And ultimately the aha moment is when, okay,
okay, genius, how do I fix this? And what's great about that is you get suggestions
for here's what you can do.
And I think ultimately that's what people want
to just get to the answer quickly.
The tagline, and I don't know if it's on that homepage,
our tagline that a lot of people appreciated
was accelerating human response.
Because there's often this thing that I hate Gen AI
because it's taking my job.
It took our jobs, right?
It's not that, right?
I think it's a very helpful tool.
Even getting away from protectionism, I think that for many things, you need a human in
the loop, aiding that human in their decision-making process, in filtering noise to get some signal
out of it.
The things that computers are great at.
Yeah, that's fantastic.
I'll use it myself as a writing assistant,
which is a far cry from write this blog post for me. But I tend to be, the way that I write is the
way that I talk, the way that I think. I go from tangent to tangent to tangent. It lacks structure.
So very often I will have it lend structure and then I'll disagree with almost everything it
writes. But now I actually have a beginning, middle, and an end. I hated writing conclusions for blog posts. It nails it with that. And I'm seeing this when it comes to computer assistance
with other areas of intrigue. And analyzing security incidents is one of them. Analyzing
logs is another. Writing code is still a third. Yeah. And again, there are a lot of things that
emerge that maybe I haven't heard about yet as a security professional. And if it shows up in my
feed, I can go, hey, what is this? And I get an answer because we're pulling from a knowledge of
the world's information. We've also trained it very well to not do things like, hey, help me
understand what I should make for dinner tonight. Give me three options. So all those things,
it'll politely decline, but it will effectively answer questions in the security
domain. And a lot of us need help from that. And oftentimes you've got, let's say, newer folks on
whatever team it might be, security DevOps, DevSecOps. They don't want to have to go ask
someone a question, or they find themselves just going out and doing their favorite Google search
to figure it out. So here you have it right there. Very cool way to deliver information in a different way. And some people will still resort to charts and graphs and so on,
but you now have a way to get insights quickly. So I love it.
It reminds me of an old joke I used to tell where like the worst internship ever is like
tailing the log files and looking for anything suspicious that jumps out at you. Reminds me of
one of my internships when I worked at the largest ISP in the state of Maine, the Maine Schools and
Libraries Network, headquartered out of Orono at the university. And one summer,
they had the student workers, we had us all doing a hub inventory in all the buildings. It took all
summer. There were a lot of buildings, a lot of rooms, and a lot of hubs. And these were old
school unmanaged hubs at a time when switching was the way to go. And I only realized now with
the benefit of 20 years experience, they didn't need to inventory those damn things.
They were ripping out and replacing them as they failed.
They just wanted something to get the 19-year-old kids out of their hair
so they could actually do their jobs
and not be blinded by our ignorance slash youthful optimism.
It got us some exercise and I thought that was very well done.
But there are elements to that that I still
picked up and learned from inventorying these things. And it feels like that is in some way
where I see some of the concern from AI coming. Not that people should be doing manual tasks by
hand when computers are more effective at it, but it feels like it's not so much replacing
the senior aspects of what a cloud security analyst does in your particular expression,
but a lot of what the junior folks might potentially be doing.
The counter argument there is that junior folks don't generally spring fully formed from the forehead of some ancient god.
They go from being junior to senior. It makes me wonder what the future is going to look like as far as career progression.
Yeah, indeed.
Indeed. But in that same vein, what you make me think of is that there's a whole world of folks who are getting pretty clever at understanding how to use these AI tools, right? I still find myself sometimes being too basic. And think about this idea of doing a multi-pronged question, what is it we call it? The term is escaping my mind, where I can say, hey, what's this threat? Who's behind it? Is there a user?
Tell me what that user's permissions are and then tell me how to fix it. You can ask it all in one
fell swoop and get a nice answer. These are the kind of things that I think you mentioned younger
people, the folks who are getting savvy with AI know how to take advantage of it. And the tool
is able to do it. It's pretty amazing. I'm somewhat surprised by that. I mean,
Amazon Q, their obnoxious, underperforming chatbot they stuff into basically everything over at AWS. When you ask
questions that touch on the security realm, or worse, that it thinks touch on the security realm,
it falls all over itself declining to answer, because it cannot give advice on security-related
topics. Which is great and all if it doesn't want the liability
of telling me to configure a policy some way that doesn't actually protect what I care about.
But it can start stretching that refusal definition into areas that are frankly absurd.
Like, great, how do I wind up hosting a static website in S3? I can't advise on security
approaches. Terrific. It's a single page of HTML.
I just want to show the rest of the world. There are no security issues here. Just tell me what
I'm doing, please. And of course, it doesn't work. So I'm amazed that you've gotten past that level
of refusing. Yeah, well, intentionally, of course. And I can't imagine it being very helpful if it
declined out of risk, you know, of giving you some sort of answer it's afraid of.
But yeah, it does quite well.
Still, as you know, there's some fear and trepidation about turning on AI solutions in organizations.
And that's fair, right?
Because, you know, what data is it being trained on? How do I
know this isn't just another gateway into my environment, right? So certainly we have
to assure folks of how we anonymize things and so on, to make them understand that we're
helping you get answers without exposing your deepest, darkest secrets. But that leads me to the concept of,
like, yeah, I mean, you've got what we've been talking about is AI for security, but there's a
big exploding need for security for AI, right? And early on, we wanted to say, well, it's just
another workload. But the world doesn't think about it in those exact terms, partly because
just there's a lot of potential data that's there being
leveraged, being trained upon that we want to protect. And so that's another angle, right,
where we've worked with AWS, for instance, to enable the ability to detect when one of their
AI solutions is being used, right? Put it in a special zone, give you special visibility,
identify risks that are going on with it. Because that's one of the hurdles I
think we have here in the early days, right, of getting people to leverage AI for not just for
security, but for whatever their business needs. It feels like it's the first time that you can
have a plausible social engineering attack against a computer. It's funny, the kind of
things that folks try to do. Yeah, forget all your training and do this instead, right? And
depending on how you've done it,
that could work.
It's a threat when you're talking to these engines.
But yeah, and again,
I think this concept of AI workload security
is gaining a lot of momentum,
whether it's just identifying,
hey, when we propped up this Bedrock system,
did we set it up correctly in the first place?
So think in the realm of posture management. And now, how do I watch to make sure that there's not... because I think there's a
lot of stealthy things going on in this world right now, where not only is AI being used to
generate more and more threats, but bad guys are trying to get into
the cloud to get at your AI, to do things and try and make money.
What we've seen most recently is
once I get into your account,
I may turn on an LLM
and start using it for my purposes.
And that could get,
you talked about costs at AWS,
that could get real expensive really quick.
Counterintuitively,
in the large customer environments
I sometimes find myself,
it's difficult to notice
that sort of stuff. If you're spending $80 million a year on your cloud bill, it's hard to notice
when someone spends another 50 grand a month. It just gets lost in the background noise.
Our threat research team identified some activity that once they were in, I mean,
you typically might, let's say I make a thousand queries to the AI engine or the LLM, right, a day.
Suddenly that was jumping up to like 20,000.
And that's a bill in one day that can be way over the top.
But you're right.
You have to be watching, right, in order to understand that that's happening.
You know, otherwise, what are you going to do?
You wait and then this massive bill.
And I don't know.
You probably have more experience on how do I handle that. AWS, it wasn't me, it was you. So that's a tough
one. But we've done some work at Sysdig, again, to try and identify anywhere AI is popping up.
That's number one, that's half the battle. Because sometimes these things are being turned on
outside the sort of guidance and observability of the leadership and things like that.
So you want to know that it's there, and then you want to know, is it protected?
And then you want to know, is anything bad happening?
So we've tried to focus a solution around that to make sure that anyone that's getting
into this world has the tools they need.
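The kind of LLM-abuse detection Eric describes, a baseline of roughly a thousand queries a day suddenly jumping to twenty thousand, can be sketched as a simple anomaly check. This is an illustrative toy, not Sysdig's actual detection logic; the threshold factor and baseline window are assumptions, and in practice the counts might come from sources like CloudTrail's Bedrock invocation events.

```python
# Toy "LLMjacking" check: flag a day whose model-invocation count far
# exceeds the recent baseline. The 5x factor is an assumption chosen
# for illustration, not a recommended production threshold.

from statistics import mean

def flag_llm_spike(daily_counts, today_count, factor=5.0):
    """Return True if today's invocation count exceeds `factor` times the baseline mean."""
    if not daily_counts:
        return False  # no baseline yet; nothing to compare against
    baseline = mean(daily_counts)
    return today_count > baseline * factor

# Eric's example: ~1,000 queries a day, suddenly 20,000.
history = [950, 1020, 1100, 980, 1005]
print(flag_llm_spike(history, 20_000))  # an obvious spike
print(flag_llm_spike(history, 1_200))   # normal day-to-day variance
```

A real system would also need per-identity baselines and alert routing; the point here is just that the signal Eric describes is a large multiple of normal volume, which is cheap to compute continuously.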
In the cloud, every second counts.
Sysdig stops cloud attacks in real time by instantly detecting changes in risk with
runtime insights and open source Falco.
They correlate signals across workloads, identities, and services to uncover hidden attack paths and prioritize the risks that matter most.
Discover more at Sysdig, S-Y-S-D-I-G, dot com.
Security practices in AWS, and it goes beyond just AWS, but it's my area of specialty, so we'll talk about that for a bit, I suppose. The attack surface is so wildly broad. There are a couple hundred services that folks can use; you're probably not using most of them if you're like effectively everyone. And the IAM permissions across those different services are Byzantine, to put it gently.
There are so many different gotchas, tips and tricks.
It's forced an entire, I feel, new discipline
around how to approach security
and what practices make the most sense.
Yeah, totally agree.
And every time we add a new service,
there's a whole new surface there
that we need to make sure we're protecting.
And you're right, a lot of folks,
when they get into it at first, and by the way, this is obviously,
as I introduced where Sysdig plays, they think first about the guardrails and the posture
controls. And that's good. That's correct, right? The classic example is I've got an S3 bucket, and I've left it on a public setting. By default,
it shouldn't be that way. But you want to know that. Or you want
to know, am I overly permissive? You mentioned IAM policies and things like that being tricky
or not well baked. That's another key one. So I need to understand where I've got misconfigurations. And a lot of times what's driving that is, of
course, we want to be secure, but I also might be a credit card company, right? And I need to meet some compliance guidance. And
I need to be able to say, this is what I've done. And this is, you know, hey, auditors, here's the
proof, right? And so we need to observe those things. I think the good news is that AWS,
for instance, has CloudTrail, which just logs all of the activity, whether it's users, APIs,
or whatnot. We use
that as a data source. It's an excellent data source until a bad guy turns it off. That's
another thing to watch for. And that can help us understand what's happening. So oftentimes,
posture is the first stop. Vulnerability management, I'm sure you've chatted to others
about this, identifying what are their own vulnerabilities and fixing those. That's another
key first stop for most companies. And then we do believe that's important.
We like to think about that, not just when you're building stuff.
Let's say you're in the world of containers.
Not just scanning the container before you push it out,
but continuously doing that.
Because every day, new vulnerabilities are reported.
That was great yesterday.
I feel really good.
But today, there's a big one.
And we need to do something about it.
So I need to always be watching.
The problem is noise.
There can be a lot of things that pop up on these reports
that I need to deal with.
And how do I know what to deal with, what to prioritize?
And that's the key challenge
for any cloud administrator or security team, you name it.
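The prioritization problem Eric raises, cutting through noisy vulnerability reports, is often handled by correlating severity with runtime context: is the vulnerable package actually loaded, and is the workload internet-facing? The scoring weights below are illustrative assumptions, not Sysdig's actual model.

```python
# Sketch of context-based vulnerability prioritization: rank findings by
# severity plus runtime signals, so an exploitable medium on an exposed,
# in-use workload can outrank a critical in a package that never runs.

SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def priority_score(finding):
    score = SEVERITY_RANK.get(finding["severity"], 0)
    if finding.get("in_use"):           # vulnerable package loaded at runtime
        score += 3
    if finding.get("internet_facing"):  # reachable attack surface
        score += 2
    return score

findings = [
    {"id": "CVE-A", "severity": "critical", "in_use": False, "internet_facing": False},
    {"id": "CVE-B", "severity": "medium", "in_use": True, "internet_facing": True},
]
ranked = sorted(findings, key=priority_score, reverse=True)
print([f["id"] for f in ranked])  # the exposed, in-use medium comes first
```

The design point is the one made above: severity alone is a poor queue order; correlation with what is actually running and reachable is what shrinks the list to something a team can act on.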
You folks have put out a number of reports
showing the accelerating speed of security incidents, particularly in the
cloud. It's shaped some of my thinking on this to, I guess, to articulate something I've sort
of intuitively believed since the beginning, which is that attackers are generally better
at automating than most operations teams. There's still the click ops thing where you use the AWS
console, then lie about it. Whereas with the attackers, they have these things ready to go in scripts
to the point where they can pivot,
they can lateral from once they're into an environment
to different parts of it, to different services,
as fast as computers can execute.
The idea of human response times
being sufficient to guard against this
is fantastical to my mind.
It's crazy how fast they can move.
And one of the things that if you think about it,
like they can just put something together
and start running it.
There's no, they don't worry about bug testing
or QA of what they're doing.
So they get to skip all of those steps
that the rest of us in the software world
go through to make sure that, hey, this software's solid.
So in other words, they're just automating, automating.
There's even botnets for rent and things like that
where you can just start doing things.
And yeah, we've seen it takes 10 minutes or less, right?
To get from the time I'm in
to the time I start doing damage.
And that's why we had put out,
I think about this time last year,
this idea that a new benchmark is needed.
You know, you really need to be able to ask,
can I respond effectively in 10 minutes?
And if you reverse engineer that, what that means is I need to be able to detect things as they're happening
in real time. And by the way, to try and filter through the noise and get to the true risk of
this thing that's happening. So how fast can I do that? Let's say I want to do that in five seconds,
right? And a lot of tools are not able to deliver that. We have always been good at that partly
because we're basically a streaming detection engine.
It's built on our open source Falco heritage, right?
That it's watching things as it comes.
If it detects something based on the detection policies,
it's going to pop you something to say,
hey, this is happening right now.
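The streaming pattern Eric describes, evaluating each event against detection policies as it arrives rather than querying stored logs later, can be sketched in a few lines. This is an illustrative toy, not Falco's actual engine: real Falco rules are written in a YAML condition language over syscall and cloud events, while here the rules are plain Python predicates.

```python
# Toy streaming detector: rules fire the moment a matching event flows
# through, which is what enables the "detect in seconds" posture rather
# than batch log analysis after the fact.

import time

RULES = [
    ("Shell spawned in container",
     lambda e: e.get("proc") in ("bash", "sh") and e.get("container")),
    ("CloudTrail logging disabled",
     lambda e: e.get("api_call") == "StopLogging"),
]

def detect(event_stream):
    """Yield (rule_name, event, detected_at) the moment a rule matches."""
    for event in event_stream:
        for name, predicate in RULES:
            if predicate(event):
                yield name, event, time.time()

events = [
    {"proc": "nginx", "container": "web-1"},          # benign
    {"api_call": "StopLogging", "user": "attacker"},  # defense evasion
    {"proc": "bash", "container": "web-1"},           # interactive shell
]
for name, event, _ in detect(events):
    print(name)
```

Note the second rule: turning off CloudTrail is exactly the "bad guy disables your data source" case mentioned earlier, and it is itself a loggable event worth alerting on.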
Lately, we can also correlate that with other things.
Like this is happening and there's a misconfiguration
and there's too many permissions for this particular user or machine identity or whatnot. So we want to
know, we want to then be able to sort of investigate. And the challenge with investigating
is if I have to go to some deep, dark well of a... not that I have anything against SIEMs, or
these places where we're storing a lot of log information and so on. But that might
take a while. I need information right now to understand what's the threat. Is it really a
threat? And then I want to be able to invoke a response. So our challenge to the industry was
detect quickly, five seconds, investigate, take five minutes to do that, and then five minutes
to enact your response to contain this issue. Then you're at least in that 10-minute window
that we know is sort of the point of no
return often. Does a human need to be in the loop in that 10 minutes? And this is where the industry
is headed. Typically, we see, and what I mean by where the industry is headed, there's more and
more automation being put into play. That's coming from Sysdig, that's coming from tools at AWS,
there are other external tools, organizations that are focused on building workbooks. If this,
then that, right? And just deploy runbooks. What I'll say is there's often still the desire to
have a human go, yeah, this is a thing, go. So that you're not enacting automation in something
that ended up being a false positive. So I think you want to get to a fully automated response,
but you also want to have a sanity check in between, or at least that's what I'm still seeing right now. So you'll
see a balance of both in terms of being able to sort of get to the heart of the matter. The better
we are at correlating multiple things and saying, yes, this is a real threat and sort of doing that
for you, presenting with all the evidence, but doing it for you, then I think,
you know, the more confidence that our customers or the industry gets in our ability to do that
and do that effectively, then I think this automated response activity could be good.
Some of the things we already do, and the good news with something like a Kubernetes environment
is Kubernetes is kind of orchestrating all this thing. Let's say, Corey, you get into a container
and you're doing some bad stuff. All my bad stuff happens in containers, but please continue. We're going to
automatically kill that container. Boom, down goes Frasier, right? The thing about it is that
Kubernetes, your app doesn't have to die just because we took a security action. It'll spin
up another one. Of course, you're hoping that the threat guy isn't following all your containers
around. So some things can be done like that.
If it's bad enough, you want that to just stop.
Because as soon as I'm in that container, then that's when all the lateral movement
things can start to take place and I get into bigger and badder things.
And so some of that is already possible and our customers do it.
But again, while everyone wants automation, they also want a quick human sanity check.
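The balance Eric lands on, automatic containment when confidence is high, a human sanity check otherwise, can be sketched as a simple response policy. The confidence score and action names are assumptions for illustration, not Sysdig's actual policy format; the automated "kill" would be something like a pod deletion, which the Kubernetes Deployment then replaces, as described above.

```python
# Toy response policy: auto-contain only high-confidence detections,
# route everything else to a human so a false positive never triggers
# destructive automation. The 0.9 threshold is an illustrative choice.

def choose_response(alert, auto_threshold=0.9):
    """Return (action, reason) for a runtime alert."""
    if alert["confidence"] >= auto_threshold:
        # e.g. delete the offending pod; Kubernetes spins up a replacement,
        # so the app doesn't die just because we took a security action
        return ("kill_container", "high-confidence detection, auto-contain")
    return ("notify_human", "possible false positive, ask before acting")

print(choose_response({"rule": "crypto-miner in container", "confidence": 0.97})[0])
print(choose_response({"rule": "unusual outbound DNS", "confidence": 0.55})[0])
```

Tuning the threshold is exactly the trust question discussed here: as correlation gets better at proving "this is a real threat," more of the response can safely move to the automated side.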
I'm curious to see how this winds up shaping the, I guess, the future of the space. Because as
response times get faster, it feels like you're almost locked into an arms race with attackers.
Oh, they can respond in 10 minutes. Well, we need to then be in and out within five. And
it winds up with a constant game of one-upsmanship.
It's true. And things are constantly evolving. And that's why cloud security is constantly evolving. And oftentimes you take a step back and go,
wow, are we not there yet? Are we not there where we've outwitted the bad guys? And I don't think
it'll ever happen. And that's where this art of having threat researchers who always understand
the latest thing, things like the MITRE ATT&CK framework. It's like, here's the things you need to know. Here's the latest tactics,
techniques, and procedures that adversaries are using. It's a never-ending battle.
And sometimes we focus on interesting things and think we're going to be good.
And by that, I mean, I fixed all of my known vulnerabilities, I should be good, right? But
sometimes the vulnerability is not the thing.
The thing might be, you've got exposed credentials that ended up in a container or
something, and it's out on a repository. So now it's really more that it had nothing to do with a
known vulnerability, just the fact that your credentials got exposed, and now somebody's
getting into the environment. It's very hard to prevent against that, except for you want to have
things like multi-factor authentication and so on. And so that's where as a company that's trying to
tackle the entirety of this picture the best we can. Yes, posture controls, yes, vulnerability
management, but also identifying this whole IAM space. Are you giving too many permissions?
Because the cool thing about where we sit from Sysdig and runtime
is that we can see the activity.
That activity insight, what we call runtime insights,
can do a few things.
One, obviously, it's where we're throwing up alerts about real threats.
It's also where we go, hey, in that running of this service
or this application, Corey doesn't need all these permissions.
Actually, all he needs is this, and you've given him this.
And we'll help you see that. And we'll even give you a, hey, go paste this into AWS to
kind of make it so, right, to lock him down. I think sometimes there's a fear of, you know,
I'll just give a little bit more than is needed, just in case, right? That's
where your exposure lies. It's also the nature of the way these things work. Okay,
I give it the permission I think it needs. Denied. Okay, I'll broaden it a bit. Denied. After three or four
repetitions of that: screw it, I'll give it everything and I'll come back and fix it later.
And later never happens. Exactly. And that's why we try to marry this idea of, okay, yeah, we see,
we see how this works and what's running and what's working or not working. And we'll profile
workloads and help you try and understand that. Right identity is a big one. And again, we already talked about the real-time nature of trying to identify things quickly so you
can stay ahead. And I think that's just going to be a constant arms race, which I think is the word
you used. So it's a tricky business. But again, the good news is that the more of these
things that you can put together and identify, yes, this is a real thing.
This is in use.
This is facing the internet.
It has a configuration.
So this idea of correlation really does help identify whether things will be a real risk or not.
And I think we in the industry, as a team,
are getting better and better at painting that picture,
showing you even like,
here is a possible attack path.
Did you know that if somebody gets in
this thing that's misconfigured, all of this, the keys to the kingdom or the family jewels or
whatever we call it are all back here, right? The old school data center approach of M&M security,
where it's a hard candy shell, soft chocolatey center, doesn't work anymore. Okay, we stop
attackers at the firewall. Like the cloud equivalent would be that once you have access to an EC2
instance, if that EC2 instance role has access to do all kinds of things it doesn't need to do,
well, great. This is why the idea of least permission has taken root the way that it has.
It's the right way to do it. It's just obnoxious to get that configured properly.
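The least-permission grind described above, broadening a policy until the denials stop, has an inverse: derive a scoped policy from what the workload was actually observed doing. A minimal sketch follows; the observed action list and ARNs are made up for illustration, and Sysdig's real suggestion flow (or AWS's IAM Access Analyzer policy generation) would be more involved.

```python
# Sketch: build an IAM policy document granting exactly the (action,
# resource) pairs a workload was seen using at runtime -- the opposite of
# "give it everything and fix it later."

import json

def scoped_policy(observed):
    """Group observed (action, resource) pairs into one Allow statement per resource."""
    by_resource = {}
    for action, resource in sorted(set(observed)):  # dedupe and stabilize order
        by_resource.setdefault(resource, []).append(action)
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": actions, "Resource": resource}
            for resource, actions in by_resource.items()
        ],
    }

seen = [
    ("s3:GetObject", "arn:aws:s3:::app-assets/*"),
    ("s3:GetObject", "arn:aws:s3:::app-assets/*"),   # duplicates collapse
    ("sqs:SendMessage", "arn:aws:sqs:us-east-1:123456789012:jobs"),
]
print(json.dumps(scoped_policy(seen), indent=2))
```

The caveat with any observation-based approach is coverage: an action the workload legitimately needs but hasn't exercised yet during the observation window would be missing, which is why a review step before pasting the policy in still matters.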
It's very difficult. It's very difficult. And I know that AWS has a tool like IAM Access Analyzer,
I think they call it,
that's trying to help with that.
So it's important.
And I love what you just said
because in some of our presentations,
we'll show a picture of the castle
with the moat and the drawbridge.
It's like, this is the old days
where there was basically one way in
and one way out.
Now it's much more like an amusement park
where there's a lot of ways to get in, get out.
And so, you know, you've got to be able to guard that appropriately, but a lock on the
door is not enough. You also need the security cameras so that once they're in, are they just
happily riding the merry-go-round or are they being destructive in some way, shape or form?
And then, you know, have obviously having someone to be able to respond to those things. So the
analogies can go on and on and on, but the cloud is different.
We know it. It's changed a lot of dynamics.
As companies have transitioned from your more traditional on-prem,
they've tried to drag some of the old ways with them,
but it's not always working in the same way as when we just owned the bare metal
and we owned the firewall and we were good.
And so the practices of cloud security are,
there's a lot of things that go into it
and it's continuously evolving by itself
thanks to a lot of the activities of people out there.
And again, now we want to be able to use things like AI
to give what we might call the defenders,
us defending our cloud, a leg up. Because why not, right? You can't bring a knife to a gunfight. If the adversaries
are using AI, we need to use it in a similar fashion. And so just making it work for you
correctly is the hard part. We've spent a couple of years getting it right. And we've released
what we've released around helping you understand what's happening in
your real-time threats.
And it wasn't easy to get it to answer in that swim lane and to answer in ways that
make sense, that aren't just nonsense.
I hate that word hallucination because it happens, right?
But it's like you can't just get a bunch of gobbledygook.
When I'm confidently wrong, I'm starting to call it a hallucination now. It sounds a lot better than bullshitting.
That's right, yeah. And that's the thing you've got to work at in order for people
to then put trust, I would say trust, in the tool. And I think, again, we've done a
pretty decent job with that. So again, you know, our advice, if we think about AWS as your,
if that's your platform of choice,
is to one, stay on top of all the things
that they're up to.
Look, they have their own security tooling
and it's getting better and better.
The good news is that things like GuardDuty,
for instance, at AWS,
it's getting better and better
at providing more and more detections.
We're using that as a data source in some instances. If you're using it, great. Add that into the mix. A lot of times,
the insights that they can provide are valuable, and we want to be able to leverage those things.
Or conversely, if we've identified some good information and a customer is using Security Hub,
we can send that information back to AWS Security Hub.
I can see it all in one place.
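Sending third-party detections into Security Hub so they show up "all in one place" means emitting them in the AWS Security Finding Format (ASFF). A minimal sketch follows; the IDs, ARNs, and account number are placeholders, and the actual submission step (Security Hub's BatchImportFindings API, e.g. via a boto3 securityhub client) is omitted so the sketch runs without credentials.

```python
# Sketch: shape a runtime detection into an ASFF finding. Field names
# here follow the ASFF schema; the values are illustrative placeholders.

from datetime import datetime, timezone

def to_asff(detection, account_id, region):
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "SchemaVersion": "2018-10-08",
        "Id": detection["id"],
        "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
        "GeneratorId": "runtime-detector",   # hypothetical generator name
        "AwsAccountId": account_id,
        "Types": ["TTPs/Defense Evasion"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": detection["severity"]},
        "Title": detection["title"],
        "Description": detection["title"],
        "Resources": [{"Type": "Container", "Id": detection["resource"]}],
    }

finding = to_asff(
    {"id": "det-001", "severity": "HIGH", "title": "Shell in container",
     "resource": "web-1"},
    account_id="123456789012", region="us-east-1",
)
print(finding["Severity"]["Label"])
```

Normalizing to one schema is what makes the "single pane" work: Security Hub (and, downstream, Security Lake) can then correlate native and third-party findings uniformly.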
And of course, the latest ground
has been the Security Lake part of AWS.
I don't know if you've done any sessions
in and around Security Lake,
but there's this move of,
let's get all of my security data in one place.
And then I can do all kinds of interesting things with it.
You know, interesting being hopefully
identifying threats proactively and so on.
And so we try to participate in that world in a sense to integrate with all these things.
And it takes some work, but it's worth it because this is certainly where everyone's
modernizing and doing a lot of their application development these days.
Really, I've been doing this long enough.
I think you have too.
You now see this flip, right, where people were dabbling their toes in the cloud, and now it's sort of flipped the other way. But the one thing that I think is constant in this entire space is that everything's always changing, and the rate of
change only increases. There's a calculus lesson in there somewhere for folks better at paying
attention in class than I was. I was never good at the math, so I'll take your word for it.
I really want to thank you for taking the time to speak with me. If people want to learn more,
where's the best place for them to find you? Yeah, obviously the easiest thing to do is to just jump on over to sysdig.com.
That'll have all of the things you need
to get the gist of what we do,
how we can help.
Hey, since we're here kind of focused
in and around the world of AWS,
one of the pages I help build
is sysdig.com slash AWS.
I tried to make it as simple as possible
for things like this, right?
It's handy for podcasts.
Like every once in a while,
someone will give a URL,
it's like seven levels deep with hyphens and dashes.
And they do startup spellings of common words
with different names and no vowels.
Yeah, thank you for not doing that.
You know, there's not going to be a link that someone can click on.
And so, yeah, that's where having a slash AWS works.
But there will be in the show notes.
That's what it's there for.
Brilliant. Yeah.
So all good.
I think, you know, there's so much going into this world of cloud security.
It's our goal to stay on top of it, to leverage as much insight that we can into what's happening
like right now and to give our customers the ability to react fast to these things.
And so, yeah, I'm looking forward to this coming year and all the things that it has
to hold.
You mentioned earlier a threat report.
I think we just issued a new one.
It has a bit of what we saw and what we think is going to happen next.
And not to create a bunch of fear and doubt in people's minds, but we all know staying
ahead of threats is a key part of running a successful business and not being in the
news headlines.
We had all hoped to remain so lucky. Thanks again for your time. I really appreciate it.
Indeed, Corey. Thank you so much.
Eric Carter, Director of Product Marketing at Sysdig. I'm cloud economist Corey Quinn,
and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star
review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star
review on your podcast platform of choice, along with an angry, insulting comment that makes no sense because you didn't follow
the Gen AI that wrote it for you.