Screaming in the Cloud - Exposing the Latest Cloud Threats with Anna Belak

Episode Date: August 3, 2023

Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig, joins Corey on Screaming in the Cloud to discuss the findings in this year's newly released Sysdig Global Cloud Threat Report. Anna explains the challenges that teams face in ensuring their cloud is truly secure, including quantity of data versus quality, automation, and more. Corey and Anna also discuss how much faster attacks are able to occur, and Anna gives practical insights into what can be done to make your cloud environment more secure.

About Anna
Anna has nearly ten years of experience researching and advising organizations on cloud adoption with a focus on security best practices. As a Gartner analyst, Anna spent six years helping more than 500 enterprises with vulnerability management, security monitoring, and DevSecOps initiatives. Anna's research and talks have been used to transform organizations' IT strategies, and her research agenda helped to shape markets. Anna is the Director of the Office of Cybersecurity Strategy at Sysdig, using her deep understanding of the security industry to help IT professionals succeed in their cloud-native journey. Anna holds a PhD in Materials Engineering from the University of Michigan, where she developed computational methods to study solar cells and rechargeable batteries.

Links Referenced:
Sysdig: https://sysdig.com/
Sysdig Global Cloud Threat Report: https://www.sysdig.com/2023threatreport
duckbillgroup.com: https://duckbillgroup.com

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Welcome to Screaming in the Cloud. I'm Corey Quinn.
Starting point is 00:00:33 This promoted guest episode is brought to us by our friends over at Sysdig. And once again, I am pleased to welcome Anna Belak, whose title has changed since last we spoke to Director of the Office of Cybersecurity Strategy at Sysdig. Anna, welcome back and congratulations on all the adjectives. Thank you so much. It's always a pleasure to hang out with you. So we are here today to talk about a thing that has been written and we're in that weird time thing where while we're discussing it at the moment, it's not yet public, but will be when this releases. The Sysdig Global Cloud Threat Report, which I am a fan of. I like quite a bit the things it talks about and the ways it gets me thinking.
Starting point is 00:01:18 There are things that I wind up agreeing with. There are things I wind up disagreeing with. And honestly, that makes it an awful lot of fun. But let's start with the whole, I guess, executive summary version of this. What is a global cloud threat report? And because for me, it seems like there's an argument to be made for just putting all three of the big hyperscale clouds on it and calling it a day because they're all threats to somebody. To be fair, we didn't think of the cloud providers themselves as the threats, but that's a hot take. Well, an even hotter one is what I've seen out of Azure lately with their complete lack of security issues. And the attacker just somehow got a Microsoft signing key and the rest. I mean, at this point, I feel like Charlie Bell was brought in from Amazon to head cybersecurity and spent the last two years trapped in the executive washroom or something.
Starting point is 00:02:03 But I can't prove it, of course. No, you target the idea of threats in a different direction toward what people more commonly think of as threats. Yeah, the bad guys. I mean, I would say that this is the reason you need a third-party security solution. Buy my thing, blah, blah, blah. Yeah, so we have a threat research team, like I think most self-respecting security vendors these days do. Ours, of course, is the best of them all.
Starting point is 00:02:27 And they do all kinds of proactive and reactive research of what the bad guys are up to so that we can help our customers detect the bad guys should they become their victims. So there was a previous version of this report, and then you've, in longstanding tradition, decided to go ahead and update it. Unlike many of the terrible professors I've had in years past, it's not just slap a new version number, change the answers to some things, and force all the students to buy a new copy of the book every year because that's your retirement plan. You actually have updated data. What are the big changes you've seen since the previous incarnation of this? That is true. In fact, we start from scratch more or less every year, so all the data in this report is brand new. Obviously, it builds on our prior research. I'll say one clearly connected piece of data is last year we did a supply chain
Starting point is 00:03:17 story that talked about the bad stuff you can find in Docker Hub. This time we up-leveled that and we actually looked deeper into the nature of said bad stuff and how one might identify that an image is bad. And we found that 10% of the malware scary things inside images actually can't be detected by most of your static tools. So if you're thinking like static analysis of any kind, SCA, vulnerability scanning, just like looking at the artifact itself before it's deployed, you actually wouldn't know it was bad. So that's a pretty cool change, I would say.
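To make the static-analysis gap Anna describes concrete, here is a minimal Python sketch (with a placeholder indicator list) of what a purely static image check can see: it hashes every file baked into an image's layers and compares them against known-bad hashes. Anything the container only downloads or generates after it starts running never appears in those layers at all, which is roughly the slice that static tooling misses.

```python
import hashlib
import subprocess
import tarfile

# Placeholder entries; real indicators would come from a threat intel feed.
KNOWN_BAD_SHA256 = {"0123456789abcdef" * 4}


def image_files(image_tar: str):
    """Yield (path, data) for every regular file baked into the image's layers."""
    with tarfile.open(image_tar) as outer:
        for member in outer:
            if not member.isfile():
                continue
            blob = outer.extractfile(member)
            try:
                # Each layer inside `docker save` output is itself a tarball;
                # manifests and config JSON blobs are not, so skip those.
                layer = tarfile.open(fileobj=blob)
            except tarfile.ReadError:
                continue
            for entry in layer:
                if entry.isfile():
                    yield entry.name, layer.extractfile(entry).read()


def static_scan(image: str) -> list[str]:
    """Flag files whose hashes match known indicators. Anything the container
    downloads or builds only after it starts never shows up here at all."""
    subprocess.run(["docker", "save", image, "-o", "/tmp/image.tar"], check=True)
    return [
        path
        for path, data in image_files("/tmp/image.tar")
        if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256
    ]


if __name__ == "__main__":
    print(static_scan("alpine:latest"))
```

That is the argument for pairing static scanning with some form of runtime detection rather than treating either one as sufficient on its own.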
Starting point is 00:03:50 It is. I also say what's going to probably sound like a throwaway joke, but I assure you it's not, where you're right, there is a lot of bad stuff on Docker Hub, and part of the challenge is disambiguating malicious bad and shitty bad. But there are serious security concerns to code that is not intended to be awful, but it is anyway. And as a result, it leads to something that this report gets into a fair bit, which is the ideas of effectively lateraling from one vulnerability to another vulnerability to another vulnerability to the actual story. I mean, Capital One was a great example of this. They didn't do anything that was outright
Starting point is 00:04:29 negligent, like leaving an S3 bucket open. It was a determined, sophisticated attacker who went from one mistake to one mistake to one mistake to, boom, keys to the kingdom. And that, at least, is a little bit more understandable, even if it's not great when it's your bank. Yeah, I will point out that in the 10% of these things are really bad department, it was 10% of all things that were actually really bad. So there were many things that were just shitty, but we had pared it down to the things that were definitely malicious. And then 10% of those things you could only identify if you had some sort of runtime analysis.
Starting point is 00:05:02 Now, runtime analysis can be a lot of different things. It's just that if you're relying on preventive controls, you might have a bad time, like one times out of 10, at least. But to your point about the kind of chaining things together, I think that's actually the key, right? Like that's the most interesting moment is like, which things can they grab onto? And then where can they pivot? Because it's not like you barge in, open the door and like you've won.
Starting point is 00:05:26 Like there's multiple steps to this process that are sometimes actually quite nuanced. And I'll call out that like one of the other findings we got this year that was pretty cool is that the time it takes to get through those steps is very short. There's a data point from Mandiant that says that the average dwell time
Starting point is 00:05:41 for an attacker is 16 days, so like two weeks maybe. And in our data, the average dwell time for the attacks we saw was more like 10 minutes. And that is going to be notable for folks. Like there are times where I have in years past, not recently, mind you, I have, oh, I'm trying to set something up. I'm just going to open this port to the internet so I can access it from where I am right now. And I'll go back and shut it in a couple hours.
Starting point is 00:06:08 There was a time that that was generally okay. These days, everything happens so rapidly. I mean, I've sat there with a stopwatch after intentionally committing AWS credentials to Jithub. Yes, that's how it's pronounced. And 22 seconds until the first probing attempt started hitting, which was basically impressively fast. Like the last thing in the entire sequence was, and then I got an alert from Amazon that something might have been up.
Starting point is 00:06:31 At which point it is too late. But it's a hard problem and I get it. People don't really appreciate just how quickly some of these things can evolve. Yeah, and I think the main reason from at least what we see is that the bad guys are into the cloud thing, right? Like we good guys love the automation. We love the programmability. We love the immutable infrastructure. Like all this stuff is awesome
Starting point is 00:06:53 and it's enabling us to deliver cool products faster to our customers and make more money. But the bad guys are using all the same benefits to perpetrate their evil crimes. So they're building automation, they're stringing cool things together. Like they have scripts that they've run that basically just scan whatever's out there to see what new things have shown up. And they also have scripts for reconnaissance
Starting point is 00:07:14 that will just send a message back to them through Telegram or WhatsApp, letting them know like, hey, I've been running, you know, for however long and I see a cool thing you may be able to use. Then the human being shows up and they're like, all right, let's see what I can do with this credential or with this misconfiguration or what have you. So a lot of their initial kind of discovery into what they can get at is heavily automated, which is why it's so fast. I feel like on some level, this is an unpleasant, sharp shock for an awful lot of executives, because wait, what do you mean? Attackers can move that quickly. Our crap ass engineering teams can't get anything released in less than three sprints.
Starting point is 00:07:50 What gives? And I don't think people have a real conception of just how fast bad actors are capable of moving. I think we said actually something like this last year, but this is a business for them, right? They're trying to make money and it's a little bleak to think about it, but these guys have a day job and this is it. Like our guys have a day job that's shipping code. And then they're supposed to also do security. The bad guys just have a day job of breaking your code and stealing your stuff. And on some level, it feels like you have a choice to make in which side you go at. It's like, which one of those do I spend more time in meetings with? And maybe that's not the most legitimate way to pick a job. Ethics do come into play. But yeah, it takes a certain similar mindset on some level to be able to understand just how the security landscape looks from an attacker's point of view.
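A practical aside on the speed problem: since leaked keys get probed within seconds, one of the few mitigations that operates on the right timescale is keeping credentials out of public repositories in the first place. Below is a hedged sketch of a Git pre-commit hook that refuses to commit anything matching common AWS key patterns; the patterns and file handling are simplified assumptions, not a complete secret scanner.

```python
#!/usr/bin/env python3
"""Sketch of a pre-commit hook that refuses to commit likely AWS credentials."""
import re
import subprocess
import sys

# AWS access key IDs follow a well-known prefix-plus-16-character shape; the
# secret-key pattern below is a loose heuristic and will produce false positives.
PATTERNS = [
    re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    re.compile(r"aws_secret_access_key\s*[:=]\s*\S+", re.IGNORECASE),
]


def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def main() -> int:
    flagged = []
    for path in staged_files():
        try:
            with open(path, errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue
        if any(pattern.search(text) for pattern in PATTERNS):
            flagged.append(path)
    if flagged:
        print("Possible AWS credentials staged in:", ", ".join(flagged))
        return 1  # non-zero exit makes git abort the commit
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Dedicated tools such as git-secrets or GitHub's push protection do this more thoroughly; the point is that the check has to happen before the push, because cleanup after the fact is racing a 22-second clock.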
Starting point is 00:08:35 I bet the bad guys have meetings too, actually. You know, you're probably right. Can you imagine the actual corporate life of a criminal syndicate? That's a sitcom in there that just needs to happen. But again, I'm sorry, I shouldn't talk about that. We're on a writer's strike this week. So there's that. One thing that came out of the report that makes perfect sense, and I've heard about it, but haven't seen it myself, and I wanted to dive into on this, specifically that automation has been weaponized in the cloud. Now, it's easy to misinterpret that the first time you read it, like I did, as, oh, you mean the bad guys have discovered the magic of shell scripts? No, no kidding.
Starting point is 00:09:12 It's more than that. You have reports of people using things like Clownformation to stand up resources that are then used to attack the rest of the infrastructure. And it's, yeah, it makes perfect sense. Like, back in the data center days, it was a very determined attacker that went through the process of getting an evil server stuffed into a rack somewhere. But it's an API call away in cloud. I'm surprised we haven't seen this before.
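One way to notice that kind of weaponized automation is to watch the control plane itself. The sketch below (assuming boto3 and CloudTrail read access) lists recent CloudFormation CreateStack calls so that a stack nobody recognizes stands out; in practice you would feed these events into existing alerting rather than printing them.

```python
import datetime as dt

import boto3


def recent_stack_creations(hours: int = 24):
    """Yield (time, caller, event_id) for CloudFormation CreateStack calls."""
    cloudtrail = boto3.client("cloudtrail")
    end = dt.datetime.now(dt.timezone.utc)
    start = end - dt.timedelta(hours=hours)
    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "CreateStack"}
        ],
        StartTime=start,
        EndTime=end,
    )
    for page in pages:
        for event in page["Events"]:
            yield event["EventTime"], event.get("Username", "unknown"), event["EventId"]


if __name__ == "__main__":
    for when, who, event_id in recent_stack_creations():
        print(f"{when}  {who}  {event_id}")
```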
Starting point is 00:09:37 Yeah, we probably have. I don't know if we've documented it before. And sometimes it's hard to know that that's what's happening, right? I will say that both of those things are true, right? Like, the shell scripts are definitely there. And to your point about how long it takes, you know, to stopwatch these things, on the short end of our dwell time data set, it's zero seconds. It's zero seconds from like A to B because it's just a script.
Starting point is 00:09:57 And that's not surprising. But the comment about CloudFormation specifically, right, is we're talking about people kind of figuring out how to create policy in cloud to prevent bad stuff from happening because they're reading all the best practices, ebooks and whatever, watching the YouTube videos. And so you understand that you can, say, write policy to prevent users from doing certain things. But sometimes we forget that, like, if you don't want a user to be able to attach user policy to something, if you didn't write the rule that says you also can't do that in CloudFormation, then suddenly you can't do it in command line, but you can do it in CloudFormation. So you may think that you have security policy to prevent those same tools from being used against you and deploying evil things, but you don't,
Starting point is 00:10:45 because you didn't explicitly say that you can't deploy evil things with this tool and that tool and that other tool and this other way, because there are so many ways to do things, right? That's part of the weird thing too
Starting point is 00:10:54 is that back when I was doing the sysadmin dance, it was a matter of taking a bunch of tools that did one thing well or, you know, aspirationally well and then chaining them together to achieve things.
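Circling back to Anna's CloudFormation example for a moment: one way to hunt for the gap she describes, where a rule was written for one path but not another, is to ask IAM directly which sensitive actions a principal is still allowed to perform. This is a hedged sketch using the IAM policy simulator; the principal ARN and action list are illustrative, and the simulator will not account for a separate CloudFormation service role if one is in use.

```python
import boto3

# Actions worth auditing; adjust to taste. cloudformation:CreateStack is the
# "other door" from the example above.
SENSITIVE_ACTIONS = [
    "iam:AttachUserPolicy",
    "iam:PutUserPolicy",
    "iam:PassRole",
    "cloudformation:CreateStack",
]


def allowed_sensitive_actions(principal_arn: str) -> list[str]:
    """Return the sensitive actions this principal is still allowed to perform."""
    iam = boto3.client("iam")
    result = iam.simulate_principal_policy(
        PolicySourceArn=principal_arn,
        ActionNames=SENSITIVE_ACTIONS,
    )
    return [
        evaluation["EvalActionName"]
        for evaluation in result["EvaluationResults"]
        if evaluation["EvalDecision"] == "allowed"
    ]


if __name__ == "__main__":
    # Hypothetical user ARN; substitute a real principal from your own account.
    print(allowed_sensitive_actions("arn:aws:iam::123456789012:user/some-admin"))
```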
Starting point is 00:11:04 Increasingly, it feels like that's what cloud providers have become, where they have all these different services with different capabilities. One of the reasons that I now have a three-part article series, each one titled 17 Ways to Run Containers on AWS, adding up for a grand total of 51 different AWS services you can use to run containers with, is not just there to make fun of the duplication of efforts,
Starting point is 00:11:25 because they're not all like that, but rather each container can have bad acting behaviors inside of it. And are you monitoring what's going on across that entire threat landscape? People were caught flat-footed to discover that, wait, Lambda functions can run malware? Wow. Yes, effectively anything that can bang two bits together and return a result is capable of running a lot of these malware packages. It's something that I'm not sure a
Starting point is 00:11:52 number of, shall we say, non-forward-looking security teams have really wrapped their heads around yet. Yeah, I think that's fair. And I mean, I always want to be a little sympathetic to the folks in the trenches, because it's really hard to know all the 51 ways to run containers in the cloud. And then to be like, oh, 51 ways to run malicious containers in the cloud. How do I prevent all of them when you have a day job? One point that it makes in the report here is that about who the attacks seem to be targeting. And this is my own level of confusion that I imagine we can probably wind up eviscerating neatly. Back when I was running random servers for me, for various
Starting point is 00:12:32 projects I was working on, or working at small companies, there was a school of thought in some quarters that, well, security is not that important to us. We don't have any interesting secrets. Nobody actually cares. This was untrue because a lot of these things are running on autopilot. They don't have enough insight to know that you're boring and you have to defend just like everyone else does. But then you see what can only be described as dumb attacks. Like there was the attack on Twitter a few years ago where a bunch of influential accounts tweeted about some Bitcoin scam. It's like you realize with the access you had, you had so many other opportunities to make orders of magnitude more money if you want to go down that path or to start geopolitical conflict or all kinds of other stuff. I have to wonder how much these days
Starting point is 00:13:18 are attacks targeted versus, well, we found an endpoint that doesn't seem to be very well secured. We're going to just exploit it. Yeah, so that's a correct intuition, I think. We see tons of opportunistic attacks, like nonstop. But it's just like hitting everything. Honeypots, real accounts, our accounts, your accounts, like everything. Many of them are pretty easy to prevent, honestly, because it's like just mundane stuff, whatever.
Starting point is 00:13:44 So if you have decent security hygiene, it's not a big deal. So I wouldn't say that you're safe if you're not special because none of us are safe and none of us are that special. But what we've done here is we actually deliberately wanted to see what would be attacked as a fraction, right? So we deployed a honey net that was indicative of what a financial org would look like or what a healthcare org would look like to see who would bite, right? And what we expected to see is that we probably, we thought the finance would be high because obviously that's always top tier. But for example, we thought that people would go for defense more or for healthcare. And we didn't
Starting point is 00:14:17 see that. We only saw like 5%, I think, very small numbers for healthcare and defense and very high numbers for financial services and telcos, like around 30% apiece, right? And so it's a little curious, right? Because I can theorize as to why this is. Like telcos and finance, obviously that's where the money is, like great pivots for fraud and all this other stuff, right? Defense, again, maybe people don't think defense is in cloud. Healthcare arguably isn't that much in cloud, right? Like a lot of healthcare stuff is on premise. So if you see healthcare in cloud, maybe you like think it's a honeypot
Starting point is 00:14:49 or you don't think it's worth your time, you know, whatever. Attacker logic is also weird. But yeah, we were deliberately trying to see which verticals were the most attractive to these folks. So these attacks are in fact targeted because the victim looked like the kind of thing they should be looking for,
Starting point is 00:15:02 or if they were into that. And how does it look in that context? I mean, part of me secretly suspects that an awful lot of terrible startup names where they're so frugal they don't buy vowels is a defense mechanism because you wind up with something that looks like a cat falling on a keyboard as a company name. No attacker's going to know what the hell your company does, so therefore they're not going to target you specifically. Clearly, that's not quite how it works, but what are those signals that someone gets into an environment that says, ah, this is clearly healthcare versus telco versus something else? Right. I think you would be right if you had like HHH, IJK as your company name. You probably wouldn't see a lot of targeted attacks.
Starting point is 00:15:37 But where we're saying either the company name looks like a provider of that kind and slash, that actually contains some sort of credential or data inside the honeypot that appears to be like a credential for a certain kind of thing. So really just creatively naming things so that they look delicious. For a long time, it felt like, at least from a cloud perspective, because this is how it manifested,
Starting point is 00:15:58 the primary purpose of exploiting a company's cloud environment was to attempt to mine cryptocurrency within it. And I'm not sure if that was ever the actual primary approach, or rather that was just the approach that people noticed because suddenly their AWS bill looks a lot more like a telephone number than it did yesterday. So they can, as a result, see that it's happening. Are these attacks these days effectively just to mine Bitcoin, if you'll
Starting point is 00:16:25 pardon the oversimplification? Or are they focused more on doing more damage in different ways? The analyst answer, it depends. So again, to your point about how no one's safe, I think most attacks by volume are going to be opportunistic attacks for people who just want money. So the easiest way right now to get money is to mine coins and then sell those coins, right? Obviously, if you have the infrastructure as a bad guy to get money in other ways, like you could do extortion through ransomware, you might pursue that. But the overhead on ransomware is like really high. So most people would rather not if they can get money other ways. Now, because by volume, APTs or advanced persistent threats are much smaller than all the opportunistic guys, they may seem like they're not there or we don't see them.
Starting point is 00:17:09 They're also usually better at attacking people than the opportunistic guys who will just spam everybody and see what they get, right? But even folks who are not necessarily nation states, right? Like, we see a lot of attacks that probably aren't nation states, but they're quite sophisticated because we see them moving through the environment and pivoting and creating things and leveraging things that are quite interesting, right? So one example is that they might go for a vulnerable EC2 instance, right? Because maybe you have Log4j or whatever you have exposed. And then once they're there, they look around to see what else they can get. So they'll pivot to the cloud control plane, if it's possible, or they'll try to. And then in a real scenario
Starting point is 00:17:47 we actually saw in an attack, they found a Terraform state file. So somebody was using Terraform for provisioning whatever and it requires an access key, and this access key was just sitting in a S3 bucket somewhere, and I guess the victim didn't know or didn't think it was an issue. And so the state file was extracted
Starting point is 00:18:03 by the attacker, and they found some key and they logged into whatever. And they're basically able to access a bunch of information they shouldn't have been able to see. And this turned into a data exfiltration scenario. And some of that data was in such a property. So maybe that wasn't useful. Maybe that wasn't their target.
Starting point is 00:18:18 I don't know. Maybe they sold it. It's hard to say. But we increasingly see these patterns that are indicative of very sophisticated individuals who understand cloud deeply and who are trying to do intentionally malicious things other than just, like, I pop a box, throw a miner on it, and I'm happy. This episode is brought to us in part by our friends at Calisti. Introducing Calisti. With integrated observability, Calisti provides a single pane of glass for accelerated root cause analysis and remediation. It can set, track, and ensure compliance with service level objectives. Calisti provides
Starting point is 00:18:51 secure application connectivity and management from data center to cloud, making it the perfect solution for businesses adopting cloud native microservice based architectures. If you're running Apache Kafka, Calisti offers a turnkey solution with automated operations, seamless integrated security, high availability, disaster recovery, and observability. So you can easily standardize and simplify microservice security, observability, and traffic management. Simplify your cloud-native operations with Calisti. Learn more about Calisti at calisti.app. I keep thinking of ransomware as being a corporate IT side of the house problem, and then I started seeing reports of ransomware attacks targeting data in S3. And initially I thought, okay, this sounds like exactly a story. People would talk about something that isn't really happening in order to sell their services to guard against it.
Starting point is 00:19:50 And then AWS did a blog post saying, we have seen this and here is what we have learned. It's, oh, okay. So it is in fact real, but it's still taking me a bit of time to adapt to the new reality. I think part of this is also because back when I was hands-on keyboard, I was unlucky. And as a result, I was kept from taking my aura near anything expensive or long term like a database. And instead, it's like the stateless web servers. I can destroy those and we'll laugh and laugh about it. It'll be fine. But it's not going to destroy the company in the same way. But yeah, there are a lot of important assets in cloud that if you don't have those assets, you will no longer have a company. It's funny you say that because I became a theoretical physicist instead of an experimental physicist, because when I walked into the room, all the
Starting point is 00:20:32 equipment would stop functioning. Oh, I like that quite a bit. It's one of those ideas of, your aura just winds up causing problems. You are under no circumstances to be within 200 feet of the SAN. Is that clear? Yeah, same type of approach. One thing that I particularly liked that showed up in the report that has honestly been near and dear to my heart is when you talk about mitigations around compromised credentials at one point,
Starting point is 00:20:56 when GitHub winds up having an AWS credential, AWS has scanners and a service that will catch that and apply a quarantine policy to those IAM credentials. The problem is that that policy goes nowhere near far enough at all. I wound up having a fun thought experiment a while back, not necessarily focusing on attacking the cloud so much as it was a denial of wallet attack. With a quarantined key, how much money can I cost? And I had to give up around the $26 billion mark. And okay, that project can't ever see the light of day
Starting point is 00:21:30 because it'll just cause grief for people. The problem is, is that the mitigations around trying to list the bad things and enumerate them mean that you're forever trying to enumerate something that is innumerable in and of itself. It feels like having a hard policy of once this is compromised, it's not good for anything would be the right answer. But people argue with me on that. I don't think I would argue with you on that. I do think
Starting point is 00:21:57 there are moments here that, again, I have to have sympathy for the folks who are actually trying to be administrators in the cloud. Oh, God, it's hard. I mean, a lot of the things we choose to do as cloud users and cloud admins are things that are very hard to check for security goodness, if you will, right? Like the security quality of the naming convention of your user accounts or something like that, right? One of the things we actually saw in this report, and it almost made me cry, like how visceral my reaction was to this thing,
Starting point is 00:22:29 and they were named according to a specific convention, right? So if you were like admin Corey and admin Anna, like that. If you're an admin, you've got an admin account, right? And then there was a bunch of rules that were written, like policies that would prevent
Starting point is 00:22:46 you from doing things to those accounts so that they couldn't be compromised. Root is my user account. What are you talking about? Yeah, totally. Yeah, they didn't. They did the thing. They did the good accounts. They didn't do "just use root, everybody."
Starting point is 00:22:57 So everyone had their own account. It was very neat. And all that happened is like one person barely screwed up the naming of their account, right? Instead of a lowercase admin, they used an uppercase admin. And so all the policy written for lowercase admin didn't apply to them. And so the bad guy was able to attach all kinds of policies and basically create a key for themselves to then go have a field day with this admin account that they just found laying around. Now, they did nothing wrong. This was like a very small mistake, but the attacker
Starting point is 00:23:26 knew what to do, right? The attacker went and enumerated all these accounts or whatever, like they see what's in the environment, they see the different one, and they go, oh, these suckers created a convention, and like this joker didn't follow it, and I've won, right? So they know to check with that stuff, but our guys have so much going on that they might forget, or they might just, you know, typo, like whatever. Who cares? Is this case sensitive? I don't know. Is it not case sensitive? Like some policies are, some policies aren't. Do you remember which ones are and which ones aren't? And so it's a little hopeless and painful as a, like a cloud defender to be faced with that. But that's sort of the reality. And right now we're in kind of like a
Starting point is 00:24:01 preventive security is the way to save yourself in cloud mode. And these things just like they don't come up on like the benchmarks and like the configuration checks and all this other stuff that's just going, you know, canned. Did you put MFA on your user account? Like, yeah, they did, but like they gave it a wrong name and now it's a bad name. So it's a little bleak. There's too much data. Filtering it becomes nightmarish. I mean, I have what I think of as the Dependabot problem, where every week I get this giant list of Dependabot freaking out about every repository I have on GitHub and every dependency thereof. And some of this stuff hasn't been deployed in years, and I don't care. Other stuff is, okay, I can see how that Markdown parser
Starting point is 00:24:43 could have malicious input passed to it, but it's for an internal project that only ever has very defined things allowed to talk to it, so it doesn't actually matter to me. And then at some point, it's like, you expect to read like three quarters of the way down the list of a thousand things, like, oh, and by the way, the basement's on fire. And then just have it keep going on. Filtering the signal from noise is such a problem that it feels like people only discover the warning signs after they're doing forensics when something has already happened, rather than when it's early enough to be able to fix things. How do you get around that problem? It's brutal. I mean, I'm going to give you, like,
Starting point is 00:25:20 my cute vendor answer of: it's easy, just do what we said. But I think, in all honesty, you do need to have some sort of risk prioritization. I'm not going to say I know the answer to what your algorithm has to be, but our approach of like, oh, let's just look at the CVSS score of vulnerabilities. Oh, look, 600,000 criticals. You know, you have to be able to filter past that too. Like, is this being used by the application? Like, has this thing recently been accessed? Like, does this user have permissions? Have they used those permissions? Like these kinds of questions that we know to ask, but you really have to kind of like
Starting point is 00:25:53 force the security team, if you will, or the DevOps team or whatever team you have to actually, instead of looking at the list and crying, being like, how can we pare this list down? Like anything at all, just anything at all. And do that iteratively. Right. And then on the other side, I mean, it's so, I don't know, defense in depth, like, right. I know that's, I'm not supposed to say that because it's like not cool anymore, but it's still true in cloud. Like you have to assume that all these controls will fail. And so you have to come up with some- People will fail, processes
Starting point is 00:26:20 will fail, controls will fail. And great. How do you make sure that one of those things failing isn't winner take all? Yeah. And so you make sure that one of those things failing isn't winner-take-all? Yeah, so you need some detection mechanism to see when something's failed and then you have to have a resilience plan because if you can detect that it's failed, but you can't do anything about it, I mean, big deal. Good job. That was helpful.
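Anna's triage questions (is it severe, is the vulnerable code actually in use, can an attacker reach it, can you even fix it) translate fairly directly into a first-pass filter. A toy sketch follows, where the Finding fields are assumptions standing in for whatever your scanner actually exports.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    package: str
    severity: str        # "critical", "high", "medium", "low"
    in_use: bool         # is the vulnerable code actually loaded at runtime?
    exposed: bool        # is the workload reachable from the internet?
    fix_available: bool  # is there a patched version to move to?


def triage(findings: list[Finding]) -> list[Finding]:
    """Keep only findings that are both running and fixable, then rank them:
    internet-exposed first, then by severity."""
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    actionable = [f for f in findings if f.in_use and f.fix_available]
    return sorted(actionable, key=lambda f: (not f.exposed, rank.get(f.severity, 4)))


if __name__ == "__main__":
    sample = [
        Finding("log4j-core", "critical", in_use=True, exposed=True, fix_available=True),
        Finding("markdown-parser", "high", in_use=False, exposed=False, fix_available=True),
    ]
    for finding in triage(sample):
        print(finding.package, finding.severity)
```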
Starting point is 00:26:38 And response. And response. Actually, mostly response. Otherwise, it's, hey, guess what? You're not going to believe this, but... And it goes downhill from there rapidly. You're just like, how shall we write the news headline for you? I have to ask, given that you have just completed this report and are absolutely in a place now where you have a sort of bird's eye view on the industry at just the right time. Over the past year, we've seen significant macro changes affect an awful lot of different areas.
Starting point is 00:27:09 The hiring markets, the VC funding markets, the stock markets. How has the, I guess, the threat space evolved, if at all, during that same time frame? I'm guessing the bad guys are paying more than the good guys. Well, there is part of that. And I have to imagine also crypto miners are less popular since sanity seems to have returned to an awful lot of people's perspective on money. I don't know if they are.
Starting point is 00:27:33 Because even fractions of cents are still cents once you add up enough of them. So I don't think they're going to stop mining. It remains perfectly economical to mine Bitcoin in the cloud as long as you use someone else's account to do it. Exactly. Someone else's money is the best kind of money. That's the VC motto and then some. Right. I think it's tough, right? I don't want to be cliche and say like, oh, automate more stuff. I do think that if you're in the security space
Starting point is 00:27:59 on the blue team and you're like afraid of losing your job, you probably shouldn't be afraid if you do your job at all, because there's a huge lack of talent and that pool's not growing quick enough. You might be out of work for dozens of minutes. Yeah, maybe even an hour if you spend that hour not emailing people asking for work. So yeah, I mean, blah, blah,
Starting point is 00:28:20 skill up in cloud, automate, et cetera. I think what I said earlier is actually the more important piece, right? We have all these really talented people sitting behind these dashboards, just trying to do the right thing. And we're not giving them good data, right? We're giving them too much data and it's not good quality data. So whatever team you're on or whatever your business is, like you have to try to pare down that list of impossible tasks for all of your cloud adjacent IT teams to a list of things
Starting point is 00:28:46 that are actually going to reduce risk to your business. And I know that's really hard to do because you're asking now folks who are very technical to communicate with folks who are very non-technical to figure out how to like save the business money and keep the business running. And we've never been good at this, but there's no time like the present to actually get good at it. So what is it? The best time to plant a tree was 20 years ago. The second best time is now. Same sort of approach. I think that I'm seeing less of the obnoxious whining that I saw for years about how there's a complete shortage of security professionals out there. Okay, have you considered taking promising people and training them to do cybersecurity? No, that will take six months to get them productive,
Starting point is 00:29:29 and they sit there for two years with the job req open. It's, hmm. Now, I'm not a professor here, but I also sort of feel like there might be a solution that benefits everyone. At least that rhetoric seems to have tamped down. I think you're probably right. There's a lot of awesome training out there, too. So there's like folks giving stuff away for free that are super resources. So I think we are doing a good job of training up security folks
Starting point is 00:29:51 and everybody wants to be in security because it's so cool. But yeah, I think the data problem is this decade's struggle, more so than any other decade's. I really want to thank you for taking the time to speak with me. If people want to learn more,
Starting point is 00:30:07 where can they go to get their own copy of the report? It's been an absolute pleasure, Corey, and thanks as always for having us. If you would like to check out the report, which you absolutely should, you can find it ungated at www.sysdig.com slash 2023threatreport. You had me at ungated. Thank you so much for taking the time today. It's appreciated. Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig. This promoted guest episode has been brought to us by our friends at Sysdig, and I'm cloud economist Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your
Starting point is 00:30:41 podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that no doubt will compile into a malicious binary that I can grab off of Docker Hub. If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duck Bill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duck Bill Group works for you, not AWS. We tailor recommendations to your business and we get to the point.
Starting point is 00:31:21 Visit duckbillgroup.com to get started.
