CyberWire Daily - Cloud Architect vs Detection Engineer: Mutual benefit. [CyberWire-X]

Episode Date: April 21, 2024

In this episode of CyberWire-X, N2K CyberWire's podcast host Dave Bittner is joined by Brian Davis, Principal Software Engineer, and Thomas Gardner, Senior Detection Engineer, both from Red Canary. They engage in a cloud architect vs. detection engineer discussion, illustrating how each one's work benefits the other's and how the two roles work together. Red Canary is our CyberWire-X episode sponsor. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to CyberWire-X, a series of specials where we highlight important topics affecting security professionals around the world. I'm Dave Bittner. In today's program, we delve into the dynamic and increasingly critical fields of cloud architecture and cybersecurity detection. Our focus today bridges the nuanced roles of cloud architects and detection engineers, two vital cogs in the machinery of modern digital infrastructure and security. We're joined by Brian Davis, principal software engineer
Starting point is 00:00:52 with a wealth of experience in cloud architecture, and Thomas Gardner, a senior detection engineer known for his expertise in identifying and mitigating cyber threats. Brian and Thomas are both from Red Canary, our show sponsor. Together, they'll shed light on the symbiotic relationship between their roles.
Starting point is 00:01:11 We'll dive into how detection engineers distinguish normal administrative activity from potential intrusions and what behaviors and patterns they vigilantly monitor in customer environments. Bringing Brian and Thomas together offers a unique perspective on how these roles interact, challenge, and ultimately support each other's objectives in the digital world.
Starting point is 00:01:32 Stay with us. And now a word from our sponsor, Red Canary. Red Canary stops cyber threats no one else does, so organizations can fearlessly pursue their missions. They do it by delivering managed detection and response across enterprise endpoints, cloud workloads, network, identities, and SaaS apps. As a security ally, they define MDR in their own terms with unlimited 24-7 support, deep threat expertise, hands-on remediation, and by doing what's right for customers and partners.
Starting point is 00:02:14 We thank Red Canary for sponsoring our show. So today we are talking about kind of contrasting this notion of cloud architects versus detection engineers. And we want to start off with some definitions here. Why don't we go through these one by one? Can we start off with a cloud architect? For folks who aren't familiar with that, how do you describe it? Oh, that's a fantastic question. And I always struggle to answer that actual question. So in my mind, a cloud architect is someone that knows how to use the tools of the cloud, whatever cloud platform is your favorite, to build the applications, to build the things that you want to build.
Starting point is 00:03:06 And so what I do is I work a lot with the other engineers that we have on our team to help them to build the system in such a way that it will scale well as we grow, in such a way that it's resilient, and kind of knowing the landscape of what the different tools are that are in our toolbox. And so my focus is looking across scalability, looking across resiliency, and making sure that what we're building can withstand all of that. And the cloud part of that is just to use those cloud-based tools to enable those features. So in your estimation, I mean, what's the background that goes into somebody being a successful cloud architect? That's another great question.
Starting point is 00:03:48 I think, at least for me, a lot of it is I've built a lot of stuff over a lot of time. I built them without using the cloud, so I know the ways to do it in the on-prem context. And I've also built a lot of these things within the cloud. And so I think a lot of it is battle scars and lessons learned from either doing it the wrong way or doing it a bad way to know that there are better ways to do it. And so I think a lot of it has to do with learning, again, the tools that are available within the cloud platform. So understanding the tools quite a bit, but also a lot of experience in building previous systems and knowing ways to do it and ways not to do it. Well, and Thomas, in this corner, we have a detection engineer. Let's do
Starting point is 00:04:31 the same thing with that job title. How do you describe that to someone who might not be familiar with it? So, yeah, as a detection engineer, I'm really responsible for researching attacker behavior, breaking it down into manageable pieces, and then communicating it to people on my own team, people on another team, customers. At Red Canary, we built our own detection engines, built a few detection engines, in fact. There's many ways to be a detection engineer. A lot of companies will use their own SIEMs or build on top of custom rules in their EDRs to do it. I think the core of detection engineering is really understanding attacker behavior and breaking it down into manageable pieces that can then be essentially detected
Starting point is 00:05:20 later on. There's some overlap with like threat hunting. It's pretty common to take threat hunts as outputs and turn them into automated detection rules. There's some overlap with like incident response. Once you have an incident and you've understood kind of what happens, how an attacker got in, what behavior they went into afterward, then you really want to make sure that doesn't happen again. And so you might build automatic detection rules after that. And detection engineering is really
Starting point is 00:05:49 focused on that sort of taking output from these other sort of disciplines in cybersecurity and trying to scale it and ensure that bad things don't happen again, or you get ahead of adversaries before they get into your network. The relationship between these two positions, you've got your cloud architect, you've got your detection engineer, is this by nature an adversarial relationship? It's funny you ask that. We were actually talking about that before we started talking with you. No, I don't think it's adversarial at all. I think what we can do together is understand how each of us do our job. And that's really critical, right? Because
Starting point is 00:06:31 Thomas and detection engineers are out there looking for threats in the cloud landscape, in the cyber landscape. And some of the actions that folks on the engineering team, such as cloud architects and software engineers are doing, can look like threats. And so what you need to do is have a regular conversation to understand, oh, this is normal behavior. This isn't something that an adversary is necessarily doing. They might do something that looks like that. But that conversation enables us both to understand kind of each other's space a little bit more effectively. I don't know, Thomas, if you feel the same way.
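As a rough illustration of the shared knowledge Brian is describing, a detection pipeline can encode "this is normal administrative activity" before anything reaches a human. The sketch below is hypothetical; the identities, event fields, and dispositions are invented for illustration and are not Red Canary's actual detection logic.

```python
# Hypothetical sketch only: the identities and event fields below are invented
# for illustration and do not reflect Red Canary's detection engine.

from dataclasses import dataclass

# Identities the engineering team has told the detection team about.
KNOWN_ENGINEERING_PRINCIPALS = {"cloud-architect@example.com", "infra-automation-role"}

@dataclass
class CloudEvent:
    principal: str  # who performed the action
    action: str     # e.g. "UpdateBucketPolicy"
    resource: str   # e.g. "storage bucket 'build-artifacts'"

def triage(event: CloudEvent) -> str:
    """Return a disposition for a single suspicious-looking cloud event."""
    if event.principal in KNOWN_ENGINEERING_PRINCIPALS:
        # Looks like normal administrative activity: keep a record, don't page anyone.
        return "suppress"
    # An unrecognized actor doing something sensitive: route it to a detection engineer.
    return "escalate"

if __name__ == "__main__":
    print(triage(CloudEvent("infra-automation-role", "UpdateBucketPolicy", "build-artifacts")))   # suppress
    print(triage(CloudEvent("contractor-temp-account", "UpdateBucketPolicy", "build-artifacts"))) # escalate
```

In practice the suppressed events would still be logged, so the assumption about who counts as "known" can itself be revisited when the two teams talk.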
Starting point is 00:07:11 Absolutely, I do. I think one of the differences between detection engineering and just sort of rule creation is being able to put actions into a wider context. It's really important as a detection engineer for me to understand sort of the full attacker lifecycle of how they break into things, how they persist in environments, how they escalate privileges and sort of what that chain of events looks like. It's very rare to see cloud architects do exactly all of that in exactly that order. Not saying it doesn't happen, but understanding
Starting point is 00:07:45 how Brian does his job, why he does certain things. I think the example that I like to give is interactively logging into a Kubernetes pod and then running recon-looking commands. Turns out cloud architects
Starting point is 00:08:01 love doing that. We love doing it. It's sometimes a necessity. But there's a good reason for why they would do that. Typically, like troubleshooting during an incident or trying to set up some sort of finicky application or something. Like having a good relationship with our cloud architects really helps us put that sort of stuff into context. And like I can go up to Brian and
Starting point is 00:08:25 ask him, you know, hey, we saw you do this. Why'd you do this? You know, what were the things you were after? And then we can go back and compare it to known adversary behavior that looks similar and just try and identify the differences so that we can put our own detections into better context and really improve the product we give. How much of this is just kind of keeping in regular touch with each other to give each other a heads up and say, hey, listen, you know, we're going to be doing such and such today. So if you see something, that's probably what it is. But having those lines of communication open. I think the more you have those lines of communication, the less chance you're going to have a false alarm in that respect. But with as many engineers as any organization has, it's really easy to miss that communication and send someone off on a wild goose chase
Starting point is 00:09:23 because you forgot to say, oh, hey, by the way, I'm going to go open up permissions on this bucket because I'm testing something out. It's really easy to forget that. And anything having to do with a human notifying another human, it's going to get missed. And so I think where you can have that communication,
Starting point is 00:09:40 it's critical, but it's not always there, unfortunately. It's actually nice that it's not always there, too. Being able to sort of test some assumptions that we have about our own detections and doing so without sort of knowing ahead of time what cloud engineers are up to and having to work our way back sort of from our detection and put ourselves in our customers' shoes to really have to analyze our own work output
Starting point is 00:10:13 is a really helpful exercise for us to make sure that we are challenging our assumptions about what's truly attacker behavior and what's just sort of general cloud behavior. You know, there's a reason you can open buckets up to the entire internet. Like there's legitimate reasons to do that. It's not only a bad thing.
Starting point is 00:10:34 It's not often a bad thing. And so sometimes not having that heads up and being forced to challenge our own assumptions about that can be a really helpful exercise. Some accidental red teaming? Yeah. Great way to put it. Right. Opportunistic red teaming. Right, right.
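As a hedged sketch of what "challenging our assumptions" can look like once it is written into detector logic, the example below only escalates a bucket-made-public event when nothing in the surrounding context explains it. The event schema, tag names, and principal names are assumptions made up for illustration, not any real provider's or product's API.

```python
# Hypothetical sketch: the event schema, tag names, and principal names are
# invented for illustration, not taken from a real cloud provider or product.

def bucket_made_public_is_suspicious(event: dict) -> bool:
    """Decide whether a 'bucket opened to the internet' event is worth escalating."""
    if event.get("action") != "bucket_policy_made_public":
        return False

    tags = event.get("bucket_tags", {})
    # Context 1: the owning team tagged the bucket as intentionally public
    # (for example, static website assets).
    if tags.get("visibility") == "intentionally-public":
        return False

    # Context 2: the change arrived through the usual infrastructure-as-code
    # pipeline rather than an interactive session.
    if event.get("principal") == "terraform-ci-role":
        return False

    # No context explains the change, so surface it for a human to review.
    return True

if __name__ == "__main__":
    print(bucket_made_public_is_suspicious({
        "action": "bucket_policy_made_public",
        "principal": "some-engineer",
        "bucket_tags": {},
    }))  # True: nothing here says the bucket was meant to be public
```

The specific checks matter less than the shape of the rule: the alert fires only once the legitimate explanations have been ruled out.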
Starting point is 00:10:51 I'm curious, I mean, how do you strike that balance between needing to keep up with what I think is fair to say an ever-increasing cadence, right? I mean, nobody's going to claim that the attackers are slowing down, right? I think the opposite is true. But also, Thomas, from your point of view, you don't want to be the department that's always crying wolf, you know, or saying, you don't want to be pestering the cloud engineers, as you say, with false alarms. How do you strike that balance between the two? Oh, that is a big question.
Starting point is 00:11:25 That is a great question. We always strive for more specificity in areas like the cloud where kind of we're all learning new things about it. Even the cloud architects are learning new things about it. We tend to start pretty broad with some assumptions.
Starting point is 00:11:46 And as we learn things, we just, we constantly try and revisit those assumptions like I was saying in the previous answer. This is where putting sort of that behavior into context really comes in handy because if we can say, you know, a certain action happened, but you need these kind of three other things around it
Starting point is 00:12:07 for it to really be bad. And if we can translate that into our detector logic so that we quiet that idea down ahead of time without requiring a human to validate those things, we will tend to be faster, we'll tend to be able to communicate specific threats better, and we'll just generally be happier
Starting point is 00:12:29 because we're not constantly dealing with a bunch of manual labor trying to validate our own work. Well, I think to expand on that, I think to what Thomas said, the context is really key. You know, we've spent a lot of our time at Red Canary working on EDR,
Starting point is 00:12:43 which is endpoint focused. And in an endpoint, you're working on a single computer somewhere. And granted, there's lateral movement between machines and things of that nature. But at the end of the day, you're looking at processes that are executing on a single computer. And the context is what's going on on that computer. There's more to be gained there, but just looking at the activity on that computer
Starting point is 00:13:06 can give you a lot of insight into what's happening because there's certain patterns that adversaries will follow. When you step back to the cloud, you're almost never dealing with a single computer and you're probably not dealing with a single cloud service. And so now you can't go with one piece of information
Starting point is 00:13:24 because that one piece of information might be an engineer or a cloud architect or someone else with privileged access doing something that they're supposed to be doing. So you have to gain more context in order to say, well, is this a false alarm or is this something that I care about? And I know that's one of the things
Starting point is 00:13:40 that we've really worked hard at is trying to assemble more of that context for the detection engineering team so that they have all of the information to say, oh, well, they did A and then B and then C. That's not something that our engineering team usually does. That's probably an adversarial behavior. And so the context, I think, has been one of the biggest challenges that we've had of providing that insight so that we don't cry wolf all the time, so that we know what's really dangerous behavior versus normal behavior, because they can look really similar if you're looking through a small aperture.
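A hedged sketch of that "they did A and then B and then C" idea: correlate events per identity inside a time window and only flag the full ordered sequence. The event names, the sequence itself, and the window are invented for illustration and are not drawn from any real detection engine.

```python
# Hypothetical sketch only: the event names, the suspicious sequence, and the
# time window are invented for illustration, not taken from any real detection engine.

from collections import defaultdict
from typing import Iterable, Set

SUSPICIOUS_SEQUENCE = ["create_access_key", "attach_admin_policy", "enumerate_storage_buckets"]
WINDOW_SECONDS = 15 * 60  # the whole chain has to happen quickly to be interesting

def find_suspicious_principals(events: Iterable[dict]) -> Set[str]:
    """Flag any principal that performs the full A-then-B-then-C sequence within the window."""
    by_principal = defaultdict(list)
    for event in events:
        by_principal[event["principal"]].append(event)

    flagged = set()
    for principal, evts in by_principal.items():
        evts.sort(key=lambda e: e["timestamp"])
        idx, start = 0, None
        for event in evts:
            if idx > 0 and event["timestamp"] - start > WINDOW_SECONDS:
                idx, start = 0, None  # the chain went stale; start over
            if event["action"] == SUSPICIOUS_SEQUENCE[idx]:
                if idx == 0:
                    start = event["timestamp"]
                idx += 1
                if idx == len(SUSPICIOUS_SEQUENCE):
                    flagged.add(principal)
                    break
    return flagged

if __name__ == "__main__":
    events = [
        {"principal": "ci-role", "action": "create_access_key", "timestamp": 0},
        {"principal": "eve", "action": "create_access_key", "timestamp": 10},
        {"principal": "eve", "action": "attach_admin_policy", "timestamp": 70},
        {"principal": "eve", "action": "enumerate_storage_buckets", "timestamp": 130},
    ]
    print(find_suspicious_principals(events))  # {'eve'}: all three steps, in order, within the window
```

Any one of these actions on its own is routine cloud administration; it is the ordered combination from a single identity, close together in time, that falls outside normal behavior.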
Starting point is 00:14:33 I want to wrap up with you guys with this question, and I'm curious in an answer from each of you, from your individual perspectives: what's your recommendation to somebody who is starting down this journey of building this relationship, the cloud architect and detection engineer relationship, within an organization? Let me start with you, Brian, from the cloud architect side. Any tips or words of wisdom for how to get the most out of this relationship? That's a fantastic question. I think it starts with assuming good intent on all parties. And that's a good thing to go for in any relationship that you have. But knowing that everyone has a job to do, and there's also so much information and so much stuff to learn that not everyone has a full understanding of all the activities
Starting point is 00:15:15 that are going on. And so if anything comes off as confrontational, if anything comes off as accusatory or sounds that way, assume it's not and have the conversation and establish that relationship. Because if you start with good intent and you assume a good intent on the opposite party, you can find out that they have a difficult challenge, a difficult job to achieve as well. And you'll start to build more bridges that way. Thomas, how about your perspective? I think it's very easy as like a security practitioner to sort of say no
Starting point is 00:15:53 or sort of invalidate the actions of other people a lot and say like, you know, it's not the most secure way of doing things or it's not the recommended way of doing things. And trying to avoid that habit,
Starting point is 00:16:07 trying to basically view your coworkers' actions as valid, even if they maybe don't make sense to you, understanding their intent and treating them as a normal way of operating is the best place as a detection engineer to start. There are so many times where we get confused looking at certain behavior,
Starting point is 00:16:34 thinking, why would you do that? You know, this is what we know attackers do. This is how you sort of, I don't know, misconfigure systems or something. And especially in the cloud, when cloud providers give all kinds of APIs and build them for legitimate reasons, I think it's really important to view the use of any of these
Starting point is 00:17:01 APIs or actions as legitimate, valid ways of operating. And so as a detection engineer, you sort of need to be able to separate those valid things that like a cloud architect is going to do, like logging into a Kubernetes pod interactively, opening a bucket publicly, creating some sort of access key
Starting point is 00:17:27 for a service account in the cloud or something. You need to view those as legitimate business operations and not just assume ill intent, essentially. Yeah, I mean, it's that classic, you know, practically a stereotype to not be the department of no. Exactly. And that wraps up our episode of CyberWire-X.
Starting point is 00:17:59 Our thanks to Brian Davis and Thomas Gardner from our show sponsor, Red Canary, for joining us. And thanks to you for listening. I'm Dave Bittner. We'll see you back here next time.
