CyberWire Daily - Complementary colors: teaming tactics in cybersecurity. [Research Saturday]
Episode Date: April 19, 2020
We often hear cybersecurity professionals talking about red teams, blue teams, and purple teams. In this episode of CyberWire-X, we investigate what those terms mean, how security teaming approaches have changed over time, and the value of teaming for organizations large and small. Join us for a lively conversation with our experts Austin Scott from Dragos and Caleb Barlow from CynergisTek in part one. In part two, we'll also hear from Dan DeCloss from PlexTrac, the sponsor of today's episode.
Transcript
You're listening to the CyberWire Network, powered by N2K.
We often hear cybersecurity professionals talking about red teams, blue teams, and purple teams.
In this episode of CyberWire-X, we explore what those terms mean,
how security teaming approaches have changed over time,
and the value of teaming for organizations large and small.
The first part of our show features a lively conversation with our experts,
Austin Scott from Dragos and Caleb Barlow from CynergisTek.
In part two, we'll hear from Dan DeCloss from PlexTrac, the sponsor of today's episode.
Stay with us.
And now a word from our sponsor, PlexTrac.
PlexTrac is the ultimate purple teaming platform,
guiding the healthy collaboration
of your red and blue teams through
a single web-based interface.
PlexTrac does this by first elevating
red teams, eliminating the
struggle of reporting, and allowing the
team to focus on what's important,
identifying security issues.
Red teams are provided with an easy-to-use
platform that allows reports to be
created and then exported with a click of a button, saving the team valuable time.
PlexTrac also powers up blue teams by providing them with a platform to consolidate findings and then remediate them in an efficient and timely manner.
You can visit their website at plextrac.com slash demo to learn more.
That's P-L-E-X-T-R-A-C dot com slash demo.
And we thank PlexTrac for sponsoring our show.
Red teaming, I believe it comes from military jargon, where you are doing adversary simulation.
That's Austin Scott. He's a principal penetration tester at Dragos.
And of course, within cybersecurity, the definition of that changes depending on who you ask.
And I mean, I don't want to start a religious war or anything like that here, but I can talk about my definition of red teaming.
So a red team, of course, is an adversary simulation where you're trying to identify risk in the network.
You're looking at a network from an adversary's perspective. So at the end of the day, you're trying to identify if an adversary, when they're faced with your
network, if it's an insurmountable, unclimbable marble wall, or if they can cut through it like
a hot knife through butter. So this can really help you identify ways you can protect, detect, and respond to cyber threats through this sort of simulated adversary.
Can the red team get around that two-factor authentication in some way?
That's Caleb Barlow, CEO at CynergisTek.
Or can they validate that it's working, or can they find a place where maybe a system isn't in place that should be? So, you know, a lot of the resourcing on this is going to be directly derived from either
controls that you want to validate and test, or it can be looking at areas where you know you have
a vulnerability. Because I'll tell you, in most security assessments, there are large numbers of vulnerabilities or known issues that get identified.
And nobody has the budget to fix all of them.
But what you can do is you can say, well, what are the odds of this being exploited?
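Caleb's point about weighing the odds of exploitation amounts to a simple risk-scoring sort: give each known issue a rough likelihood and impact, then spend the limited budget at the top of the list. Here is a minimal sketch in Python with entirely made-up findings and weights, just to make the idea concrete; the scoring model is an illustration, not a method either guest prescribes.

```python
# Rank known issues by a rough likelihood-of-exploit times impact score, so a
# limited remediation budget goes to the findings most likely to be used
# against you. Findings and weights below are made up for illustration.
findings = [
    {"title": "Legacy VPN appliance without MFA", "likelihood": 0.8, "impact": 0.9},
    {"title": "Verbose error pages on intranet app", "likelihood": 0.6, "impact": 0.2},
    {"title": "Unpatched internal wiki", "likelihood": 0.3, "impact": 0.5},
]

for f in sorted(findings, key=lambda item: item["likelihood"] * item["impact"], reverse=True):
    print(f"{f['likelihood'] * f['impact']:.2f}  {f['title']}")
```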
Now, how about blue teaming? What's involved there?
Blue teaming is on the opposite side of the fence from red teaming.
That's Austin Scott from Dragos.
Blue teams are the network defenders.
They're the guys who are trying to protect and detect and respond to the adversary.
With a red team exercise, there may be a blue team present,
or maybe it would just be the normal SOC operations team.
They may be aware or they may be unaware that the red team will be doing an exercise or is working to compromise their network.
So red plus blue equals purple.
How do you define purple teaming for folks?
So a purple team is much more of a collaborative approach. It's where the red team and blue team are working together
to identify gaps in cyber defenses. The difference with a purple team is you're informing yourself
of the vulnerabilities and the knowledge on the inside. That's Caleb Barlow from CynergisTek.
So you're giving that team the opportunity to say, hey, we know we might have
some problems in this area. Can you expose them? And that might be to prove positive that a defense
is working or prove negative that it's not working. So in a purple team exercise, you might
have the red team and the blue team all in the same boardroom. And the red team
will be going through their exercise in a very transparent way. It'll be much more of a white
box approach. The red team is sharing information with the blue team and the blue team is sharing
information with the red team. So as they're working through a network, a red team member
might tell the blue team, we're going to try this particular attack.
We're going to run this script and tell me what you see. And the blue team will look,
they'll check their security operations center and try to understand what's going on,
see if they can detect and respond to that. And in my experience, in these purple team exercises,
the blue team can even tune their detections in real time. They may get us to fire and attack multiple times
so they can adjust their detection mechanisms
to better defend their network.
So it's a much more collaborative approach
to penetration testing.
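The fire-the-technique, check-the-SOC, tune, and re-fire loop Austin describes can be captured in a very small exercise harness. The sketch below is illustrative only; run_attack_step and query_soc_detections are hypothetical placeholders for whatever attack tooling and SIEM queries the teams actually use.

```python
# Minimal sketch of a purple-team feedback loop: the red team fires a
# technique, the blue team checks the SOC, and detections are tuned and the
# technique re-fired until it is caught. Helper functions are hypothetical
# placeholders for real tooling.
import time
from dataclasses import dataclass, field

@dataclass
class ExerciseStep:
    technique: str                 # e.g. a MITRE ATT&CK technique ID
    attempts: int = 0
    detected: bool = False
    notes: list = field(default_factory=list)

def run_attack_step(technique: str) -> None:
    """Placeholder for the red team executing a scripted technique."""
    print(f"[red]  executing {technique}")

def query_soc_detections(technique: str) -> bool:
    """Placeholder for the blue team checking the SOC/SIEM for an alert."""
    answer = input(f"[blue] did the SOC alert on {technique}? (y/n) ")
    return answer.strip().lower() == "y"

def purple_exercise(technique: str, max_rounds: int = 3) -> ExerciseStep:
    step = ExerciseStep(technique)
    for _ in range(max_rounds):
        run_attack_step(technique)
        step.attempts += 1
        time.sleep(1)  # give telemetry a moment to land
        if query_soc_detections(technique):
            step.detected = True
            break
        # Missed: the blue team tunes its rules, then asks for another run.
        step.notes.append(f"missed on attempt {step.attempts}; detections tuned")
    return step

if __name__ == "__main__":
    print(purple_exercise("T1059.001"))  # PowerShell execution, as an example
```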
And I suppose it sort of takes away
even that friendly adversarial relationship that I imagine could form between
the two teams. Yes, that's very true. When you take an adversarial approach, you're inherently
kind of working against each other. So the opportunities to share knowledge and to grow
kind of as a team aren't really there. When you're working together, collaborating in the purple team,
you know, the blue team gets to learn a lot about the red team tactics
and the red team learns a lot from the blue team tactics as well.
So there's a lot more opportunity for cross-sharing
and team building as well.
When you're doing these sorts of exercises,
is there an opportunity to gain some insights
on where you should be putting your resources?
In other words, hey, it seems like our defenses are good or maybe not,
or our defenses are really good and we need to focus more on trying to chip away at them.
Absolutely.
Any one of these red team or purple team engagements or exercises
will identify those gaps in the network.
And this is something that you can't really necessarily get from a vulnerability assessment or a Nessus scan or something like that,
because tools like that or approaches like that that are just identifying vulnerabilities don't really take into consideration the human factor
or misconfigurations in the network or weaknesses
in network architecture. They don't take the bigger picture into account. Whereas when you're
doing these exercises and trying to move through a network as an adversary would, you're really
taking all things into account and trying to leverage any weaknesses you can find, including
the people, which are usually the weakest link in any security paradigm.
What sort of recommendations do you have for organizations that are looking to put together their red teams, their blue teams, and then foster this communication to make purple itself?
Well, I do recommend taking a purple team approach where possible,
especially if you're just kind of getting started.
I think the biggest opportunities to grow and learn and to expand knowledge come from purple team exercises,
whereas a pure adversarial assessment like a red team,
you don't really have many of those opportunities.
You'll definitely get a report at the end of the day,
but you don't really see what the attacker
was necessarily doing during the entire engagement.
And you won't have opportunities to tune or learn as you go.
And also, I've found that the red team
won't find as many things
because usually the blue team or the local resources
know where all the skeletons are buried
and they know where to look for a lot of these vulnerabilities. So if you can enlist the blue team into your red team,
you get a lot more value. Often the blue team is looking for resources. Often they want you to find
these things. They want to identify the risks that they already are well aware of so that they can
justify the expense of solving these problems. And at the end of the day, we're all on the same team.
That's right, yes.
When you're coming at these sorts of exercises, how much of it is sort of episodic, where
you spin up, you go through this exercise, then you're done, and you evaluate what has
happened versus an ongoing effort that's sort of continuous?
Where I've seen the more episodic approach is as a consultant, we usually have a plan,
a daily plan of where we want to be or how we want to progress through the network.
And we can move a lot faster with this approach, with a purple team or a more white box approach.
You know, if we don't get to where we need to be on day one,
then we'll move to the next episode.
We'll move to the next network segment
and kind of see what the network looks like from that perspective.
So this way we can, even if we can't advance,
even if we run into issues where we're unable to move through the network
because the security posture is really strong,
we can still jump around the network and sort of see,
shine our flashlight into some of the darkest areas of these networks and see what they might look like from an adversary perspective.
But for a continuous program, I've seen organizations that have dedicated red teams that will continuously run operations and run programs,
and usually keeping the SOC in the loop as well, letting them know that these
things are ongoing, and to look for activities in various networks. So certainly it's something that
an organization could benefit from having that run continuously. There's always been this sort of
inherent challenge between the IT folks and cybersecurity folks and the operational technology
folks and the engineering folks. Any opportunities to build bridges between them and to get everyone
in a room to work together is a real win for industrial cybersecurity. So we find these
exercises are a great opportunity to get all these folks together that don't normally work
together, that don't necessarily trust each other to address these issues and identify these risks.
At the end of the day, what you want to do is you want to incent people the way an adversary would be incented. You want a little bit of a prize if they can pull it off.
Caleb Barlow from CynergisTek.
And, you know, that prize might be driven by a little bit of bravado or, you know,
maybe it's a box of donuts. I don't know. But either way, I do think there's an opportunity
there. And, you know, there's a good example of, many of your listeners may have heard of this
concept used in development called the chaos monkey. I think this was
originally pioneered by Netflix.
I know they use it pretty heavily.
And the idea was a development team is going to infuse chaos into their operational environment every so often and see if the tools, the hardware, and the operational team can deal with that chaos.
So a certain amount of downtime, if you will, in production was going to be driven by events you caused yourself. Maybe you're going to take down a data center or take down a server
and make sure your resiliency efforts worked. You know, one of the things you see security teams
doing now is using this same concept of a security monkey internally to effectively cause their own
security issues and force their teams in production to respond to them.
And of course, the team that's responding
has no idea if it's real or if it's chaos monkey.
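A security monkey of the kind Caleb describes does not have to be elaborate: at its core it injects a benign, pre-approved test action at random and records what it did, so someone can later verify whether the SOC noticed. The sketch below is a hypothetical illustration, not Netflix's Chaos Monkey or any real tool; the test actions and file paths are assumptions.

```python
# Hypothetical "security monkey" sketch: inject one benign, pre-approved test
# action at random and log what was done, so the exercise owner can later
# check whether the SOC detected and responded to it.
import random
import pathlib
from datetime import datetime, timezone

TEST_ACTIONS = {
    # Drop a canary file that data-loss or file-integrity monitoring should flag.
    "drop_canary_file": lambda: pathlib.Path("/tmp/canary_finance.xlsx").touch(),
    # Write the standard EICAR antivirus test string, which AV should alert on.
    "write_eicar_file": lambda: pathlib.Path("/tmp/eicar_test.txt").write_text(
        "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
    ),
}

def inject_random_event(log_path: str = "security_monkey.log") -> str:
    """Run one random test action and record it for later comparison
    against what the SOC actually reported."""
    name, action = random.choice(list(TEST_ACTIONS.items()))
    action()
    stamp = datetime.now(timezone.utc).isoformat()
    with open(log_path, "a") as log:
        log.write(f"{stamp} injected={name}\n")
    return name

if __name__ == "__main__":
    print("injected:", inject_random_event())
```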
And it's an example of a purple teaming type activity
that I think can be very effective
because it gives you the ability
to constantly exercise those SOC teams to know, hey, can we detect this certain vulnerability or this certain type of malware?
Do we get the right response to it?
But more than what you'd have on paper, how fast was the response?
Did it get escalated in the right way?
Did it get dispositioned properly?
Was it documented properly?
All those things you can ask to really make sure you've got that muscle memory in place. Right. So it can provide justification for spending or focusing
more resources on that issue. Exactly. So my point, Dave, is it's not just about the people.
It's about what's your overall spend. Now, I think the other thing we have to keep in mind is
everybody, based on the last few weeks, should pull their security assessments and say, okay, what just changed as the world changed with coronavirus? There are obviously new risks, like phishing emails around coronavirus. But also, I now have my entire workforce
outside the firewall and working from home
on who knows what device.
How does that change this assessment?
Are there new risks here that weren't here before
that I need to think about in a new way?
Maybe I should purple team that?
Or worst case, if I can't fix the issue, I should at least go build
a runbook so that if it is exercised, I'm prepared to respond to it. I think purple teaming is the
next thing for people to start to really look at. Because if you think about it, we've all stepped
forward in front of our CEO or board and said, hey, I need to buy this really expensive security product. Well, okay, how are we going to measure
the return on that? Well, I don't know.
If it works, nothing will happen. I mean, that's a
really hard sell. One of the other
advantages of purple teaming is now you're able to go forward and go,
hey, boss, we have this
vulnerability. We kind of pointed this to our purple team who went and tried to exercise it.
It took them 15 minutes and they were able to get into everything. I think we really need to go buy
this tool that's going to stop that. That's now a totally different value proposition.
Next up, we hear from Dan DeCloss, founder and CEO at PlexTrac. They're the sponsor of this show.
So the blue team being those folks that are responsible for defending the organization,
protecting the crown jewels of an organization,
and responsible for deploying the entire security program within that organization. That's what we would say is traditionally considered the blue team.
The red team, in the traditional sense, is very focused on penetration testing and even
deeper diving into the techniques and tactics that an attacker may have.
But from our perspective, we abstract that out to be anything that's proactive, anything
that is somehow related to the proactive assessment of security controls.
So not just the penetration testing activities, but
anybody that's conducting a risk assessment or a questionnaire or a gap analysis. Proactive
assessment means anybody that's doing something to identify a security hole in the infrastructure.
Now, are you on board with this notion that purple teams aren't necessarily
teams, that it's more of a concept rather than actual people assigned to things?
Yes, absolutely.
I mean, we actually call it the purple paradigm, right?
Or that it's a paradigm, you know,
that purple teaming itself is not a specific job function,
but it's a role that everybody plays.
So it's really meant to be that mindset
that we're all on the same team
and we're all collaborating
to improve the security posture and be able to collaborate more quickly and effectively and truly
identify the major risks that should be focused on on a daily basis. So it's not a specific role,
it's not a specific job function, but something that is more everybody's a part of the purple team.
You know, I can imagine that your red teams and your blue teams might have a healthy amount of competitiveness between them. But I suppose part of what you're after here is that you don't want
that to turn adversarial. You need to put the tools in front of them so they can remain collaborative.
Yeah, exactly. I mean, at the end of the day, we're all on the same team
trying to achieve the same mission,
and that's to protect ourselves
against the adversary,
the true adversary.
So it's not meant,
you know, purple teaming itself
is not meant to be this pitting
the red team against the blue team
and no one sharing the techniques
that they're using.
And then at the end of the engagement,
you know, they plop down
a 300-page report
that shows all the weaknesses
that you have. And now, you know, good luck going to try and fix that, right?
That's, you know, potentially how some people kind of used to do it.
And so we want to avoid that by saying, hey, you know, we're on the same team.
We're all trying to collaborate.
And at the end of the day, we want to identify compromise as quickly as possible and as early in that attack life cycle as possible.
So making sure that everybody understands that this is the true mission and this is how we make progress
instead of having to just get more tools that hopefully automate.
I mean, obviously, we're always kind of focused on automation,
but there's a lot of manual effort and a lot of efficiencies that get lost.
And so we really want to focus on having those teams collaborate together in order to streamline the process and make sure that things are getting fixed quicker and that time is spent on the actual cybersecurity work getting done.
Yeah, and it seems to me like by having it be dynamic in that way, that information is being shared back and forth in real time. I mean, I was a penetration tester and hated writing reports, and you get to the end of an engagement and it takes so long just to get all that information correlated and then put into like
a Word document where you're, you know, you're spending a lot of time dealing with different
formatting issues and all that jazz. And then you deliver it to a customer or, you know, even I was
on an internal team. So, you know, we're delivering it to the other departments and then you just
don't know what's going to happen to it, right?
A lot of times it gets put on a shelf or, you know, some of the things get extracted and put into a spreadsheet or some other kind of tracking system.
So you lose a lot of work and you lose a lot of visibility.
And so then from the blue team's perspective, you know, I built out a blue team and helped build up that capability for an organization.
And what are you supposed to do with a really big Word document that's a pen test report or some other kind of security assessment, whether it's a PCI audit or something like that?
So being able to have all that data in one spot and being able to quickly remediate those issues speeds up the process so much and really keeps the focus on here are the things that we know we have to work on.
And then also, you don't lose that data over time, right?
So you can start to identify trends like,
hey, we're consistently finding issues related to SQL injection or lateral movement. And how do we start to identify what areas we need to invest in
to improve our security posture and our maturity over time.
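The trend-spotting Dan mentions, such as noticing that SQL injection or lateral movement findings keep recurring, is a simple aggregation once findings live in one consolidated store. Here is a minimal sketch assuming a hypothetical CSV export with category and reported_at columns; the layout is an assumption, not PlexTrac's actual schema.

```python
# Count findings per category per quarter from a consolidated findings export,
# to surface recurring problem areas over time. The CSV layout (category,
# reported_at) is an assumption for illustration.
import csv
from collections import Counter
from datetime import datetime

def quarterly_trends(findings_csv: str) -> Counter:
    """Return a Counter keyed by (year-quarter, category)."""
    trends = Counter()
    with open(findings_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            reported = datetime.fromisoformat(row["reported_at"])
            quarter = f"{reported.year}-Q{(reported.month - 1) // 3 + 1}"
            trends[(quarter, row["category"])] += 1
    return trends

if __name__ == "__main__":
    for (quarter, category), count in sorted(quarterly_trends("findings.csv").items()):
        print(f"{quarter:8} {category:20} {count}")
```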
It's interesting to me that with each of these teams, your red team, your blue team, or your purple team,
at some point you're going to have to do reporting.
And it strikes me that particularly when you're reporting to the board,
there could be a certain translation layer that has to take place there.
Yeah, so I mean, when I was a security director, or the equivalent of a chief security officer today,
or information security officer,
one of the challenges I had was being able to draw metrics from all the different tools
and being able to present a clear picture of the progress that we've made.
And you can categorize them based on business unit or industry, all those types of
things so that you can really slice and dice the data the way that you want to be able to show the
picture of like, hey, you know, maybe our business units, we want to have a little bit of a comparison
between here's who's trending in the right direction or here's who's not. And so that
helps tell the story to the board of where that investment is going and where you feel the investment needs to get made for the next quarter or the next year.
You can also break it down across different clients or different constituents. Say you're involved in mergers and acquisitions. You could do some security assessments of the companies that you're acquiring and have a good idea of their security posture before you flip
the switch to connect them or before you actually proceed with the merger. Same with on the insurance
side. Insurance companies can get a decent view of a variety of their clients or all of their
clients from a central place and be able to start benchmarking and comparing. What's your advice to an organization that may be at the beginning of their journey when it comes
to these things? Maybe not just red teaming and blue teaming, but this notion of purple teaming.
They want to do a better job fostering that sense of communication. How do they begin? What sort of
tips would you have for them? Yeah, good question. I think that definitely, you know, setting the mindset is important, right?
That we're all on the same team and we're all here to collaborate.
So even setting up some small exercises where you can say, hey, we're going to just test one small thing.
You know, I think some people start to get overwhelmed with that concept of like, we've got to test the entire security posture.
And my approach is to really take it in a much more iterative perspective so that, you know, it's kind of a,
it's a marathon when building out a security program and improving your security posture.
And so breaking that into little small, small chunks. You know, we love to reference the MITRE
ATT&CK framework because
it breaks down everything based on the ATT&CK lifecycle, which at the end of the day is
what we're really trying to do, you know, identify issues that crop up in each one of those
different tactics. And so if you start there, you can actually focus on like just one small
technique and say like, hey, you know, we're going to test this out and then we're going to work with you and collaborate closely on whether or not you identified that and what do you need to do to fix it.
So you kind of start to get in this quick feedback loop, quick iterative lifecycle for just testing small things and making that become the norm rather than breaking things into large assessments that take months at a time.
I mean, you're always going to have that aspect, especially when we're talking about like an
external assessment, but like within your enterprise and within your organization,
breaking these things down and just identifying who's going to do the testing piece, who's going
to do the remediation piece, how are you going to collaborate? And it doesn't have to be a specified
role for either of those, right? You could have people that don't even have as much experience in it, and it starts to
get them more exposure to the offensive techniques as well as the defensive.
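The one-small-technique-at-a-time loop maps naturally onto a tiny backlog keyed by MITRE ATT&CK technique IDs. The sketch below is a hypothetical illustration of that workflow rather than a PlexTrac feature; the technique IDs are real ATT&CK entries, everything else is made up.

```python
# A tiny backlog for iterative purple-team testing, keyed by MITRE ATT&CK
# technique ID. Status values and field names are illustrative only.
from dataclasses import dataclass, asdict
from enum import Enum

class Status(Enum):
    PLANNED = "planned"
    TESTED_DETECTED = "tested: detected"
    TESTED_MISSED = "tested: missed"
    REMEDIATED = "remediated"

@dataclass
class TechniqueTest:
    technique_id: str      # e.g. "T1566.001" (Spearphishing Attachment)
    owner_red: str         # who runs the test
    owner_blue: str        # who validates detection and fixes gaps
    status: Status = Status.PLANNED

backlog = [
    TechniqueTest("T1566.001", "red-contractor", "soc-analyst-1"),  # spearphishing attachment
    TechniqueTest("T1059.001", "red-contractor", "soc-analyst-2"),  # PowerShell execution
]

def next_up(items: list) -> TechniqueTest | None:
    """Pick the next planned technique so the team tests one small thing at a time."""
    return next((t for t in items if t.status is Status.PLANNED), None)

if __name__ == "__main__":
    todo = next_up(backlog)
    print("next technique to exercise:", asdict(todo) if todo else "backlog clear")
```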
I'm curious about this notion of what I would describe as kind of load balancing, by using
purple teaming to essentially connect your red team and your blue team to be able to
look at the resources that are being assigned to each of those groups and the results that are
coming out of them. Can you then take those results and use that to inform how you assign
resources to each of them? Oh, yeah. No, that's a good point.
I mean, yeah, you can definitely start to kind of get ideas for who's reporting more
of different issues and those kinds of things.
With people that are conducting red team assessments, they can immediately report those.
And this happens a lot, but it's hard to capture, right?
In terms of like, hey, we've identified something early in
the engagement that we want to let you guys know about. They let them know about it. They fix it.
Can you retest it even before the engagement window is completed? And that's actually what
we feel should be the norm, right, is that as soon as things are identified, you can collaborate.
You've captured when it was reported, so you still have all those metrics and you're not losing the data as to, say it's a contracted assessment, did they find anything or not?
I mean, that's going to be one of the big questions and what did they do to conduct
the assessment?
You can show that in real time.
So you're not only fixing the issues faster, but you're still capturing all the metrics
that you want from when it was reported,
how you guys collaborated, and actually showing that feedback is much more important because
that's the value that somebody is going to provide from both sides of the fence.
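The report-fix-retest-inside-the-window flow Dan describes comes down to keeping timestamps on each finding so the metrics survive the collaboration. Here is a minimal sketch with made-up field names, assuming nothing about any particular platform.

```python
# Sketch of a finding lifecycle that preserves timing metrics even when issues
# are fixed and retested before the engagement window closes. Field names are
# illustrative, not any product's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

def now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class Finding:
    title: str
    reported_at: datetime = field(default_factory=now)
    fixed_at: datetime | None = None
    retested_at: datetime | None = None
    retest_passed: bool | None = None

    def mark_fixed(self) -> None:
        self.fixed_at = now()

    def retest(self, passed: bool) -> None:
        self.retested_at = now()
        self.retest_passed = passed

    def time_to_fix(self):
        """Metric: how long from report to fix, if fixed yet."""
        return (self.fixed_at - self.reported_at) if self.fixed_at else None

if __name__ == "__main__":
    f = Finding("Domain admin reachable from unpatched jump host")
    f.mark_fixed()
    f.retest(passed=True)
    print(f.title, "| time to fix:", f.time_to_fix(), "| retest passed:", f.retest_passed)
```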
I think the important thing is for us, the purple team paradigm is really an abstraction.
So a lot of times people think of purple teaming as
you have this advanced red team that's going to sit down and maybe even in the same room as the
blue team, and they're going to just start running through things, which is still good. And that's
still, you know, that's an important exercise. But abstracting the concept out to be much more collaborative at the general level from the red team, being able to be anybody that's conducting any kind of assessment that identifies a security issue, that's one of the key things that we really try to hammer on.
Our thanks to Caleb Barlow from CynergisTek, Austin Scott from Dragos, and Dan DeCloss from our show sponsor, PlexTrac, for joining us and sharing their expertise.
CyberWire-X is a production of the CyberWire and is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity startups and technologies. Our coordinating producer is Jennifer Eiben, our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening.