CyberWire Daily - Pentest reporting and the remediation cycle: Why aren’t we making progress? [CyberWire-X]

Episode Date: October 9, 2022

The age-old battle between offensive and defensive security practitioners is most often played out in the penetration testing cycle. Pentesters ask, “Is it our fault if they don’t fix things?” while defenders drown in a sea of unprioritized findings and legacy issues wondering where to even start. But the real battle shouldn’t be between the teams; it should be against the real adversaries. So why do pentesters routinely come back and find the same things they reported a year ago? Do the defenders just not care or does the onus fall on the report? Everyone really wants the same thing: better security. To get there, the primary communication tool between consultant and client, offensive and defensive teams — the pentest report — must be consumable and actionable and tailored to the audience who receives it. In the first half of this episode of Cyberwire-X, the CyberWire's CSO, Chief Analyst, and Senior Fellow, Rick Howard, is joined by Hash Table members Amanda Fennell, the CIO and CSO of Relativity, and William MacMillan, the SVP of Security Product and Program Management at Salesforce. In the second half of the episode, Dan DeCloss, the Founder and CEO of episode sponsor PlexTrac, joins Dave Bittner to discuss the politics around pentest reporting and how better reports can support real progress. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to the Cyber Wire Network, powered by N2K. Hey, everyone. Welcome to Cyber Wire X, a series of specials where we highlight important security topics affecting security professionals worldwide. I'm Rick Howard, the Chief Security Officer, Chief Analyst, and Senior Fellow at the Cyber Wire. And today's episode is called Pen Test Reporting and the Remediation Cycle: Why Aren't We Making Progress? A program note: each Cyber Wire X special features two segments. In the first part, we'll hear from a couple of industry experts on the topic at hand.
Starting point is 00:00:48 And in the second part, we'll hear from our show's sponsor for their point of view. And since I brought it up, here's a word from today's sponsor, PlexTrac. The cyber war is never ending. PlexTrac, the proactive security management platform, helps teams win the right battles by boosting efficiency and effectiveness and cutting reporting time in half. PlexTrac clients report an average 20% time savings and 30% increase in efficiency.
Starting point is 00:01:24 PlexTrack streamlines and automates workflows through the full cybersecurity lifecycle. Thank you. A library of finding write-ups and custom templating facilitate efficient, consistent reporting. Remediation tracking ensures measurable progress. All in all, PlexTrack provides a single source of truth for all stakeholders. PlexTrack can help your team aggregate your data, gain visibility into your security posture, and track progress over time, assuring your organization is always prepared for the next threat. Visit Plextrack.com slash TheCyberWire to learn more. And we thank Plextrack for sponsoring our show. I'm joined by Amanda Fennell, the CIO and CSO of Relativity.
Starting point is 00:02:26 She's also the host of her own podcast called Security Sandbox right here at the Cyber Wire. Amanda, thanks for coming on the show. I'm happy to be here. Thanks, Rick. So we're talking about pin testing today. And for those aren't familiar, can you take a stab at telling everybody what a pin test is? Oh, it's kicking the tires at something. Yeah, that's true. It's when you want to know if there's anything
Starting point is 00:02:50 that might be vulnerable that you should have caught, but you haven't caught. And there's two ways to approach this, right? So a penetration test, we want to do some kind of a test either on your process or your people or your tech to determine if it's A, running as should be, running as intended, but B, running as should be, running as
Starting point is 00:03:05 intended, but B, anything we should have been doing differently or configured differently to be more secure and lessen your risk. And there's two types of CISOs I've noticed out there. The kind that really don't want you to find anything because it proves how great they are in their program. Or the ones that get mad if you don't find anything because they figure you're not good enough to find their stuff. There has to be something, right? And I'm the latter. I think or the ones that get mad if you don't find anything because they figure you're not good enough to find their stuff. Like there has to be something, right? And I'm the latter.
Starting point is 00:03:29 I think you should be finding something always, but yes. I think you're right. I'm in that latter camp too because I don't really reach for a generic pen test as one of my go-to first things because they should be able to find something, right? I mean, that's just the way it is. Now, if I have a very specific thing in mind,
Starting point is 00:03:47 like I spent some money or we improved some feature of our defenses, I might direct a pen test at that to see if it was actually successful. I don't know. What do you think about that? It's a good question. And it's actually one of the things that I love the most
Starting point is 00:04:00 is whenever I do pen testing, because that's your next question, do we do pen testing? We do. When we do that, we bypass a couple things that after a few years, I know you can get past a turnstile. I don't need you to do that. So I will give you the credential, like the badge and say, okay, can you just get to work now? I don't need you to do the physical too much because that part you get used to after a while. But I really need to not waste my money for pen testing on you getting through a turnstile. I need to know what happens when you get to a laptop. Yeah, I don't want to
Starting point is 00:04:33 know if you can, because like you and I were just talking about, you better be able to. That's kind of your job. But what I really want to know is, can you get to this thing that I'm trying to protect? I guess we're doing most pen testers are contracted, right? Or do you have your own internal team? We have both. And that was actually because we had some rioting internally by the teams that if they did not get to test things. And the first part is we're a software company too, right?
Starting point is 00:04:58 So we have to naturally pen test our code. It can't go to production unless we've tried to test it and make sure that there's nothing wrong, the 10 OASP, et cetera, nothing super vulnerable. So pen testing is naturally a part of the product security realm. But specific to what we're talking about here, the cyber side, we had people on the team that were like, if you don't let me get those skills under my hat, I'm not going to want to stay here. And I said, a pen test team is born. Here we go. So that's how we came about is to make sure the talent stayed happy. Well, you guys are a software house. And what you just described before is a little bit beyond what
Starting point is 00:05:35 most CISOs deal with. Most of us don't deal with software penetration testing. Tell us what the difference is. What's the difference between the typical thing that was invented back in the 70s about testing your network versus what you guys are doing today in a modern software development house? I always loved that movie, Sneakers. Have you seen that? I was just watching it. Just watching it this last weekend. The part when he has the flowers and he walks over and distracts the guy. He's like, I got to get these delivered right now. The social networking that's going on there and the hacking. For those of you that don't know, the 1992 movie Sneakers is one of the all-time great hacker movies. And by the way, the movie was written by the same guys who wrote another all-time great hacker movie, War Games.
Starting point is 00:06:19 Lawrence Lasker and Walter Parks. In this scene that Amanda was talking about, Robert Redford, probably best known to this audience for Avengers Endgame and Captain America Winter Soldier, and River Phoenix, probably best known for Indiana Jones and the Last Crusade, he played the young Indiana Jones, are trying to get past a security guard and an electronic lock. Two factors. The scene opens with River Phoenix dressed as a delivery man standing in front of the security guard with a stack of Drano boxes, claiming that he has a work order to deliver them to the top floor. The security guard doesn't have him on the access list and is having
Starting point is 00:06:55 none of it. The two get into a heated argument. That's when Redford walks up to the counter with some lame story about his wife delivering the birthday cake and the balloons. Listen, I'm sorry. They didn't have anything on record. Hold on a second. I got this. Did my wife drop the cake off for me? What cake? There's no cake back there. Surprise party for Marsha on the second floor. She was supposed to drop a cake off. I don't know any better.
Starting point is 00:07:17 There she is. Late as usual. Okay, well, it states right here very clearly that I am to deliver 36 boxes of liquid drainage to this here address. I don't care what that says. You're not on the list. You can't get in. I do have a problem with it. You can't get in. I might lose my job.
Starting point is 00:07:29 That's not my problem, kid. I'll beat it, all right? That's when Reffert walks past the guard up to the electronic door that's locked, carrying a bundle of helium balloons and a birthday cake box, and starts yelling at the guard to let him in. I can't reach my car. I can't reach my car. I can't reach my car. Wait one minute.
Starting point is 00:07:48 Is the buzzer okay? We're late for the party on the second floor. Push the goddamn buzzer, will you? Thanks. I love this movie. And if you're a security professional and haven't seen it yet, consider this your homework assignment. It's a cybersecurity classic.
Starting point is 00:08:04 So, yeah, there are two parts to this. There is the general way of pen testing that we're used to. And it's the really exciting thing where we all pop out those lock picking kits that we're excited to have. And our attempt here is to simulate in some way an attack that would use a tool, a technique, or a process that an attacker would use to exploit a weakness. And that could be that turnstile. That could be the person who's stressed out and going to buzz you through with the badge or et cetera. And then what? If you get to that laptop and can you hack
Starting point is 00:08:34 it? Is it bricked? Is there endpoint protection? So on. Is it going to be seen on the network when somebody does get into it? Is there noise, traffic, et cetera? Is it noisy? Is there lateral movement? Those are normal pen testing. I think what most people think of. Ours is the same, but also includes that product side. So when the code is created, there's an entire application security team that they
Starting point is 00:08:57 think their job is, quote-unquote, to break things. That's what they put in their Slack, what I do, I break things. It's so sad. I'm like, you know there's I do, I break things. It's so, so sad. I'm like, you know, there's more to your job, right? Not just that. You should fix things too. But they do the same exact thing.
Starting point is 00:09:13 We just do it in code. And so we use the same tools, techniques, and processes that people who are doing things for nefarious purposes, but we use it against our code. And we're constantly scanning with that dynamic and static analysis and so on. Anything vulnerable, anything we should be fixing, and then every once in a while on that code, we have a person who tries to break something. Amanda mentioned the OWASP top 10. Let me explain what that is. Back in 2003, Dave Wickers and Jeff Williams, working for Aspect Security at the time, a software consultancy company,
Starting point is 00:09:45 published an education piece on the top software security coding issues of the day. That eventually turned into the OWASP Top 10, a reference document describing the most critical security concerns for web applications. Today, OWASP is an international volunteer team of security professionals led by the foundation executive director and top 10 project leader, Andrew Vanderstock. OWASP is dedicated to enabling organizations to develop, purchase, and maintain applications and APIs that can be trusted. Today, there are tens of thousands of members in hundreds of chapters worldwide. So just like a network penetration tester will have a bag of tricks that they will try to, you know, walk themselves through the intrusion kill chain, let's say.
Starting point is 00:10:30 When you're doing software penetration testing, you have not an equivalent set, but a set of tools that you're talking about. Like you were talking about using the OWASP top 10 to check your code. Is that what you're talking about there? Yeah, we use the same things just differently. And so we scan just like what you would do with like a web application, right? First, you have to have some kind of connection to it, then you scan it. And I'm making all these movements on video that's not going to translate, right? This is the Italian background of me coming out here to talk with my hands. So we do the same thing.
Starting point is 00:11:07 We first have to make sure we have a connection to something. So if we have code before it's in production, it's in any kind of analysis stage. So we take that code, we connect with it, we touch with it, we feel it out, how big is the bread box, how long is it? So relativity as a product is like three, four million lines of code. I can't exploit that much code all the time if I wanted to pen test it. So we have to do what you mentioned earlier, focusing on the parts that have the most access to the crown jewels,
Starting point is 00:11:36 authentication, any kind of identification access management. Those are the parts that we spend more times doing our pen testing against to say, okay, what happens if I did authenticate? What happens if I did get access to this? What could I get? And then when could I exfiltrate? So it's super similar to the network style. It just happens to also be that we're protecting crown jewels, which includes our code. William McMillan is the Senior Vice President of Security Product and Program Management at Salesforce and one of our most recent additions to the CyberWire's subject matter experts who Macmillan is the Senior Vice President of Security Product and Program Management at Salesforce, and one of our most recent additions to the CyberWire's subject matter experts who routinely visit us here at the Hash Table. I asked him how Salesforce uses pen tests to protect their
Starting point is 00:12:15 enterprise. We consider it to be just one part of a broad spectrum of activities focused on securing our customers' data, but we do think it's an important one. We use a wide variety of offensive security capabilities, including both in-house and third-party pen testing. These are well-managed programs that we use deliberately. In other words, we find them to be of significant value and we resource them accordingly. Pen testing absolutely helps our company and our customers meet various compliance obligations, but we view pentesting as much more than checking a box. It adds unique value to our overall security program in numerous ways. The key to using pentesting resources wisely, in my opinion, is to make sure they're focused on the front end and that the results that come out of the back end of the process are
Starting point is 00:13:04 timely, relevant, and easy to quickly operational come out of the back end of the process are timely, relevant, and easy to quickly operationalize as a regular matter of course. In other words, if pen test reports just sit around in a pile or become focused solely on meeting compliance requirements, the value diminishes dramatically. One area that stands out a bit in my mind is that we do a lot of pen testing around M&A activity. As an innovative and fast-growing company, it's really important for us to make sure we understand any sort of cyber risk we might acquire. We operationalize the results of this kind of pen testing in a number of ways throughout the M&A cycle. One thing that Salesforce includes in its penetration team operations is a serious bug bounty program.
Starting point is 00:13:46 In other words, they pay somebody else to find the bugs in their code. We also have a highly successful bug bounty program, which is a great way for us to tap into a broad talent pool. Our program has been underway for several years now, and we pay out millions of dollars on an annual basis. We really feel we derive tremendous value from this program. Well, let's go back to the typical network stuff. Do you distinguish between typical pin testing and say red teaming or purple teaming? Is that a different movement arm or is it all the same? It's became like the big movement and the hot buzzword of purple teaming. And we actually, we did like a whole, I guess it would be like a workshop with our entire team
Starting point is 00:14:27 and explained what blue team and what red team was in terms of are you offense or defense? This rolls right into my fantasy football style right now, by the way. So offense and defense. For me, it's Dungeons and Dragons, okay? But go ahead, Laura, I'm with you. I'm also good at Dungeons and Dragons.
Starting point is 00:14:44 I am chaotic evil, so we can do this. So we had blue and we had red, and then we had to shock and awe everyone on the team to explain purple, the blend of both where you have to be able to play on both sides, right? So you do offense and defense, and you have to have the skill sets for both. So we maintain that you should be purple, that all people should have the capability to be purple. With potentially a few people who are straight blue, our incident response team, right, they really feel strongly they're straight blue. And then straight red, our APSEC people I mentioned, they feel really confident that they're straight red. But the majority of our team falls into the bell curve of being purple. And they're a little bit of both. They have to be able to defend. They have to be able to also attack. Well, I take it one step further too, right? And when I do purple team exercises, maybe we should just back up and explain what those three colors mean. You said it before, but blue team means what? Blue team, I'm going to defend something. I'm going to make sure I know when somebody does something they shouldn't be
Starting point is 00:15:41 doing, I can see it and I can respond to it. So this is your security operations center, your intel analyst. They're watching all the telemetry and they're trying to figure out if something's going on. That's the blue team, right? Yes. I argue that about that intel one, but yes, it's fine. Yeah. I would debate that one with you. We'll come back to that.
Starting point is 00:16:00 All right. The red teamers are the team trying to penetrate. They're trying to get in through some way, correct? That's what a red team is. That's what I'm with. I'm with you on that one. So a purple team then is when you combine the two exercises, I think, right? So the red team tries, you know, something along the kill chain. They try 10 or 15 steps. And if they are successful, they go to the blue team and say, this is what we did. What did you see, right? And what did you do once you saw that, right? And so it's a learning exercise for both sides of the defense offensive situation. That's where the learning happens. That's what I think. What do
Starting point is 00:16:36 you think? Yeah, but it can be, it's such a difficult thing. And I put this as this caveat, okay? So the common goal of we want to improve security, I believe while people think this is the purple team goal, I think that's everybody's team goal. You should all be wanting to improve the posture of your organization. But the thing I'm most careful about is the adversarial relationship. I hate that moment when the blue team has to sit there on a call and get the readout from the red team. And it becomes, I saw that, though. Like, I don't want to, look, I'm not your mom. I don't want to argue about this.
Starting point is 00:17:12 Yeah. So you have to be, you're saying you have to be very careful not to create that animosity between the two teams. They should be purple teaming. It's together they're doing that. And that's why I focus on the purple with, like, maybe only one person at end spectrums. And the together they're doing that. And that's why I focus on the purple with like maybe only one person at N Spectrums. And the reason why is just that I just, it's the same thing as like punitive anything. It's never going to be great feelings for human beings. And I say this because I've been the person in incident response that had to hear the readout
Starting point is 00:17:40 from a pen tester contractor and they were adamant that they had caught something. And I was like, no, no, you didn't. I saw you. And it just, it became adversarial. Well, I take red teaming one step further too. I don't want, I'm not going to pay for a generic pen test, a red teamer. If I want to hire somebody or have my own team do it, I'm saying, I need you to emulate one of the known adversaries that we know are going to come after us. And so instead of them making it up on the fly, I want them to emulate, say, Cozy Bear or Fancy Bear or something like that. So at the end of it, I can say, you know, we're protected pretty much against that kind of adversary. Would you take it that far at relativity? That is why I argue about the threat intelligence team being on blue. That's exactly my point, actually. Because you should always know your threat modeling
Starting point is 00:18:30 and your risk modeling every year. Your threat intel team should be telling you these are your top adversaries, whether it's commodity or not. These are the top risks that we have and so on. And so typically there's one advanced threat out there. And I'd say, so any of the bears are a good one. The bears are typically, any of the bears, you know, or, you know, any, any panda, like we're, we got a lot of stuff going on in the activity in the realm here. But some of those TTPs, they kind of cross around all of them. They all have similar ones. They're, they're hodgepodge
Starting point is 00:18:59 from each other. But I would say that that's exactly where the threat intelligence team comes in to shed light. Hey, these are our big adversaries that could come after us. You should be testing on these. And so that's why I think they kind of are a little bit of both. They're going to tell you what to go after and how to do it as an adversary, but they're going to also be able to educate the blue team about how they should defend or see it or see the activity. So I think it's fair to say that you're an advocate for penetration testing, but everything can be improved.
Starting point is 00:19:29 How would you improve it? You've done this for a while. What would make it better for you to use? How would it make it more practical or more useful? You know, besides you stealing my thunder about using relevant TTPs and threat actors, I think about being, you know, that would be the big one. And I think that we probably just need to make sure that we're doing some kind of, I don't know, I don't want to say the word homogenization, but like all reports don't look the same. And so some amount of standardization could go a long way in this industry. I would really love to see something
Starting point is 00:20:03 like that in the years to come. And so that's one of the reasons why you keep the same contractors potentially. But if you have these teams inside, there should be something that you're seeing as an annual, like this is the report, this is what we're getting and stuff, and that stuff's great. This is also why we sometimes change our external pen testers is because you got too cozy, not to be cozy bear, but you got too cozy. You know, you already know how to get through everything. I need to see somebody who's never seen it before. So I think some level of standardization could go a long way to getting people to understand some real value from the contract side and on the internal side. So I'm talking like a cybersecurity framework kind of thing, but for pen testing. Yeah, I agree. Like
Starting point is 00:20:44 we were talking about at the beginning of the show, it's not enough that you got in, okay? Did you exercise the thing I was trying to, you know, the thing I was trying to exercise and how well did we do? And the other thing I would ask for too is relatively, because if you're going to do this with an outside contractor, you know, compared to other organizations, how are we doing, right? Or what are those other organizations doing that make them better than us? I would love that part of their report. Yeah, that is a really good point.
Starting point is 00:21:13 So I'm going to steal that in case anybody asks me that question again. I'm going to say and, but yeah, I think that would be awesome. It's always the question, how are we doing comparatively? It's a wonderful one. Well, this is good stuff, Amanda, but we're going to have to leave it there. That's Amanda Fennell. She's the CIO and CSO of Relativity
Starting point is 00:21:31 and a host of Security Sandbox. Amanda, thank you very much. Thank you, sir. Next up is Dave Bittner's conversation with Dan DeClos, the founder and CEO of PlexDirect, our show's sponsor. So let's start by getting a little bit of the backstory here. I mean, in terms of the history of people gathering security reports and pen testing and all that sort of thing, can you give us a little bit of the legacy way that people used to deal with this and kind
Starting point is 00:22:12 of what led us to where we find ourselves today? You know, the legacy way is still a very popular paradigm, right? And that's where you will go and you will do an assessment. My experience was penetration testing specifically, where you're spending a lot of time documenting evidence, collecting screenshots and trying to put them into the right format and using different style templates. As a pen tester, there's never a shortage of work. I had a day job, but then I would also potentially be doing side gigs and moonlighting for different folks.
Starting point is 00:22:49 So you're always exposed to everybody's different reporting style and templates. So you're spending a lot of time just documenting the findings. And my experience obviously was pen testing, but this could be for any kind of security assessment or security audit in general, right? Where you're going and you're doing this assessment or even like an incident response exercise where you're doing all this work and then you're collecting that evidence into a document. And so you're spending a lot of time just writing the document and trying to stay consistent and then delivering it with
Starting point is 00:23:19 the kind of this, you're either doing it via a secure file share or just having different ways of trying to encrypt it and password protect it and having just lots of, I would say, friction around that whole process of getting the document together, getting it reported accurately, getting it reviewed and correct, then getting it into the hands of the people
Starting point is 00:23:45 that need to do something with it. And then from there is really where I started to feel a lot more pain after having been on the front end of it, where the back end is like, what am I supposed to do with this potentially 300-page document? And so what people end up doing is they tend to copy and paste the relevant information that they feel is relevant out of the report into some other form of a ticketing system. And that's really where some of the breakdown starts to happen as well,
Starting point is 00:24:12 is that, hey, there was all this time spent on this report, and then a small percentage of it actually makes it somewhere where it can be actionable. And so I would say that's kind of the legacy issue and still a very popular paradigm today, even in 2022. I mean, to what degree are we dealing with just the basic reality that folks who do pen testing are very good at pen testing, but they may not be the best graphic designers in the world? Yeah, yeah. No, you're exactly right. I mean, I've definitely worked for firms where they've actually hired outside marketing agencies to put together a style guide and a template for them.
Starting point is 00:24:53 So they're not, you know, and so these folks recognize that this isn't my area of expertise. And I think a lot of people, some people will put kind put their stake in the ground that this is their secret sauce of how they write up their reports and things like that. But I think most people recognize that, hey, I'm getting paid to utilize the skills
Starting point is 00:25:16 and techniques that I know of how to attack a network or my security knowledge and experience and how I exude that into the organization that I'm doing testing for rather than the way the document looks. Not to say that that's not important. It is important to be able to convey information correctly. But that's so much more a factor of what did you actually do and what do you know and what are you actually being paid to do?
Starting point is 00:25:48 I suppose there's a risk of it sort of being like a game of telephone also where the number of times that something is translated by one person to another person or reworded or there could be a loss of clarity or a loss of emphasis, those sorts of things. Yeah, most definitely. That's why we get into this situation where if companies are getting, say they're going for their SOC 2 certification or potentially even FedRAMP or something like that, they get asked, show us a copy of your latest pen test report to be able to at least say, of your latest pen test report to be able to at least say, here's what was originally reported versus what did you actually do with these results? And I experienced this. There was the notion of, I'd come back and rewrite the same report
Starting point is 00:26:37 because nothing got done. And that could be a variety of factors. It got lost, it got put into a spreadsheet that that person that was doing that work is now no longer at the organization. So there's a variety of factors that could play into that, but there's definitely a loss of fidelity of that telephone kind of game where,
Starting point is 00:26:55 well, I thought this one was important, but the pen tester said it was really important. I didn't feel like that, so I'm going to move it into this low informational type status where nobody's going to look at it. So what is the paradigm that you're recommending here? How do you suppose people can come at this in a better way? Yeah, yeah. I mean, I think continuing to move away from the notion
Starting point is 00:27:20 that we need to, basically, and this is why I started PlexTrack, was let's start moving away from the document is the final deliverable and let's get away from that as the form of delivery where we can have a dynamic platform that facilitates better reporting and more consistent reporting.
Starting point is 00:27:42 The document is still there, it's an artifact of the engagement, So it's still that point in time, but it really facilitates deeper collaboration, not only from the teams that are doing the testing and the reporting, speeds up that process, but also the collaboration between the people that are responsible for fixing these issues and being able to say, what is important for our organization? Can I provide more context and then have visibility into the actual remediation of these issues?
Starting point is 00:28:09 Because at the end of the day, we're all on the same mission. We're all on the same team as to reduce risk to the organization. How we categorize that, how we prioritize it is really important. And that's really what the crux of the matter is, is being able to have deeper visibility, better collaboration so that you can actually show progress. Can you walk me through a potential use case here? I mean, rather than plopping down on a desk, as you say, a 300-page report with everything that I've found, it sounds to me like what you're suggesting is much more dynamic and fluid.
Starting point is 00:28:46 Yeah, yeah. I think probably a good use case, outside of just being able to write up a full report, is an important scenario that happens a lot when we're talking about a security assessment of any type. But we'll talk about a penetration test where during the course of any type, but we'll talk about a penetration test where during the course of the engagement, those engagements can be scoped anywhere from a few days to several months. So as they're doing their assessment, they may come across something that is actually very
Starting point is 00:29:16 critical that they were like, we need to report this right away. That paradigm would actually take quite a bit of time to get done because they don't have the final report. They got to get the information into the hands of the people that should fix it right away. So what I've actually seen happen is people document things in a text document, encrypt it with the screenshots in either a zip file or put it on a Dropbox or something so that there is an escalated way
Starting point is 00:29:44 to get that information into the people's hands. And then people have to go grab it, they have to find out where do I put this. And so that is a very cumbersome process just to get a highly critical vulnerability reported during the course of an engagement.
Starting point is 00:29:59 So a better solution would be, hey, I'm just going to publish this finding in this platform that says, I've already written up everything because a lot of testers like to report as they go anyways. They're just documenting their work. So I'm going to publish this because this is really important for the end user
Starting point is 00:30:18 to go identify and resolve. And so that speeds up that whole process because it's just right there. So that's kind of a notion of, or a scenario of how you would not be so focused on a document because then you get into these binds at time of like, well, I don't have the full document.
Starting point is 00:30:37 And the engagement's not supposed to be done for another three weeks or whatever, but I really need to make sure they have this information. Right, and first it has to go out to the design team and all that stuff. Yeah, exactly. What about for the end user themselves, the people who are consuming these reports? How does it change the way that they approach this?
Starting point is 00:30:57 Yeah, so I think it allows them to have deeper visibility into all of the issues as well as better collaboration, not only with the people doing the testing, because they can have more clarity as to when these things are getting reported, how often, what are the techniques that the proactive side of the house is using, but also who's working on remediating these issues? That starts to get lost in that whole process. So having more visibility into where are we at in terms of the remediation cycle for these,
Starting point is 00:31:35 can I have more analytics on how often these types of issues are getting resolved? And it also facilitates a much more continuous mindset, which is where the industry is continuing to focus, is that you want to be doing this form of testing in a more continuous basis. And so a solution like this really helps facilitate that more continuous mindset, deeper visibility,
Starting point is 00:31:56 and truly being able to report up the chain what kind of progress is being made and what's working and what isn't. What about for folks who are working under regulatory regimes? I'm imagining the necessity to capture a moment in time to have that snapshot. Can you do that as well? Yeah, absolutely. I mean, I mean, and, you know, like specific to PlexTrack, you know, we have we have multiple
Starting point is 00:32:23 ways to do that. You can in your analytics dashboard or whatnot, you can filter based on specific what things look like in time. You can also do comparisons as to here was a report from last year versus this year. And then, like I said before, you can also have the original document of when the report was finalized, you can export that out as a standalone, here was this point in time. But even when you view the report in PlexTrack specifically,
Starting point is 00:32:53 you can see when this issue was reported and you could filter based on those dates and be able to identify what was going on at that point versus now. And how about security itself, the security of the documents themselves and being able to, you know, sling these things back and forth between the various folks who have an interest in it. I mean, I suppose that's a concern as well. Yeah, you know, I mean, it's, you know, I think this is, you know, the fewer documents
Starting point is 00:33:23 that are getting shared as attachments in emails, the better. And so this facilitates a much more secure environment to be able to access these results and pulling these results down if you need to, but you don't have to. So you have a more secure platform that you can be accessing these results and providing information on how we're fixing them
Starting point is 00:33:44 as opposed to just shipping around a document and then probably having another document that you're using for tracking it like a spreadsheet. What sort of feedback have you gotten from folks who were used to doing it the old way and now have moved over to this platform-based approach? How do they feel on the other side of it? You know, it's interesting. based approach? How do they feel on the other side of it? It's interesting. We get a ton of commentary simply like improved morale.
Starting point is 00:34:11 Because if you think about the security testing side of the house, they want to just be testing. They want to report. They're getting paid to write these things up and to deliver a good report, but they're saving so much time and energy on the mundane aspects of that process that they have more time to actually focus on the testing
Starting point is 00:34:37 and getting either their clients or their organization better at these security issues and resolving them. So one huge benefit is improved morale that we hear a lot. In addition to the efficiency gains where we are in an industry where it's hard to find good talent. And so it truly saves them time to be able to not have to go hire more people. They can just make their team more efficient. And that's one huge benefit on the testing side of the house.
Starting point is 00:35:07 And then another thing that we hear a lot about in terms of benefits is the people that are actually receiving these, they have a more centralized way to manage the results. And so it not only speeds up their process, but they have deeper visibility into who's reporting what, what are areas of the organization, whether it's an app or a business unit or a subnet, what are the areas that are actually higher risk? And it gives them a deeper sense of where they should be focusing and where they should be prioritizing their work.
Starting point is 00:35:39 So overall, not only does it provide deeper efficiencies and improvement in morale, but also better, you know, better visibility of their security posture and where they should be prioritizing the remediation cycle. We'd like to thank Dan DeClos, the founder and CEO of PlexDirect, Amanda Fennell, the CIO and CSO of Relativity, and William McMillan, the SVP of Security Product and Program Management
Starting point is 00:36:09 at Salesforce, for helping us get some clarity about pen testing and making it work for us. And we'd like to thank PlexDRAC for sponsoring the show. CyberWire X is a production of the CyberWire and is proudly produced in Maryland at the startup studios of DataTribe, where they are co-building the startup studios of DataTribe, where they
Starting point is 00:36:25 are co-building the next generation of cybersecurity startups and technologies. Our senior producer is Jennifer Eidman, our executive editor is Peter Kilpie, and on behalf of my colleague Dave Bittner, this is Rick Howard signing off. Thanks for listening.
