Risky Business #813 -- FFmpeg has a point

Episode Date: November 5, 2025

In this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including:

We love some good vulnerability reporting drama, this time FFmpeg's got beef with Google
OpenAI announces its Aardvark bug-gobbling system
Two US ransomware responders get arrested for… ransomware
Memento (née HackingTeam) CEO says: Sì, those are totally our tools getting snapped in Russia
Hackers help freight theft gangs steal shipments to resell
A second Jabber Zeus mastermind gets his comeuppance 15 years on

This week's episode is sponsored by Nucleus Security, who make a vulnerability information management system. Co-founder Scott Kuffer says that approaches for triaging vulnerabilities have started to fall apart, given there are just. So. Many. And they're all important!

This episode is also available on Youtube.

Show notes

vx-underground on X: "Yeah, so pretty much this entire drama thing is FFmpeg are a bunch of nerds…"
FFmpeg on X: "@DavidEGrayson It's someone's hobby project of an obscure 1990s decoder…"
Halvar Flake on X: "Given the extremely big role ffmpeg has played historically..."
thaddeus e. grugq on X: "Current drama: Plucky security researcher Google takes on volunteer open source behemoth FFmpeg."
Robert Graham on X: "Current status: There's a conflict between Google…"
Introducing Aardvark: OpenAI's agentic security researcher | OpenAI
Bugcrowd acquires Mayhem Security to advance AI-powered security testing | CyberScoop
Prosecutors allege incident response pros used ALPHV/BlackCat to commit string of ransomware attacks | CyberScoop
Former Trenchant Exec Sold Stolen Code to Russian Buyer Even After Learning that Other Code He Sold Was Being "Utilized" by Different Broker in South Korea
How an ex-L3Harris Trenchant boss stole and sold cyber exploits to Russia | TechCrunch
Operation Zero — A Zero-Day Vulnerability Platform
John Scott-Railton on X: "7/ There's a push to scale up America's offensive industry right now…"
CEO of spyware maker Memento Labs confirms one of its government customers was caught using its malware | TechCrunch
Exploiting Microsoft Teams: Impersonation and Spoofing Vulnerabilities Exposed
Microsoft Teams Vulnerabilities Uncovered
Cargo theft gets a boost from hackers using remote monitoring tools | The Record from Recorded Future News
Remote access, real cargo: cybercriminals targeting trucking and logistics | Proofpoint US
Alleged Conti ransomware gang affiliate appears in Tennessee court after Ireland extradition | The Record from Recorded Future News
Three suspected developers of Meduza Stealer malware arrested in Russia | The Record from Recorded Future News
Alleged Jabber Zeus Coder 'MrICQ' in U.S. Custody – Krebs on Security
Windows Server Update Service exploitation ensnares at least 50 victims | Cybersecurity Dive
Post by @paulschnack.bsky.social — Bluesky

Transcript
Starting point is 00:00:00 Hey everyone and welcome to Risky Business. My name's Patrick Gray. We're going to be hearing from Adam Boileau, and we'll be talking through all the week's news in just a moment, and then we'll be hearing from this week's sponsor, and this week's show is brought to you by Nucleus Security, who you have heard on the show over the last six or seven years. They make a platform that helps you ingest, normalize, and triage
Starting point is 00:00:26 vulnerability information in your organization. That can be anything that's coming out of your SAST right through to, like, stuff that's coming out of Tenable and whatnot. Like a master console for vulnerability information. And Nucleus's co-founder Scott Kuffer is joining us for this week's sponsor interview, and we're talking about how the whole approach of the last five years, which is to, like, just prioritize
Starting point is 00:00:49 which bugs you're going to fix, how that is sort of becoming insufficient these days. Because, you know, we did that in response to too many bugs being present in our environment, so we just focused on the high-priority ones. He's going to come along and argue that now there's too many high-priority ones to really keep up with as well, and we kind of need to
Starting point is 00:01:07 rethink that approach. We'll also talk about how AI is changing SaaS and whatnot which is something we're going to touch on in the news as well that one's coming up later but first up yeah Adam let's get into it and look a bit of terrific it felt really old school
Starting point is 00:01:23 old school Twitter infosec drama on X FFMPEG kicked off this huge debate in what's left of the Infosec community on Twitter when Google reported a bug or some bugs to FFMPEG that they discovered with their sort of deep-minded AI bug-finding stuff and FFMPEG were like, hey, submit a patch instead. Like, what are you doing? You know what I mean? We're a small volunteer-led organization. why are you doing this to us? Can't you be more helpful?
Starting point is 00:01:58 And, you know, the response from a lot of people in the security field was predictable where they were saying, it's not our job to patch your software, but then other people were saying, well, hang on, you know, Google is a absolutely gigantic, you know, hundreds of billions,
Starting point is 00:02:13 what is a trillion dollar company or something? You know, maybe they could be a little bit more helpful here. My question for you is, have you been following it and which side of the debate did you land on? I have seen it, it's spilled out beyond Twitter and into some of the other, you know, into blue sky and other places. So I have been following along with the drama. And we do love a good disclosure drama.
Starting point is 00:02:33 Like that's always fun, you know, vuln drama, good times. I guess my feeling is there are many, many ways to do open source software and many different communities with different priorities. And, you know, some open source projects, security is really important to them. I'm thinking, you know, like stuff that came out at the open BSD. world, for example, like OpenSSA, H or SSL, Open SSL, you know, for them, security is super important. They take their real seriously. It's kind of part of their deal.
Starting point is 00:03:01 You know, other projects just kind of like, you're there having a good time, and they're there for fun. You're there for community. They have other priorities. And for them, I can imagine that, you know, interacting with the modern security research community or the security world, especially in the AI environment, you know, probably could be a little frustrating. And I think, you know, in the end where I land was, you know, it's just a kind of.
Starting point is 00:03:23 kind of like you do you thing like you know if you don't want to receive bug reports for your open source software that's fine you can just say that on your bug you know on your how to report security issues page you're like eh just stick them in the bug track i like everything else with some projects that have done that uh and others you know take it a bit more seriously and you know if mpeg i think is an interesting case just because of you know they are such ubiquitous bit of video software and google has such a long history as a user of well i mean this is this is kind of what you are saying which is the that like, oh, well, you know, for projects where security is important,
Starting point is 00:03:56 like FFMBeg is absolutely everywhere and baked into all sorts of stuff. So bugs in it are a big deal, but we're in this situation where the people who are, you know, creating and maintaining this software, like it's not, as you say, it's not like OpenSH or something where, you know, they're thinking of security as being a key requirement. Yeah, exactly, right? And I mean, we are talking about in this case, you know, like really obscure video codex and other stuff
Starting point is 00:04:20 that Google's finding bugs in because, you know, Google's fuzzling infrastructure and AI bun hunting, whatever, you know, is capable of looking at the entire co-base and trying to find bugs. And, you know, it's going to improve the quality of everything. And, you know, as to whether security researchers
Starting point is 00:04:36 are good at writing patches, like, that's a whole other kind of conversation. Well, yeah. Look, look, that's, and that's been the response from a lot of people on the sort of security camp side of this, which is that well, you know, if you're a security researcher, you don't understand the context of everything in a project. And look, that's a fair
Starting point is 00:04:52 argument. But look, I, I, in broad terms, and this might surprise some people listening to this, actually. I think this is the first interesting disclosure debate we've had in like 10 years, like if I'm honest. But I think broadly, I kind of fall on the FFMPEG side of this a little bit more, which is, you know, if you want to do security research that's helpful, you know, just grabbing a bunch of bugs out of an AI model and dumping them onto busy people who don't get paid to fix them. I think is just not helpful, right? Like, now, does this mean that, you know, you would expect Google to write patches for, you know,
Starting point is 00:05:31 huge companies that have open source components? Like, no, of course not, right? But I think in this case, they could have done a little bit more to be helpful. Now, does that mean they need to write the patch? Maybe not. Does that mean they could, you know, work with them, grow some resources there? I don't know, man.
Starting point is 00:05:47 And you'd surely think, given the focus of like the DARPA AI challenge and whatever, which is not just about finding bugs, but about patching them, you would think, hey, maybe if you Google, you wait until that part of it is working properly before you push the button and start spitting out bugs. You know, wait till it can spit out patches as well. You know, surely that's coming soon. I mean, yeah, that is absolutely an avenue of research that people are going down. And I guess, you know, the argument against that is, like, those bugs are there already. People are already finding them. Like, even just by reporting them, you are improving the
Starting point is 00:06:20 situation, even if you're not providing a fully fixed or patch or whatever else. And, you know, I don't, you know, quite a bit of this I feel like is, you know, open source communities are communities, right? And when an outsider comes in, in this case, you know, Google is somewhat the outsider here and starts, you know, doing things in a way that's outside of the norm for that community, then, of course, they, you know, they get rejected. They, you know, there's this tension, there's friction in those communities. And, you know, it's not necessarily. the place for security researchers to have to understand all of those community dynamics, right? I mean, just showing up saying, like, here's a bug we found, you know, I think if you show up
Starting point is 00:06:59 and say, here is a 90-day disclosure timeline, which is what they do, which is what they do, right? That's kind of a different, because you're making demands of them, and you have no relationship with them on which to base those demands, Modulo-Ribble's relationship with FFM-PIC. But, I mean, like, in the general sense, if you show up and start making demands, then that's just kind of rude. And, you know, these communities are ultimately about good neighborness, like good neighborliness, right? And that's, you know, what open source is for. And so, you know, when you show up and, you know, disrespect the community by not following their guidelines, by not, you know, if they expect you to provide patches, not just bug reports or
Starting point is 00:07:34 whatever else, like you kind of show up with their house, you've got to play by their rules a little bit. And, you know, I think, you know, there's a lot of people in Google in deep sleep and you know in deep mind and big sleep the bug research you bet project zero like Google understands the cultural context of this but they're just trying to do it at scale and that has some some rough edges and I think you know ultimately the people at Google doing this work you know it's absolutely good faith work it's not you know a lot of open source projects have been burnt by really low value AI slop bug submissions and things by people who are trying to make a name for themselves you know just start well look I'm going to
Starting point is 00:08:12 just stop you there because I think just because someone thinks they're acting in good faith doesn't necessarily mean they are acting in good faith. And I think that's the problem here is that too many people on the security side of this say, well, look, we're helping. We're helping because we're pointing out your bad, dirty mistakes and we're going to rub your nose in your bad, dirty mistakes because we're so smart and we're helping. And I think, is that helping? We're not very helpful. That's not very helpful. Now, I also think every time this spills into a debate, people want to come up with hard and fast rules. And you and I were actually chatting about this yesterday,
Starting point is 00:08:45 just on a call we had, where it sort of breaks our little antipodean brains somewhat that the Americans want to turn everything into a flow chart and a rigid policy. Now, in certain circumstances, you want to have a crack at that, like the vulnerabilities, equities process within the intelligence community is a great example of where you do want a flow chart
Starting point is 00:09:04 and a bit of a rigid policy. But when it comes to stuff like this, you know, I think giving FFMPEG the same description, disclosure terms as like Fortinet? I don't know. It just feels a little bit dumb. I mean, Gruck put it best, I think, in a tweet that he posted that said the current
Starting point is 00:09:20 drama is, you know, plucky security researcher Google takes on volunteer open source behemoth, FF MPEG, right? I think it would, I think we would all benefit if we just used our brains and common sense a little bit when it comes to this stuff. And I would urge, you know, a lot of the AI companies now to really focus on that patching part, right? Because one of the beautiful things about AI is it should be possible to get AI to at least suggest some patches or come up with some guidance on how to fix these issues. Like that's one of the magical things about it. So why don't we just take a beat, try to get the technology to a point
Starting point is 00:09:59 where we're not just deluging people in bugs. And I'm not saying that's what Google's done to FFMPEG in this case. I think a lot of this is just grumpy FFMPEG Twitter account manager, right? Like I think a lot of that, there's a lot of that to this. But I do think there's something here where, you know, maybe we need to move on from some of these, you know, rigidly held beliefs about what does and does not constitute helping. Yeah. No, no, I agree. This is not a one-size-fits-all thing. And if you're reporting a bug to, say, curl, like, you know that Badger does an amazing job of managing that project and has done an amazing job of documenting its security properties and take security really, you know, seriously.
Starting point is 00:10:37 If you're reporting a bug to him, you're going to handle it differently than if you're reporting it to some, you know, someone's hobbyist, 1990s Dakota for an obscure video format, right? And, you know, that nuance, you know, does kind of get lost at scale. Unfortunately, those nuances are super important to communities. And I think you're right that, you know, the, like, you know, Google's 90-day,
Starting point is 00:10:58 like we are going to disclose bugs after 90 days. Like, that had some very good reasons why they went down that road, but those reasons are not about little open-source projects, right? They are about big corpse. They are about people whose incentives to bury bugs are different than open source projects where it is ultimately about showing up and being a good neighbor and a good community member. Yeah, I mean, I think the funniest one was Tavis Ormandy
Starting point is 00:11:19 like coming out of retirement basically to say, oh, well, you know, FFMPEG better fix this or they'll have to explain why one of their developers got owned with this bug and it's just like, oh, come on, man, come on. You know, and full props actually to Rob Graham, a rat of Rob who is who I don't agree with on everything, right? But he's a wonderful contrarian. It's good to have those people around. And he's currently rolling his sleeves up right now and working on a patch for the FMBeg issue, which I think is just a great example of like someone just going, okay, you know what, I'll do it. That's fine. I'll do it. That's kind of the boss move, right? It's like, you see this debate on Twitter. Like, fine, I'm going to go patch this stuff for you.
Starting point is 00:11:58 You know, I'll see you in a week with a, you know, a pull request, which, yeah, I can't argue with that. Good job. Yeah. Yeah, I mean, there's a time to argue and there's a time just to fire up a bugger, right? And Rob also managed to get into a, it managed to get featured by menswear guy on X this week as well. So hell of a week for Rob Graham. Hello to you too, if you're listening. And look, you know, staying on the theme of AI and bug hunting, open AI has launched, I think it's a beta at this point, but they've got their agentic security researcher that they've called Ardvark. Dave I tell is involved in that. It looks like it's pretty promising stuff. A different approach to the way Google's doing it where they're sort of using, you know, AI to drive other
Starting point is 00:12:42 approaches like fuzzling and whatever where open AI say that their model is much more a reasoning model that is capable of sort of understanding software rather than just throwing existing sort of fuzzing harnesses and whatnot at it. I don't know which is the better approach. I got no idea. But the point is there's a lot of work happening in this space and it is extremely promising. It's something that I mention in this week's sponsor interview as well, but I think it was around six months ago, a founder who I've worked with, I won't name him because I just haven't checked if I can name him. And I told you about this at the time. He was playing around with open source models and trying to get them to substitute for SAST, right? And it worked so well. He's like, I'm not going to
Starting point is 00:13:27 bother trying to raise or build anything around this because it took me a couple of days to put something together with existing models that worked so unbelievably well and so much better than the existing SaaS stuff. So his thoughts were twofold. First of all, there's no money to be made here for anyone outside the AI companies. And his second thought was SaaS is over, right? Which, I mean, you look at what these guys are doing and you sort of think there might be something to do that. Yeah, this, I mean, open air obviously is kind of a giant in terms of research into difficult AI problems and you know AI bug hunting and fixing and so on like you know is you know there are you know there's very real research to be done there and I know when Dave Atel who I used to
Starting point is 00:14:12 work for said he was signing up to go work at Open AI like I was curious to see what he was working on and now we've you know in the last few days we've seen seen that I believe it's in a private beta so you have to ask nicely and they will pick so we haven't really seen you know other people outside of opening I using it they're also their approach to about the sorts of bugs, their finding is different than, say, Google, where, you know, Google is tagging bugs and bug reports. They were found with, you know, Big Sleep or whatever it's called. OpenA are not publicizing the bugs that they are finding other than, you know, cherry picking a few here and there. They did find apparently an off by one in SSH, Open SSH, which, you know, that's a code base that they had a lot of eyes on it over the years.
Starting point is 00:14:51 Well, I'll just say, too, I did notice a conspicuous paragraph in there, in light of what's, of this FFMPEG drama, which is, I'll just read it here. It says, we recently upped out at our outbound coordinated disclosure policy, which takes a developer-friendly stance focused on collaboration and scalable impact rather than rigid disclosure timelines that can pressure developers. And this is, you know, this is something that I think is interesting. I feel like we do need to move on from the idea, you know, from the days where you would sort of disclose things in a punitive way, right? Especially to open source projects.
Starting point is 00:15:29 Look, when it comes to, like, you know, your bigger companies, Microsofts and Palo Alto's and whatever, fair enough. But I think, yeah, as I say, I don't think we can flowchart this anymore and we've got to start using our heads. And it looks like this is actually what Open AI is doing. Yeah, and, you know, it's great to see. And there are, you know, there are people involved there that do understand the social dynamics and all of the kind of complexities. And, you know, there are so many weird incentives around, you know, vulnerability disclosure. It's why we have these disclosure arguments so often. I mean, not so often on the show, but on the internet generally,
Starting point is 00:15:58 because there are so many people's equities and, you know, feelings and, you know, careers and all those kinds of things involved. But yeah, this is really interesting. I'm curious to see what people do with this model. I'm curious to see where Open AI, you know, kind of takes the work on it. And Dave Aetel's been doing a little bit of publicity around it. And he was talking about, you know, the difference between, you know, Google's quite fuzzing heavy, like sort of historical kind of approach to bug hunting.
Starting point is 00:16:25 had all the computer in the world. They had all of the PDF files in the world and they had all of the PDF reading software. Why not, you know, permute all of those? And they had this long history of a fuzzling as a successful approach. Whereas, and I think their AI models build on that legacy, whereas Open AI kind of starting from a clean slate. And Dave in particular has a very long, you know, he has a track record of building good hackers, that immunity, obviously NSA before that. Like, he knows how to turn, you know, aspiring punk kids. into great security researchers. And that's kind of, I imagine, what the process feels like
Starting point is 00:16:59 a little bit at Open AI, right? I've got, as you say, a bunch of midwits and turning them into legitimate hackers who can find interesting bugs and novel bug classes and all that kind of thing. Like, it's just super cool research. Yeah. And I do...
Starting point is 00:17:12 I have infinity work experience kids. Now teach them how to do static analysis, right? Yeah. And I, like, it also, you know, we only ever see the models, like, kind of on the outside with cost constraints in front. Like, you and I are not going to go span to $100,000 on Open AI compute time to ask questions, right?
Starting point is 00:17:29 So we get to the, you know, the dollar-grade answer. Whereas Dave A-Tales, like, we want to have a model where you can just, like, I want to spend half a million dollars with a computer, and I will get a half a million dollar bug out of it. Like that kind of like linear amount of brain you put in as manor quality you get out. And it must be amazing being on the inside of some of these research projects where you don't have that dollar constraint quite so, you know, it's not your personal credit card.
Starting point is 00:17:52 So you're a little bit more yolo with it. And that must be quite a fun. So anyway, I'm looking forward to seeing what people do once, you know, some people get into the beta and then start to play with it and talk about what they're finding. Yeah, and I'll just say, too, for those who aren't Ophai SAST, when I was talking about SaaS being dead earlier, that static application security testing, you know, basically software where you put your code in and it tries to find bugs where, you know, the tools now, like honestly, the coverage isn't perfect.
Starting point is 00:18:17 I mean, it's pretty good. Some of these tools are pretty good, but I think the AI stuff is going to be a lot better. Now, staying with the AI theme, we've actually just seen bug, crowd acquire a company called Mayhem Security and this is a company that won the 2016 DARPA Cyber Grand Challenge. Now, it's interesting though because they're like, yeah, they've acquired this company and all 11 employees are coming over to Bug Crowd. Then you look at the numbers and you realize that these guys have raised $36 million prior to their acquisition and have 11 employees. So there was a $21 million B round in 2022 and a Series A, 15 million Series A and
Starting point is 00:18:55 2019. So clearly things have not gone well for this company. Had a bit of a poke around. You can see if you looked at like LinkedIn trends and whatever, the number of people reporting to work there has just been in decline for years. However, the reason I find it interesting to talk about this is I think not only is AI an existential threat to the SAST companies, I would be very nervous as the shareholder of a bug bounty company right now. And the reason I say that is because when you look at the types of bugs that are most often reported through bug bounties, it's the sort of stuff that AI is really good at finding. Now, there's obviously the exceptions there. There's like the top 20 people who work across the bug bounty platforms
Starting point is 00:19:35 and find exotic hard stuff, right? Like you and I know people who do that, make a killing do that, and I don't think AI will necessarily replace them. But that's not most of the, that's not most of bug bounties, right? Most of bug bounties is, I mean, a lot of it, frankly, is labor arbitrage where you can get people in lower cost locations like India, you know, ripping through code, having a look at websites for simple issues. It's a scale thing. And AI does scale. So I think I absolutely understand why it's appealing for Bug Crowd to buy what is essentially a distressed asset here with expertise in this sort of AI stuff. But geez, I'm real curious to see how the next few years pan out for companies like Bug Crowd and Hacker 1 because I think they're up against
Starting point is 00:20:21 the wall at this point and maybe they'll be able to bring it back by pivoting into AI but then are they bug bounty companies or are they competing with you know others like Horizon 3 is a good example we had them in a snake oilist recently where they're their sort of AI based pen testing so you get this really crazy situation where you've got pen testing vulnerability scanning you know recon sort of like census Shodan style recon and bug Bounty It's all sort of collapsing, I think, into a single type of exposure detection product that's AI driven. It's amazing. It's really cool.
Starting point is 00:20:58 We're a few years into this AI stuff now. You're starting to see how it's unfolding. And it is going to change so much. It really is. I think people need to stop being as skeptical as they are. Yeah. And the more we start to see legitimate, you know, security industry specific uses really start to be. deployed why are they valued widely because like there's so much experimental stuff so many people
Starting point is 00:21:22 kind of working out in the research spaces that you know there's a lot of stuff that's you know trying fast and failing fast but as that kind of settles down and I think you're right the you know the sorts of work that you get out of bug bounty platforms probably is a pretty good candidate for being replaced by small machines so I yeah it's it's going to be interesting to see what happens and I'm sure there are also a great many other problems inside bug crowd as a company that a team of AI people could probably help them with. Like everybody's got interesting problems to solve there. So, yeah, we're going to see how it turns out for them.
Starting point is 00:21:55 Yeah, yeah. I mean, I will say, too, that, like, this doesn't mean, I think, that AI valuations at the moment are at all sane. And I think a lot of the predictions around exactly how useful it's going to be everywhere all the time. Like, we are way too early to know how they're going to bear up. But I'll give you an example. I told you earlier.
Starting point is 00:22:12 I was just really curious. I'm a car guy, right? So you might Google a car fact. And I was trying to look at the laptop. time around the Nuremberg ring of one car versus another car. And it told me car A was the fastest with a time of seven minutes 25.5. And car B was the slowest with a lap time of 725, which is a faster lap time. So it can't, you know, and this is just a Google Gemini result and whatever.
Starting point is 00:22:38 But, you know, it's not impressive. I think I may have mentioned on the show once, but I was Googling to solve a problem with my EV. And, you know, the Google AI told me to check the fuel pump, you know, and I just sort of think, yeah, this is not. You know, that's, you're getting the, you know, the answer that is proportional to the amount of money you paid for those Gemini results, which is nothing. Yeah. Right. You looked at some ad, you know, put your eyeballs on some air and that's how much value you got out of it. So, you know, I think, you know, it must just be wild to not be cost constrained and to see what the possibilities of the stuff are.
Starting point is 00:23:15 And the rest of us are stuck out here using the like, you know, pennies models and seeing trash results. Yeah, I mean, I think it's just, it's got so much potential to do so much cool stuff. I just don't know that it's that whole AGI generic artificial intelligence. Like, will we get there? I don't know. Maybe. Who knows, right? Who can say?
Starting point is 00:23:36 But are we there yet? Are we going to get there in six months? Like, no, that ain't happening. And meanwhile, you know, AI shares are at infinity valuation. So it's going to be an interesting few months. Let's just leave it at that. Now let's talk about malicious insiders. We're going to have a bit of an update on the Peter Williams situation.
Starting point is 00:23:53 We spoke about that last week. But wow, there's been a new DOJ indictment drop where like an incident responder and like a ransomware negotiator in the United States have been arrested for doing ransomware. And it looks like they're facing up to 50 years in federal prison, which I mean, play stupid games, win stupid. crisis. Yeah, exactly. And, you know, people who work in this industry, like, work in, you know, dealing with instant response and dealing with, you know, ransomware and stuff, like,
Starting point is 00:24:24 you would kind of hope, understand that ecosystem a little better than average and the amount of, you know, of finding out that you're going to do if you'd play this game, especially when you're in the US. Like, it just does not seem well thought through. And I guess it probably wasn't. But yeah, these guys are certainly not, you know, they're certainly looking at some pretty serious time and I think it was there's another unnamed so there's two employees and there's another third one that we haven't seen a name or other details of but there was another conspirator involved who was the one that actually set them up with uh like affiliate accounts with alfi blackout ransomware crew uh but like i can't i just can't imagine going to work working on
Starting point is 00:25:02 a ransomware case and thinking you know what i really wish i was in this game making that money too like instant responders get paid well like what are you doing what are you doing the total the total take here is like 1.3 mil because they got one payment uh across three victims which you know 1.3 between two people I mean like it's not worth
Starting point is 00:25:19 50 years you know like that risk it's just you know 1.3 mil seems to be the magic number too because that's how much Peter Williams got paid it's just like a cursed
Starting point is 00:25:28 just criminality number geez anyway so look we've got subsequent you know we've got some follow up reporting too on Peter Williams the trenchant leaker or you know
Starting point is 00:25:40 the spy mole whatever you want to call him Kim Zeta has written up a piece here saying that apparently he was still selling stuff after he knew that stuff that he'd passed on to this broker was being on the sold by a broker in South Korea, which is pretty crazy. We do know, too, I think it was just sort of being discussed at the time we recorded last week that the company that he sold these bugs to was Op Zero, which is a Russian broker, which
Starting point is 00:26:06 like, it's one of those ones that like advertisers on Twitter saying, we'll give you half a million dollars for these sort of bugs. And it really does look like he just emailed these guys and said, yeah, okay. I've got some bugs for you, which is just insane. Like, I do wonder, too, 2022 stock market did badly, crypto got wiped out. I wonder if he hit money trouble. Like, that's my personal pet theory, is that he hit some sort of absolutely colossal money trouble and needed cash quickly, but still nuts.
Starting point is 00:26:35 Lorenzo also has a write-up here at TechCrunch about how Peter Williams was able to steal this stuff. I mean, there's absolutely nothing surprising here. was the general manager which meant he had super user access to everything which is what a general manager does um i mean you know you you would have also been completely unsurprised by this i'm guessing yeah yeah yeah it was totally unsurprised and then the specific details of you know like using uh USB storage devices to move stuff in and out of air gap environments in there that's that's that's how it's done uh and so yeah you can't you know you can't reasonably expect a place like that to protect against as you say the general manager who's got access to everything yeah you put in controls
Starting point is 00:27:14 place that can manage that is not necessarily realistic you just got to trust people and assume that they're not going to go and you know sell your stuff out for pennies on the dollar i think we saw some numbers here on like what the value of the bugs that he sold were it was something like 35 million dollars worth well they say that's the loss so i don't know if that's the development cost of the exploits or the development cost plus the replacement cost or you know legal impairments like you got no idea but yeah that is the number that's been thrown around yeah and that you know at least it's You can compare to the mill and change, he actually managed to get out of the Russian export broker,
Starting point is 00:27:50 although apparently they had promised him more and he hadn't managed to actually get that through or maybe we're not seeing the full extent of the funds. He got shortchanged by some shady Russian brokers. Shocking, you know. I mean, that's unbelievable. Now, look, I also wanted to update you all because, you know, last week I said
Starting point is 00:28:07 that John Scott Raylton from Citizen Lab was questioning the utility of the private sector in this ecosystem. which I think I called it brain dead at the time. John actually got in touch and we wound up having a very long phone conversation. Actually on my Friday evening, we spoke for something like an hour and a half, very pleasant conversation. He says that's not his opinion. Forgive my confusion though, Adam, because I sent you a post of his and asked you,
Starting point is 00:28:34 do you think it's saying that? And you agree with me, which is it says, I'll just read it. His post says there's a push to scale up America's offensive industry right now, but this alleged betrayal raises an urgent question, can these profit maximizers reliably act as trusted stewards of long-term national security and just how well are they overseen and vetted? So look, I think there's an implication there
Starting point is 00:28:58 that a regular person reading that would say this guy doesn't think the private sector should be involved in developing these exploits, but he says, no, that's not what he means. So we did chat for a while. It was an interesting conversation, actually. He does seem really skeptical that this could have happened in the, you know, from a government agency, which I, you know, I don't see that at all. We've seen this happen in government agencies, Volt 7.
Starting point is 00:29:23 It was exactly this, you know. We saw it with shadow brokers as well. We saw it with Edward Snowden. Now, of course, you know, things, I'm guessing it's a little bit trickier to do that sort of thing now. But are you telling me if, like, Rob Joyce, you know, being a GM equivalent when he ran TAO, wanted to walk out with some exploits, he couldn't. Like, he absolutely could. And there's only so far you can go with securing a workforce in those sorts of environments before people start to resign. You know, if you're going to be cavity searching them every time they want to leave to go to the car park, they're going to quit.
Starting point is 00:29:56 They're not going to stick around, right? And I've heard of situations where people have quit in response to these sorts of changes in security environments at various places. Like, I really do see this trenchant leak as something more. more akin to a leak out of the intelligence community because they really do sort of operate as an extension of the IC. They're not like a paragon. They're not like some little shop. They are really a trusted player. You know, John expressed to me a lot of concern about, you know, a lot of rhetoric in the US at the moment about rapidly expanding the the off-sex space. I think there's going to be tricky to do that, to be honest, because there's already a pretty hard limit on the number of people
Starting point is 00:30:40 who can do this work. I also think there's some pretty good controls in there already. A lot of them are contractual. Like these companies sign pretty onerous contracts. Should those terms be moved into a regulation for the purposes of transparency? You know, should there be generic terms? I mean, maybe I don't think there's anything that's going to drive that to happen. I don't think people are going to, I don't think government's going to do it to make Citizen Lab feel better.
Starting point is 00:31:06 Like it's just not something that's going to do. that's going to happen and I don't know. I sort of feel like the off-sex sector now, you know, the real problem is, you know, an agency, say an agency like ICE in the US goes rogue, they can already do infinity damage just with the tools that are currently available. I don't know that the expansion of the sector is really a huge risk there. John thinks that there's a proliferation risk there. I think that's a reasonable thing to think.
Starting point is 00:31:33 But then you look at what happened to NSO when they behaved badly. Here's one thing John and I both agree on, which is that the Biden White House actually handled this quite well by taking a bad actor and just singling them out and absolutely going to town on them. And in fact, John has this terrific chart of the value of NSO bonds of their debt over time marked with like each event along the timeline
Starting point is 00:31:58 and you just saw it crater. So I think, you know, it would be a brave investor who would pump money into a company that's going to be Lucy Goosey with who it sells to. So as I say, it was an interesting conversation. I think he and I agree on most of this stuff. But I don't see them making much progress under the current admin. Let's put it that way.
Starting point is 00:32:23 That does seem pretty unlikely. I mean, if they want more, they're absolutely, if anything, they're going to have to relax who can do business in this kind of space and the kind of controls they face. And, you know, adding more vetting and more oversight. like that's not really what this administration is going to be all about so you know if they want more they're going to have to loosen it up and buy elsewhere and and you know accept more collateral damage more leaks more proliferation you know more bad stuff happening if you want just because like
Starting point is 00:32:50 it's a volume game right i mean there's always going to be people who for whatever personal reason you know ideological money whatever it is you know go off the rails and that's happened you know before cyber right i mean you know the ultra chains and all the other kind of like spy craft their stories, you know, same thing can happen here and, you know, trying to control that. It's kind of a mugged game. There is a degree of you just have to build this stuff, trust that people are going to do, everyone's going to do a good job, you know, put some sensible controls in, but, you know, they're not ever going to be 100%.
Starting point is 00:33:23 No, no, when it comes to leaks, no. When it comes to them proliferating this stuff into places where it shouldn't go, I mean, that's a thornier problem. And again, the Biden approach, the Biden White House approach seemed like a pretty good one because it didn't so much rely on regulation as setting norms. I think it will be an enduring thing at least for a while. I think even under this admin, you know, are you going to go and invest a bunch of money into a spyware company that's going to sell to everyone in the world? I mean, probably not, though.
Starting point is 00:33:50 No, because the government will change eventually, you would think, and you may as well be lighting your money on fire. So I think we're in a better place now, honestly, but, you know, it's always good that there are people out there who are concerned about this, like Citizen Lab and who are who are doing the work. I think that they serve a very important role, even if I'm not in lockstep with them on absolutely everything. Yeah, you've got to have people out on the edges of the debate to kind of move the centre point around, you know, and I think they do amazing work instead of the lab. So, you know, we don't have to see either way on absolutely everything. Yeah, now meanwhile, Memento Labs, which is the company born of the ashes of like hacking team,
Starting point is 00:34:29 we spoke about how Kasperski published a report where some of their stuff got snapped on Russian targets. They've come out now and confirmed that that was the case, which I think is kind of an odd step from a spyly company. But they've also blamed their customer for using some of their own old Windows malware, which they're like, we don't even support that stuff anymore. We don't make that. We do mobile. It's kind of funny. I mean, it's such a wild story. It's like we, you know, so much of this kind of world is so secretive and so quiet normally. And for the CEO, just come in and go, yeah, actually, yeah, that's totally our stuff is kind of, kind of unusual. And then the other little bit,
Starting point is 00:35:03 like you know we asked our customers to stop using this like you mean you don't have good controls over like licensing and you know like you don't know who you who's got this particular version of it like you don't have that kind of level or oversight of your customers like that seems a little you know a little yolo as well well this is this is where guys like john have a point isn't it you know this is where they have a point where you've got like you know you got your top tier operators who are in really restrictive government government contracts with like us agencies and whatever and everybody everybody knows where they can and can't say and like it's all very controlled and you know they need to be very rigorous security requirements in place
Starting point is 00:35:39 and you know again this isn't transparent stuff but people who are you know proximate to people in that business kind of know that this is how it works and then you got these other guys on the edge who don't have those sort of contracts who are just like selling it everywhere you know whatever like licensing you know are these the next the next cobalt strike beacon you know guys like who can say i also do the think like it's kind of a bold move to come out and say to out yourself as a vendor that's selling tooling to an adversary of Russia at the moment, right? So there's, you know, your stuff's popped up all over Russia
Starting point is 00:36:11 and then to come out and say, hey, that's ours. You're kind of drawing a big arrow to yourself at a time when, you know, Russia's going around, you know, Nova-chocking people or whatever else. Like, it just, I don't, I can't imagine he ran this past Corpcoms or, you know, past anyone else. Before he was like, hey, yeah, that was us. Because, like, I wouldn't do that.
Starting point is 00:36:32 my bugs being used against Russia successfully. I don't know that I would come out and say, hey, yeah, that was totally me. Crazy. Anyway, let's move on to some bread and butter infosec now. We have some research at a checkpoint, which looks at some impersonation and spoofing vulnerabilities in teams. I didn't really go over this one too closely, Adam, but I figured you did, so you can tell us all about it. Yeah, so this is some research checkpoint and we're doing like into teams and just kind of looking for the sorts of bugs that you would use if you were going to do fraud on teams or if you're going to do other like social engineering-y kind of things on teams. And we've seen plenty of examples of teams as a vector for social engineering people
Starting point is 00:37:11 into like wire transfers or resetting creds or whatever else. So you can communicate from outside a company into their team's environment and kind of confuse people as the fact that you're not an internal, you know, not an internal user. Anyway, they were looking for bugs where you can make it look like your name is different or make it look like the particular message has been one of the ones they had, you can edit a message and by like fudging the timestamps in the post request, you can edit a message without the edited little tag coming up so people know that their messages have been changed.
Starting point is 00:37:44 They had another one where like if you're in a direct message to chat with someone, it's the same code base as if you have a group chat. And group chats have kind of titles, you know, subjects or whatever else that you can put in the top of the, you know, of the chat. those also exist for direct chats but you can kind of change the title so at that point you can make a chat that looks like it's with somebody else
Starting point is 00:38:05 and then you can use that to impersonate your confusing victim with impersonation anyway they reported some of these bugs to Microsoft middle of last year Microsoft has now patched some of them out and I'm glad that someone is doing this research because Teams is a nasty, thickety, thorny mess and I'm glad I don't have to use it every day now but yeah that those kinds of things are legitimately
Starting point is 00:38:27 useful in quite certain circumstances so i'm glad someone's looking at well i mean this is just the world isn't it as it is today where microsoft you know azure is just a big mainframe and teams is just a big instance of irc that everybody uses and it's all the same server kind of you know it's just yeah yeah it's pretty wild like the corpse of Skype you know reincarnated his necromanced back to life with a bit of sharepoint bolton on the back it's just do not want oh my god God, that's just so horrible. I know. Reanimated corpse of, you know, pieces, limbs,
Starting point is 00:39:02 reanimated limbs of Skype glued to SharePoint. Woof. Very nasty. Now moving on, we're going to chat about some research out of ProofPoint, which looks at cyber-enabled freight theft. It's very interesting stuff. So basically the idea here is that hackers are getting into some of these, you know, freight brokerage.
Starting point is 00:39:27 logistics systems and really just offering to transport cargo figuring out where they can get a bid what's in what's in containers or whatever and figuring out how they can you know put themselves forward as a broker and then just turn up in a truck and pick up the container and then take it wherever they want um which is pretty interesting so funnily enough though the write-up here it's by uh ulla villadsen and selina larsen i actually contacted selina because it wasn't really clear that last mile of this whole thing it wasn't really clear how the attackers were getting their hands on the actual cargo. And Selena told me, look, this is actually how they think it's happening, right?
Starting point is 00:40:05 So all of these Trojans and, you know, remote access tools are sort of popping up through these logistics systems. And meanwhile, fraud is sort of skyrocketing. And there's been various Reddit threads and even some congressional testimony that makes proof point think that that's actually what's happening. Is they're infiltrating this system, figuring out how they can be the ones to pick up a certain amount of cargo, then turning up, you know, load it back on the truck and disappear into the night.
Starting point is 00:40:29 So very interesting. Yeah, yeah. Anytime we see a new mechanism for turning cyber into money, like that's always, you know, that's always a good time because, well, not a good time. It's always an interesting time because, you know, turning hacker skills into money is a thing that, you know, there's only so many ways to do it. And once you figure out one that you can scale up and use, then, you know, it tends to drive, you know, the economics then tends to drive the, you know, the cyber crime and the regular crime around it.
Starting point is 00:40:57 So, yeah, the idea that you can just, like, break into these particular organizations and that there is some way to end up getting valuable goods using a computer, which you can then sell. Yeah, it's good thinking. And it reminded me of, like, back in 2016, we reported on some, like, pirates around the, like, Corn of Africa that had broken into the management system for, like, a container shipping, you know, shipping container management firms. And we're using that to identify which ships had interesting cargo. and in the manifest where on the ship it was loaded and used that to kind of target their piracy more effectively.
Starting point is 00:41:32 So, yeah, I'm always here for innovation and crime. Cyber-enabled freight hijacking. Let's go. We're so back. Now a bit of Law & Order News. John Greig reports that a Conti ransomware gang affiliate, he's appeared in court in Tennessee after being extradited from Ireland.
Starting point is 00:41:53 He's a Ukrainian national, facing up to 25 years in prison, which I, man, I just think it's so crazy that Williams might get away with like 10 years-ish for stealing eight exploit chains from Trenchant. By the way, the rumor is, too, that the exposures there of like various things that, you know, how those bugs were being used. Like, it's a disaster, and he might get 10 years. And meanwhile, this ransomware, you know, Junior is looking at 25.
Starting point is 00:42:18 Just it's a funny old world. We've also got a story as a reporting out of Russia, Dorena Antunuch, who's based in Ukraine. has written up this operation where the Russian police detained three hackers who are suspected of developing and selling the Medusa Steeler malware. It's just sort of unusual to see this sort of law enforcement action in Russia, which I guess is why it's noteworthy. But the law and order story we're going to dive into this week is one, we're diving into it because Brian Krebs did it and it's always fun to go through his stories where he just lays everything out in such meticulous detail. talk to us about this Jabber's Use Coder, Mr. I.CQ, who is now in US custom. That kind of puts a timeline on it because ICQ is a name I haven't heard in a long time. So these, this was one of the guys behind Jabbers Use, which was a botnet slash like Trojan, I suppose you'd call it,
Starting point is 00:43:15 like in the browser Trojan from the early 2010s. And it was one of the real innovators in its world because it was the first. They were the first group to really do, like, man in the browser multi-factor orth hijack. And they built some plumbing based on the Zeus, the original Zeus Trojan, and they used Jabba, the communications protocol to kind of deliver these two-factor orth tokens into, you know, back to the criminals fast enough that they could then use them. And this is one of the guys that develop that particular piece of plumbing. So given how, like it's about 15 years ago now, finally seeing some justice,
Starting point is 00:43:53 I guess, you know, of all the things that have come out of Russia's war in Ukraine, one of them has been pushing a bunch of Ukrainian cyber criminals out to within reach of law enforcement. And this particular guy was picked up in Italy, now in the US, but was previously in Donetsk in Ukraine, which, of course, is in Russian hands at the moment. So, you know, the war has pushed, you know, a whole bunch of people who were previously insulated by that kind of Commonwealth of independent states bubble out into, into the war. the west and yeah i mean crebs has history with this crew one of his contacts had infiltrated their jabber system and was reading their chats and then crebs would go around and like notified people that they were about to have their banking details stolen or their accounts broken into by this crew and he was spending hours a day back in you know back in those days uh notifying people and so i think you know it's a pretty personal thing for brian to see you know this guy actually you know behind
Starting point is 00:44:51 pass and you know that's a fun a fun story and i bet brian's you know probably feels pretty good about it man yeah i i do think too that we could see the russians get their own one day like if there's look russia is a is a country that blows up every now and then politically right you know from the revolution to the collapse of the you know the end of communism there and like Putin's rise and whatever like it is constantly in a state of change and you do wonder like okay, say five, 10 years from now, there's an economic collapse, 15 years from now, say there's some sort of bailout required, man, that's going to come with some strings. And these guys are going to be in a lot of trouble, I think.
Starting point is 00:45:32 But that's a long time from now, I think. Well, but who knows? That's the wonderful thing. Who knows? Now, I just wanted to update everyone on the Windows, the WSOS stuff. Apparently, there's up to 50 victims of that. According to some work out of Sophos, we've linked through to a cybersecurity dive piece written by David Jones on that.
Starting point is 00:45:53 I did have some feedback from a listener, Paul Schnack on B-Sky, on Blue Sky, who said, I love the show, quick update. A lot of business networks, most question mark use Intune or third-party MDM for Windows updates, not W-Suss. I think, Paul, that's probably a little bit optimistic. Like, I think they probably should be using Intune for those sort of patches. But, you know, and then there's, yeah, Win Update for Business Now Auto Patch lets you create rings and groups with the binaries coming from Microsoft, not WSUS.
Starting point is 00:46:23 Hey, that's the, that's the right way to do it. I don't think it is the way that everybody's doing it. He also pointed out that WSUS was deprecated by Microsoft a year ago. I didn't know that, so thanks for letting us know. But, you know, we are seeing a bit of carnage out there with WSUS. Yeah, yeah, exactly. When the proof is kind of somewhat in the pudding there, right, that there are people getting getting owned and, you know, having a quick run around, you know, we talked where the
Starting point is 00:46:49 story originally broke about, you know, some of the things we're seeing on census. Like, there is quite a lot of stuff out there on the internet that really does look like it's still WSUS and, yeah, people are having a bad time. But, yeah, in tune, a lot of people struggle with deploying that. It's hard. It's hard. That's why device exists, which is an Australian company that sort of helps people manage it, right? Because, like, it's hard.
Starting point is 00:47:10 It's good plumbing. But, you know, they put in the pipes. They didn't put in the taps. I think it's the best way you can kind of describe in tune. I'm sure it's better. and I'm sure that for some of these, you know, Microsoft super admins, they find it pretty easy, but most people don't, as you point out. So, yeah.
Starting point is 00:47:25 We also had a YouTube comment from Cal Fear, F-E-H-E-R. So I don't know if that's the correct pronunciation there. I hope it is, Cal. He says, I think your recollection of the Kaminsky bug is a little off. UDP source port randomization was most certainly already available and was the default in bind at that time. Cal, that's actually what we said, mate. We said that there was some source port randomization, but it was insufficient. He continues, however, the port range was limited and it was a common configuration to actually fix the source port to a single value.
Starting point is 00:47:54 Didn't know that. You probably did, Adam. The limited variable range was an oopsie from ISC. The dodgy config was simply evidence that people do dumb things. The true fix, even back then, was DNSSEC. This was also reflected in the cons from IAC at the time, but back then DNSSEC was hard and not considered a reasonable mitigation in a short time frame. Not letting the crisis go to waste, there was finally some energy in the DNS community to improve DNSSEC usability. and improve the protocol as a whole.
Starting point is 00:48:21 Thanks, Dan, he said. Now, the reason I'm talking about this comment is it is hysterical that we received this comment that the true fix was DNS sec. A few days after, look, some people listening to this might have noticed that we had a very, very brief outage over the weekend. It was a planned outage.
Starting point is 00:48:38 Adam, why did we have an outage over the weekend? Well, I'm going to go ahead and tell you that it was a DNS sec. Well, it was that we disabled DNSSEC. We had to disable DNSC, and why did you tell people why we have to disable DNS? We have to disable DNSC because in order to be able to issue let's encrypt certs or other dynamically rolled domain validated certs, we need to be able to have our zone file exposed to changes
Starting point is 00:49:09 from inside our infrastructure provider, which is DigitalOcean. DigitalOcean doesn't support DNSSEC. We had to move our zone. from another provider which does support DNSSEC into DigitalOcean so that we can do automated certificate renewal because browsers are going to, you know, we're not going to be able to, you know, roll a cert once a year by hand anymore.
Starting point is 00:49:26 We have to do it, you know, automatically as this best practice. But to do that, of course, we had to turn off DNS sec. And the process of turning off DNS sec resulted in a, you know, couple two to three hour outage of our DNS. Only in some places. Only if you were the sort of person that's going through a resolver that does actually validate the DNS. which is not very many people numbers are something like 3% of the internet resolves
Starting point is 00:49:52 DNS in a way that actually validates DNS sec so in whatever time this bind you know the Kaminsky cache poisoning bug was which is what like mid 2000s it was certainly a lot less than 3% and now in 2025 there are very very very few people who actually will have their DNS lookups fail because the DNSSEC has misconfigured key material or something with it. So it's the only way we can get a let's encrypt certificate into our CDN
Starting point is 00:50:23 is to be doing this. Like it is the only way we went around and around on ways to do it. So it's just real funny that we get this comment about how the true solution and the future is DNSSEC like literally like the day or a day after like we had to disable DNS sec. We had to disable our encryption so that we
Starting point is 00:50:40 could get encryption basically. And you know I think really we're in a let's encrypt TLS based world and um you know i don't know man dns sec it sort of feels like Linux on the desktop you know what i mean like it's coming next year it's coming next year except that living from desktop is actually usable whereas DNS tech honestly is mostly just about causing outages so yeah yeah well mate that is actually it for the week's news thank you so much for joining me for a fascinating discussion as always and uh yeah look forward to doing it again next week yeah thanks much pat i will see you then I'm Claire aired and three times a week I deliver the biggest and best cyber security news from around the world in one snappy bulletin.
Starting point is 00:51:22 The Risky Bulletin podcast runs every Monday, Wednesday and Friday in the Risky Bulletin podcast feed. You can subscribe by searching for Risky Bulletin in your podcatcher and stay one step ahead. Catch you there. That was Adam Boyle there with a check of the week's security news. thanks to him for that. It was a good one this week. So we are going to hear from Scott Kufa now, who is the chief executive and co-founder of nucleus security, which makes a vulnerability management platform. As I mentioned at the intro, you can get it to ingest all of the vulnerability scan information from your scanners, even if you're using lots of different scanners, you can
Starting point is 00:52:03 pull in information from Run Zero from all of your SaaS stuff. Like it all comes into one place. It gets normalized. You can start slicing and dicing that data, sending it off to the correct teams, there's Slack integrations, all that sort of stuff. But what I wanted to talk to Scott about today, a couple things, really. But the first thing we started speaking about was this idea that prioritizing bug fixes will save us, right? That will take this unmanageable problem and make it a manageable problem. It's not panning out that way. People are now at the point where they can no longer keep up with even the high priority bugs in their organization. And, you know, he thinks we need to start thinking of some different approaches, and he makes some good points.
Starting point is 00:52:43 So here's Scott Kufa from Nuclear Security. What we're seeing is that there's actually just been a shift in how organizations talk about vulnerabilities, and there's more data that's being presented all of the time. And so it's really more of a volume issue. It's not necessarily that we're getting worse at fixing individual vulnerabilities. We're getting worse in aggregate at fixing the new numbers of vulnerabilities that are coming in, if that makes sense, right? So it's really like just a dichotomy.
Starting point is 00:53:08 The deluge is just getting bigger and the ability to respond is not keeping up. Is that kind of the vibe? That's correct. And then the variety is also getting bigger, right? So 10 years ago, the only remediation that a vulnerability analyst had to worry about was really patch management and all those pesky certificate vulnerabilities that came out of your tenable console. And now, well, then you spun up the cloud security teams and you spun up the DevOps teams and development.
Starting point is 00:53:33 And we're seeing a convergence of all of those. And so now individual vulnerability teams are really responsible for the entire cloud stack, not just individual IT stacks. And so you're really seeing a lot of friction in the process related to the volume of just vulnerabilities in general. I mean, everybody talks all the time about how, hey, we're seeing more new vulnerabilities discovered all the time than we've ever seen. So that is increasing. But I think the bigger issue is actually that we're seeing that across all of the different categories of vulnerabilities. Yeah. So I guess we shouldn't be surprised that scale in
Starting point is 00:54:08 vulnerability remediation is a problem, right? Because we've just got like this deluge more and more and more and more and everything all the time. You know, what has been the approach over the last five years in terms of trying to speed that up, right? Because it feels like for the past five years, the approach has really been about trying, you know, trying to just figure out where your exposure is instead of trying to fix everything. But like, it almost feels like that can only get you so far as well, because even those genuine exposure numbers are kind of ballooning. Is that roughly like kind of where we are? I would say so. What's interesting about it is that when you think about it, we call it the remediation economy, right? So there's basically this optimization
Starting point is 00:54:50 from economics that occurs where you're trying to think about, well, what is the optimal amount of risk for me to fix in my business? And we don't really put a lot of stock into that as an operational concept, right? So we generally, you know, look at vulnerabilities in terms of lists, right? Here's my giant number of things that I need to look at. And the approach has been, how do we just whittle that down to the point where we only look at the 5% that matter? And that works really well until the 5% that matter are more than what you can actually physically fix, right? And so that's kind of what I mean, right? Is because for a long time, people have said, ah, the solution is, just forget about these ones where there's no path for someone to exploit them,
Starting point is 00:55:31 just patch the ones that are genuine exposures. And it's like, that works really well until that number also gets unmanageable. Absolutely. And I would say that that's the point that we're approaching, especially when we look at the amount of, I hate to do this, but to talk about how AI is starting to change the game of what we're seeing with some of these things, right? It's become a bigger and bigger topic to the point where folks are even asking questions about how do you audit your AI models. Right. And so eventually those are going to turn into vulnerabilities and exposures that you then have to manage as a vulnerability team, right? And so it's a whole new category, a brave new world of where we're at with
Starting point is 00:56:09 the types of things that we have to manage. And nobody knows how to deal with that right now. Well, that was going to be my next question: how do we deal with that, right? Because as I say, it's been the way, right, to whittle down, you know, the number of bugs that you actually remediate. Now we're starting to hit some limits there. I mean, obviously there's going to be people proposing like AI as a solution to this somehow. I think that's going to be tough. To be honest, like right now, I think, excuse me, patching is just, you know, it's just one of those things that is very, very hard. I think it's difficult to automate. It's difficult to instrument. So, you know, what are some possible directions here that you might see happening over the next few years?
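The "whittle it down to the 5% that matter" approach discussed above can be sketched as a simple filter over a findings backlog: keep only the items with a plausible path to exploitation. This is purely an illustrative sketch; the record fields, thresholds, and CVE IDs below are invented, not any product's schema.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical vulnerability record; all fields are illustrative."""
    cve_id: str
    cvss: float              # base severity score, 0.0-10.0
    internet_exposed: bool   # can an attacker actually reach the asset?
    exploit_available: bool  # is a working exploit known to exist?

def genuine_exposures(findings, min_cvss=7.0):
    """Keep only findings with a plausible path to exploitation."""
    return [
        f for f in findings
        if f.cvss >= min_cvss and f.internet_exposed and f.exploit_available
    ]

backlog = [
    Finding("CVE-2025-00001", 9.8, True, True),
    Finding("CVE-2025-00002", 9.8, False, True),  # severe, but unreachable
    Finding("CVE-2025-00003", 5.3, True, False),  # reachable, low severity
]

urgent = genuine_exposures(backlog)
# Only the first finding survives the filter: the small slice that matters.
```

The point Scott makes is that this filter helps only while its output stays smaller than your remediation capacity; once the "urgent" list itself outgrows what teams can physically fix, prioritization alone stops being a solution.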
Starting point is 00:56:49 Yeah, one concept that was pitched to me not that long ago, there was this other startup that was really trying to sell me on the idea of like a CIS benchmarking concept, but for like AI prompts. So effectively like looking at how your AI coding agents are basically set up, especially with the new OpenAI coding agent, and basically being able to say how can we audit that the thing has been optimized to put out the most high quality and structured code that we can be comfortable with. Well, hang on. By the way, when I say that AI won't fix patching, I'm very much speaking about operating systems and applications on the desktop and, you know, server infrastructure.
Starting point is 00:57:28 You know, when it comes to actually in-house apps, like, I do think AI is probably going to solve that problem, right? So that, yeah, that vibes with what you're saying there. I look forward to the day. That would be great for everybody, for sure. But, yeah, it's a good question. As far as it relates to operating systems, there's one thing that folks are starting to get really bullish on, which I'm still hesitant to be bullish on, by the way. So I'm saying this as basically just delivering the message versus being a proponent of this. Yeah, here's what people are saying.
Starting point is 00:57:56 Don't agree, but that's what they're saying. Right. So I'm starting to see a bigger push towards, well, before, we couldn't patch everything because we didn't want to automate patches, right? Because it was risky. We needed a change management process, all of these things. And so there is a lot of optimism in the IT space from what I can see that maybe we can make better decisions about what patches to push out automatically because they're low risk, and we can use these AI agents to basically make those decisions for us, to start to decrease the amount that it costs to fix every single vulnerability. It's kind of the idea that folks are really pushing towards. How do we make it cheaper and faster to fix vulnerabilities without having to have humans in the loop? And, you know, there's probably some validity to that. But as somebody who worked in the federal government and at large enterprises, I hear that and folks just start to cringe internally, right? Because change management is crazy. Well, but here's my issue with that, right, which is why do you need AI for that part?
Starting point is 00:58:50 Like, that's, that's not, it doesn't seem to me that the missing piece from what you just described is like going to be filled with AI. I mean, that, you know, it's just a fiddly, horrible, unpredictable thing, which is why it's hard. 100%. Yeah. No, 100%. Yeah, I view it very much as everybody is super excited right now about the AI hype train. But the reality is most of what you're trying to automate is declarative automation, right?
Starting point is 00:59:14 Like you want that pipeline to just be basically spinning out 50 widgets a minute and you don't want to have to think about it and look at it. And so I view it as very high risk, realistically, to try to use this thing that's using probabilities to guess what you're trying to do all the time. And so what I think we're going to start to see is these kind of separate models of pipelines. And we're starting to see this in the software development lifecycle as well, which is basically like there are some bugs that we just auto fix, right? And they're like lower risk, lower effort, easier things and systems that already exist versus like, hey, we actually have a whole separate SDLC to have humans look at it, but they're both assessed, right?
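The two-pipeline split Scott describes (auto-fix the low-risk, low-effort items, send everything else through humans and change management) might look something like the routing sketch below. The thresholds and field names are invented for illustration only, not drawn from any real product.

```python
def route(finding):
    """Route a finding to an auto-fix or human-review pipeline.

    `finding` is a dict with hypothetical keys:
      risk   -- 0.0-1.0 estimate of how likely the fix is to break something
      effort -- estimated hours to remediate
    """
    if finding["risk"] < 0.2 and finding["effort"] <= 1.0:
        return "auto-fix"      # cheap and safe: patch without a human
    return "human-review"      # everything else gets change management

findings = [
    {"id": "minor-library-bump", "risk": 0.05, "effort": 0.5},
    {"id": "kernel-patch",       "risk": 0.60, "effort": 8.0},
]

queues = {f["id"]: route(f) for f in findings}
# {'minor-library-bump': 'auto-fix', 'kernel-patch': 'human-review'}
```

The hard part, as the conversation goes on to note, is not the routing logic itself but producing trustworthy risk and effort estimates to feed it.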
Starting point is 00:59:53 So how do you set up those pipelines to audit what's actually happening? And I imagine that IT teams are looking at something very similar to that in certain scenarios. But I don't think there's a good answer. It's interesting what you say, because we were really looking at, you know, just, well, where are the exposures? High impact exposures, let's deal with them. And I suppose another measurement, or a metric, you should apply to a bug is like how easy and trivial is it to fix. And if it is easy and trivial to fix, you should just go ahead and do it. Right. And those that have worked in engineering before know that that is a really hard
Starting point is 01:00:30 question to answer, right? And so, I mean, that entire estimation process is built around that. And we get it wrong, you know, five times out of 10. So, I mean, just solving that itself is really challenging. But, yeah, conceivably, you get to a point where it's, oh, it's, you know, super easy to fix, super low risk, just do it. Right. And, you know, it's the volume game over the precision game, and I think the reality is we need to do both, and how do we optimize a larger program for both is going to be really the name of the game going forward. Now, it would be crazy of me to have you here and not ask you what you think of what is happening in the code security space with all of these AI models. You know, OpenAI made big waves with its Aardvark stuff,
Starting point is 01:01:17 we've just talked about that in the news, but I wanted to get your opinion, I guess, on how much of a solved problem a lot of these, you know, application vulnerabilities are, because I think this stuff is amazing. I think people do underestimate it. I mean, I say that because a guy I know, a founder, said to me six months ago, he started playing around with using some of the models to do code audits, and he said, it's over. Like, those were his words.
Starting point is 01:01:44 He's like, I'm not even going to try to build a business on this. I'm not going to try to raise on this. It is over. This is a market segment that won't exist in a couple of years. I mean, you would be seeing, you know, basically because you're doing all this vulnerability aggregation and stuff in large organizations, do you see them yet starting to use these models in their applications development and then just bugs disappearing? Like, what are you seeing from your perspective in terms of how these AI models are playing out doing code audits and stuff and fixing bugs? Yeah, I would say, I mean, we see it, but not at a super widespread scale. I do think that, I mean, anybody, again, like, if you've historically worked in the AppSec space and you've tried to discover vulnerabilities, like even just business logic vulnerabilities using SAST, like everybody knows that was a really challenging space because there's so many false positives. It was really hard to understand what's really going on in the code base. And so...
Starting point is 01:02:38 Well, I don't think the AI stuff is very good at logic vulnerabilities either, but certainly in terms of like cleanness of code and whatever, those sort of bugs, like very good. It's, yeah, it's really good at certain things. And I think, you know, it's only going to get better, right? I mean, our CTO is really bullish on AI coding agents as well. But I would agree that within a few years, we will see super widespread adoption of this, because it's going to end up being more efficient and effective to do that. And these things will get better. And I just feel like we're hitting upper limits with some of these traditional tools, specifically around discovering and assessing the risk of vulnerabilities and exposures in code versus, like, how does this thing trace through
Starting point is 01:03:17 the app, right? I mean, we looked at Nucleus at building, like, tracing information through, like, how code is executing for ourselves so that we could help to try to build some of that intelligence layer around these different and distinct code bases coming together and microservices and all that. And it's like, just for us to be able to now build a capability to do that, I mean, it's super easy for us to do something like that now and a lot more accessible to other companies that aren't just like SaaS companies or SDA companies. So I think we're going to see a lot more capabilities become available to broader sets of organizations. Like for all we know, we could see Fortinet dropping some new products to do stuff like that. Like, that's how,
Starting point is 01:03:55 that's how much easier it's gotten to make that happen. And it would be very dumb not to say that that's going to make a huge impact in the market kind of over the long term, right? So I would agree with your founder friend who would say the same. Yeah. Yeah. He's just like, I'm not raising on this. This is like, no, there's no moat here. All right, Scott Kuffer, thank you so much for joining me for this interview. A pleasure to chat to you. Cheers. You as well. Thanks, man. That was Scott Kuffer there from Nucleus Security.
Starting point is 01:04:26 Big thanks to them for that. And that is it for this week's show. I do hope you enjoyed it. I will be back next week with more security news and analysis. But until then, I've been Patrick Gray. Thanks for listening. Hello, I'm Amberleigh Jack. And every Thursday, I host the
Starting point is 01:04:45 Seriously Risky Business podcast, a podcast all about big-picture cyber shenanigans like intelligence and cyber policy. You can find that podcast and more in the Risky Bulletin podcast feed. Subscribe today by searching for Risky Bulletin in your podcatcher.
