Risky Business #826 -- A week of AI mishaps and skulduggery

Episode Date: February 25, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They cover:

- Low skill actors compromise 600 Fortinets with AI-generated playbooks
- Anthropic calls out Chinese AI firms over model distillation
- Meta's director of AI safety tells her ClawdBot not to delete her mail… so of course it does
- Peter Williams cops 7 years in jail for selling L3Harris Trenchant's exploits to Russia
- Ivanti got hacked in 2021 via… bugs in Ivanti

This episode is sponsored by line-rate network capture company Corelight. CEO Brian Dye joins to discuss what AI can do for defenders, and what it can't.

This episode is also available on Youtube.

Show notes
- AI-augmented threat actor accesses FortiGate devices at scale
- "this reads to me like: they ran existing tools.... but with a cool dashboard :D"
- Anthropic accuses Chinese labs of trying to illicitly take Claude's capabilities | CyberScoop
- Detecting and preventing distillation attacks
- Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP sources say
- Anthropic Rolls Out Embedded Security Scanning for Claude
- AWS's AI Coding Bot Kiro Caused a 13-Hour Outage
- Running OpenClaw safely: identity, isolation, and runtime risk
- Former Adobe, Cisco and Salesforce CISO talks AI pentesting
- History Repeats: Security in the AI Agent Era
- Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox
- Microsoft says Office bug exposed customers' confidential emails to Copilot AI | TechCrunch
- The (tangential) fix: Microsoft adds Copilot data controls to all storage locations
- Ex-L3Harris executive sentenced to 87 months in prison for selling zero-day exploits to Russian broker
- Treasury Sanctions Exploit Broker Network for Theft and Sale of U.S. Government Cyber Tools
- Risky Bulletin: Russia starts criminal probe of Telegram founder Pavel Durov
- Ukraine pushes tighter Telegram regulation, citing Russian recruitment of locals
- The watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds
- Persona emails customers saying they don't work with ICE or DHS amid 'surveillance' claims
- Inside the Fix: Analysis of In-the-Wild Exploit of CVE-2026-21513
- Ivanti hacked in 2021 via its own product
- Fed agencies ordered to patch Dell bug by Saturday after exploitation warning | The Record from Recorded Future News
- From BRICKSTORM to GRIMBOLT: UNC6201 Exploiting a Dell RecoverPoint for Virtual Machines Zero-Day

Transcript
Starting point is 00:00:03 Hi everyone and welcome to Risky Business. My name's Patrick Gray. We've got a fantastic show for you this week. We've got some really interesting news to get through this week. A lot of AI stuff in this week's show, but it's all very interesting. We'll be getting into that with James Wilson and Adam Boileau in just a moment. And then we'll hear from this week's sponsor. And this week's show is brought to you by Corelight, which of course is the company that maintains Zeek. And if you would like a 200 gigabit per second full line rate, you know, network security sensor, your options are fairly limited, but Corelight can do that for you. Obviously, this hardware software, you know, unified hardware software sort of thing is one of the reasons why Corelight is not really at risk from AI, like some software companies, even in security. But Brian Dye, the chief executive of Corelight, joins us this week to have a chat about AI, AI in the SOC, and about how we've gone from that being a radical kind of risky idea a year ago to it now just being the way things are done pretty much everywhere. Very interesting conversation and coming up after this week's news. And yeah, as I said, like this is kind of an AI security, security AI edition of the show. The first thing we're going to talk about today is some work out of the AWS security team about a whole bunch of Fortinets getting owned by a threat actor who is using, like heavily using AI. What I found
Starting point is 00:01:29 there's a few things I found interesting about this. First of all, I think the reaction from a bunch of people in off-sec on social media really misses the point because they're like, they just used existing tools. They didn't do anything that cool. Again, we've talked about that on the show before. Not really the point. The point is it helps people who are not very capable, become more capable and organized. And that's the second thing that struck me about this is reading through it.
Starting point is 00:01:53 It really seemed like the threat actor in this instance did not really know what they were doing and yet were able to pivot from a fortinet device compromise to, full domain compromise through Mimicats and whatnot, where if any part of their automated chain broke down, they didn't really have the skills to work around it, but it didn't matter because they just would move on to the next target where their chain did work. Adam, what were your impressions here? Is your take here broadly similar to mine? Yeah, it is. I mean, none of the tradecraft here is particularly sophisticated. And as someone who grew up doing off-sick, my gut reaction is like, well, no, you don't know what you're doing,
Starting point is 00:02:34 but on the other hand, managing 600 endpoints that you've shelled and doing that, you know, as a not particularly skilled operator, as a small team, whatever it is that this group is, like that's actually hard work. Like, you know, keeping track of 600 endpoints of a spreadsheet when you're a, you know, a super skilled elite hacker, you know, that's real work. So, you know, the hacking itself low rent,
Starting point is 00:02:58 but the reality is at scale, you can do this with, you know, know, these kinds of tools. And, you know, I remember the first time that, you know, I had to pivot through a network, you know, in a Windows environment and, you know, do the, like, DC sync thing that Mimicats does these days before Mimicats even existed, right? And having to do Kerberos attacks and these kinds of syncing stuff, you know, way, way back before there was tooling. It was legitimately hard, right? So having tools that, you know, empower you to move quickly, you know, when I was doing the stuff, Mimicats came on, made everything a lot easier.
Starting point is 00:03:31 Hey, now you can just ask, you know, ask a bot to do it for you. And yeah, I mean, it gets the job done. It's not done if it works. Well, and, you know, in this case, it looks like a financially motivated threat actor, probably a ransomware crew by the looks of things because they're doing stuff like targeting Vem backups and whatever. But, you know, I just love it that they're going after these Fortigate appliances
Starting point is 00:03:52 with, you know, commonly used credentials, commonly recycle. You know, so I'm guessing it's like admin, admin, admin, firewall or whatever, right? And then these are domain joined appliances, which is how they're going from there to, you know, mimicatsing their way to great glory. James, I mean, surely you would agree that if the low-rent ransomware crews are using admin-admin-admin to own Fortigates and do like this sort of low-rent hacking at scale, you know, we can only imagine what some of the better APTs are cooking up. Yeah, absolutely. Like I totally see Adam's point that from a security perspective, it's like you don't know what you're doing. But when I read this article, I went through this constant loop of, wow, that's actually quite a sophisticated use of AI. And they've crafted multiple agents to do this.
Starting point is 00:04:37 But then I have to keep telling myself, that's late last year's thinking. And some of the developments in the models, particularly around like Claude 4.6 and some of the advancements in Codex, the abstraction layer has gone, or the line of abstraction is gone so high now that you could probably just give something like Claude Code or even a, Open claw bot the task of hey I want to own a bunch of fortinet gear because then I can sell the access to it let me know when you're done and yeah what a world that's enough for it
Starting point is 00:05:04 Can you please go own a bunch of 40 gates for me I mean we're you know I mean obviously I think you're gonna bang into some guardrails there but like we are kind of at the point where that that would be possible if you had a de guard railed you know clawed totally and maybe not even
Starting point is 00:05:21 completely de guardrailed you could imagine this could begin with I've got a bunch of 40 nets and I need to keep them safe. What would it look like if an attacker was going to come after me? And then adjust. Yeah. The fact you have to just social engineer your AI. That's the main skill for being a hacker these days.
Starting point is 00:05:37 It's not a hack and social engineering the bot. What a world. Yeah, I think I needed to generate an image of Donald Trump for like a YouTube thumbnail at some point. And Gemini kept telling me no. So then I just asked it to generate an image of a Donald Trump impersonator. And it did it, right? So, you know, this is socially engineering the AIs can be fun. They're getting better at stopping that stuff, though.
Starting point is 00:06:00 Like that, and it's so weird because they're non-deterministic. Like, that worked once, and I wasn't totally happy with the image. So I asked it to do it again and it wouldn't, right? So, yeah, you never know how these guard rails are going to apply. But look, speaking of guard rails and taken off, this is last week, I think it was, that we spoke about how Google had sounded the alarm on distillation attacks against Gemini. We've got a much more detailed report now from Anthropic, which has accused directly accused three Chinese labs, including Deepseek, of trying to distill Claude. There's some interesting statistics in here.
Starting point is 00:06:34 There were 24,000 fake accounts, right? So they're like distributing these distillation queries across 24,000 accounts. There was something like 16 million queries to the model or prompts to the model. So it is funny, like people are having a bit of a laugh at. at Anthropic here, given the copyright, like how much these models rely on other people's copyright, it is kind of funny that they are having a bit of a complaint about other people using them. That said, I mean, I could understand why policymakers in particular would be quite concerned by this when we're trying to restrict countries like China, for example,
Starting point is 00:07:12 from getting access to these frontier models. You know, if they could just run some distillation attacks and get, you know, 95% of the way there, that's not good if you're America. It's a vicious cycle, right? Why are these distillation attacks happening? Because the access to the models and all the chips that are required to either train or do the inference
Starting point is 00:07:31 has been so restricted. And so it feels like a bit of a loop where, yes, Anthropic and Google are yelling about distillation's a problem and it's going to give adversaries access to the same level of capabilities in the model. But they're going to find a way to do this regardless. And I think the thing that is kind of
Starting point is 00:07:50 of missing from this dialogue is, you know, anthropic bases itself on being the safe AI company. But the more they rally for export controls and policy to prevent this, that's resulting in these distillation attacks which produce less safe models with less guardrails. And therefore, their actions are putting less safe AI into the hands of people that want to use it in unsafe ways. It feels a bit self-defeating at the end of the entire cycle. Well, everybody wants to use Anthropic for unsafe things. We'll get to more of that in a moment.
Starting point is 00:08:26 But Adam, I wanted to quiz you on another aspect of this, which is, you know, I clicked through to the Anthropic Report. I was all excited to settle in for a long read. It's actually pretty short. And the section where they're like, how we're responding, because this is a big thing, is the headline on this blog post is detecting and preventing distillation attacks. And you're like, oh, boy, there's going to be some like alien tech in here. because I'd imagine, you know, preventing distillation attacks. I mean, you're going to have to use alien technology to do that. And then it's like how we're responding down the bottom is like,
Starting point is 00:08:56 oh, we've built some classifiers and some behavioral fingerprinting designs, systems designed to identify distillation attack patterns in API traffic. And you're thinking, oh, I reckon anyone worth their salt can probably sidestep that. And then there's intelligence sharing where there's sharing technical indicators with other AI labs. And you're like, yeah, again. That's how you know you've failed at response when you're reduced to like, well, intelligence sharing. Well, and better, you know, strengthen verification for educational accounts
Starting point is 00:09:26 and security research programs and startup organizations. And you're thinking, oh, that doesn't sound like a particularly effective control. And then their countermeasures, they're developing product API and model level safeguards designed to reduce the efficacy of model outputs for illicit distillation without degrading the experience for legitimate customers. I think that's going to be hard, personally. Yeah, I think so. I mean, you kind of want them to come up with some sort of like, let's classify distillation happening, you know, fast enough that we can respond in real time. And now we've got some technique that like reverse distills, like that, you know, craps up the model that they're trying to teach, you know, by giving it bad output.
Starting point is 00:10:03 Like that's kind of the adversarial, you know, future that I want to, you know, if we're going to live in this like cyberpunk future, then that's what I imagine it's going to look like. But, you know, none of this filled me with confidence that they really have a great idea. answer for it. And as James said, maybe this is just inevitability happening and, you know, this is the, this is just how it be and we have to deal with with it, you know, and trying to steal jet engine designs. They can steal models. They're going to do it. And, you know, it's too late by the point that you're, you know, writing blog posts like this. I don't know. Well, and there's a lot of scuttlebutt around that they've been successfully buying and accessing chips that they're technically not supposed to and then just pretending, yeah, look, you know, we, we, we,
Starting point is 00:10:45 trained this one on a bunch of potatoes you know it's like no you've got a basement full of blackwells uh down there and that's how it's that's how it's working but yeah you know you just sort of got to think like if you're china you're probably loving this right because you get 95% of the benefit with none of the cap ex right so um yeah that's how i do it good good for them i guess um moving on and uh yeah look staying with anthropic branding itself as the safe i i you know all over that blog post that we were just talking about, they're like, oh, this is really concerning because, you know, the Chinese are not going to use it safely and they're going to remove the safeguards that we have on this and blah, blah, blah, blah, blah.
Starting point is 00:11:23 And, you know, this is happening as there was a big blow up between the Pentagon and Anthropic. Pete Hegzeth has, you know, called the CEO to his office and said, you need to remove the safeguards from Anthropics so that we can use it to do whatever that we want with it. Or we're going to, you know, they're talking stuff like, declaring them a supply chain risk so the government can't use them or even like kind of pulling a pseudo-nationalization lever and getting the defense production act to force Anthropic to remove some of these, some of these things. You know, I think this is actually a little bit more nuanced than just the Pentagon doing something awful here because our colleague Tom Uren pointed out to us yesterday when we had our weekly sort of editorial meeting that it is the Pentagon's job to kill people. maybe you don't want a woke frontier model helping you to design the campaigns to kill people. James, what are your thoughts on this?
Starting point is 00:12:23 Yeah, there is a really interesting story arc behind this. So Anthropic was actually the first model to essentially get into the hands of the Pentagon, but that was done through a partnership with Palantia. And that's how Claude ended up being able to work on classified networks. So they were actually the first ones to get there. But more recently with the Maduro event, happening in there being a mention of Claude being used. This raised questions from Anthropic where they essentially asked Palantir, hey, was our stuff
Starting point is 00:12:51 used in that raid? And Palantir actually took offense to this. And they said, like, how dare you even question whether you're used in this? And so that's been a separate scuffle that seems to have then excited Hegsaith. But the subtlety that I found interesting is when Hegseth gets riled up about this, he talks about this being for a new network, a new project a new way that they're using AI. And there's actually been separate individual contracts signed up with many of the other frontier labs to join this new network and Anthropic is the last holdout. And that's the final sort of flashpoint that's come up now is that he's given them the deadline of Friday to say you'll join this new program with all of the other frontier models
Starting point is 00:13:32 for my new AI network that's going to have access to all of this defense data or else. So I'm looking forward to Friday. Yeah, I mean Anthropics case is that they will not let Anthropic be used to do mass surveillance or to kill people without a human in the loop, right? Which 100% sounds like, yep, that's reasonable. But it's also kind of not their job to put those sort of constraints on an organization like the Pentagon, which is tasked with, you know, doing deadly stuff. But, you know, I was chatting with a friend about this today and I was sort of, you know, I was sort of putting forward the case. that maybe the Pentagon had a good argument here. And then they were like, well, what about IBM selling to the Germans back in the 40s?
Starting point is 00:14:20 Which I thought was like a bit of an extreme comparison. But I took their point. Adam, where did you land on this? Yeah, I mean, it's a difficult one, right? Because like, can you imagine, like, if Lockheed Martin decided that they were not willing to sell weapons that killed people, like they would lose their government contracts, right? Because an F-35 that can't. Yeah, they decided to just, well, the next F-35 will only,
Starting point is 00:14:42 will only fire nerfed weapons, phone weapons. Clearly that's not a tenable situation. But on the other hand, the private sector companies can make decisions about what products they build and who they will sell to, right? That's also their right. And it brought to mind for me like the case of like Boston Dynamics, right, that make robots, they got bought by Google for a little bit,
Starting point is 00:15:05 and then they sell them again, right? And they're making, you know, autonomous robots that absolutely could be used in military context, but they've kind of decided that that's not really the market they want to be in, they want to sell it for industrial use cases, and that's kind of their choice. And the government can't really turn around and say, we force you to make, you know, killer robot dogs.
Starting point is 00:15:21 So, you know, on the one hand, like I feel like Anthropic is absolutely within their rights. So we don't want our technology used to murder people. And the consequence of that is we won't sell it to, you know, to the military. And, you know, everybody can do their own thing there. But it just, you know, I mean, I'm glad that somebody is willing to say, like, we won't have our software used to kill people, right? I mean, I would rather people won't get killed. Obviously, the Department of War kind of in the name that that's what they do.
Starting point is 00:15:50 But, you know, if they want defense contractor-grade AI, go buy defense-contractor AI. Yeah, it's funny. I just checked on the Google Boston Dynamics thing. And, yeah, from 2013 to 2017, owned by Google X, then went to SoftBank, now owned by Hyundai Motor Group. So there you go. That's about as interesting a... history as you can as you can get. Now what else are we got here?
Starting point is 00:16:18 We've got the embedded security scanning for Claude. So we've got a write up here from Derek B. Johnson. This has been everywhere, but we've linked through the CyberSoup version of this story. So there's a basically, you know, Anthropic has released, you know, Claude SAST, right? You know, I've been saying on the show probably for the last year that, you know, the legacy SAST is in a lot of trouble.
Starting point is 00:16:41 to large language models. And then, sure enough, you know, Claude, they ship Claude Code Security. Try saying that five times quickly. Claude Code security. And the funniest part here was the investors took a baseball bat to security company's shares on public markets, but including companies that, like, would be completely unaffected by this. Like, I think, I'm pretty sure even Crowdstrike, like copped it and like, you know, Fortinette and Pallow.
Starting point is 00:17:10 And it's like, oh, no. Anthropics figured out software vulnerabilities, so, you know, the whole sector coptered drobbing. James, you, I know, saw the humor in this. Yeah, I mean, as humorous as a double-digit, you know, drop in market valuations across the sector can be. But it's just such an irrational response. The sort of interesting thread in this is that the capability they released is based upon a pretty interesting data set where they've brought together outputs of Capture the Flag exercises, red teaming, data sets, et cetera,
Starting point is 00:17:43 to really create a very custom model that's been trained on how to catch these bugs. But at the end of the day, this is just largely going to be LLMs, making LLM generated code less LLM flaky. So it's a good thing, but it's like, yeah, it ain't going to affect Crowdstrike, that's for sure. No, no. I mean, look, I think if we're at the point where, you know, AI-based code improvement is at the point where CrowdStrike is actually being affected.
Starting point is 00:18:10 Well, we don't need EDR anymore. Like that's, I think that's a ways off. But I mean, Adam, you know, what are your feelings on, like, where all this is going in terms of, like, I think the SAST stuff in particular, I think that is now an AI product. You know, I really, I really can't see, I really can't see your sort of legacy approach to that thing. like, you know, sneak from five years ago, snick from five years ago is not going to be how it's done. No, and, you know, the, both the rate at which AI is changing, but also the rate at which how we build software is changing,
Starting point is 00:18:45 largely because of AI, right? And we've seen some numbers around, like, the amount of dependencies and the very, very rapid pace of, you know, like code being included and dynamic, you know, everything being very dynamic in ways that the SAST products of old were designed to deal with human-driven, human speed development, you know, the AI world is so much quicker. And so, like, the problems are also a bit different, you know, the speed of supply chain compromise or, you know, some of the, like, worms we've seen on MPM and so on.
Starting point is 00:19:13 Like, like, the ecosystem that those products play in has also changed pretty significantly. So, you know, it kind of makes sense that they're going to be out of it. And, you know, just bolting LLMs into it kind of makes sense. Where we've got this tool, we're going to be able to use it in all sorts of different ways. And, you know, reading code, reasoning about code is much the same as writing code. So, yeah, makes sense that this is how it's going. Well, and they're language models, right? Like, this is what I always say.
Starting point is 00:19:37 Like, there is a reason it's, you know, its use cases in software are incredible. But look, it's not all rays of sunshine and ain't AI amazing stories this week because we got a great one here about how Amazon's cloud unit. We got some leakers, apparently leaking to the, well, not leaking, but, you know, talking to journals at the Financial Times about how outages, have been caused by like AI agents doing really dumb stuff, like saying, well, this code's no good. I'm just going to delete all of it and start again. And that, you know, that caused a 13-hour outage. James, you used to work at AWS. So I'd imagine you would have some insight here beyond just the
Starting point is 00:20:18 obvious first reaction of yours, which I imagine was, lull. Yeah, yeah, definitely. Look, the first thing I thought of was that pretty much every major, large-scale outage that I saw happened at AWS happened because something dumb happened, right? It just don't happen because someone did something incredibly smart and creative and it just went kind of wrong. But this one is just like, yeah, you know, the agent decided to delete and recreate the production environment. Okay, hold on a sec. If your agent can delete and recreate your production environment, the problem is not the agent. The problem is you. You know, you've got to be careful of what you're putting in the hands of these agents. Amazon's just,
Starting point is 00:21:00 just hilariously trying to convince everyone the agent's not the problem. I think there was language like, it's a coincidence that AI tools were involved, and the same issue could occur with any developer tool or manual action. It's like, yes, again, but, you know, if you can press the delete production button readily and easily, you're going to hit it sometimes. Sometimes you'll mean to, sometimes you won't.
Starting point is 00:21:22 Sometimes an age will do it instead. The bigger, the higher order thing for me here is that, you know, just as this clawed code security feature comes out and it's, it's useful because it's been trained on specific data sets that are a corpus of information about previous failures that have been seen, I think there's a, there's something missing in industry at the moment around how do we capture these agent fails in a structured, meaningful way that can turn into that corpus of data that helps us train better models that have some of these dumb behaviors baked out of them, essentially.
Starting point is 00:21:56 Yeah, now we'll, I'll mention that to. because you're doing some work in that area. We'll talk about that in a moment. But Adam, I just wanted to get your thoughts on this one because, like, you know, we're used to hearing of people fat fingering a command or accidentally RMRFing the wrong box, right? And now we got agents to do that for us. Like, truly, we are in the future.
Starting point is 00:22:15 We are. And who amongst us has not accidentally deleted production or, you know, restarted it when you shouldn't or drop the wrong data? It's like it just happens. You know, humans make mistakes. And LLMs, you know, are modeled after us and kind of makes sense. I guess there is the, like, if you make lots of changes at speed
Starting point is 00:22:32 and you get comfortable with letting AI do it without, you know, with only minimal human oversight, like it's inevitable that at some point it's going to miss, right? And we do more changes more often, you know, we're going to see more failure. So it kind of makes sense. But overall, it's just entertaining, you know, watching them flail about in the press release
Starting point is 00:22:52 and try and in the communications about it and try and make it like not so much AI, more just human error, but, you know, maybe with some AI and balsa. That's funny. And in the end, we just love things breaking. Like, that's ultimately why I like to hack stuff is because it's just like breaking stuff.
Starting point is 00:23:07 You know, seeing stuff broken, it warms my heart. And we love to see people doing crazy risky stuff as well. So which brings us to this Microsoft security blog post from the Defender Security Research team, which is just the whole blog post is dripping with a sense of OMFG WTF. Because the people writing this are saying, you know, self-hosted agents like OpenClaura are like heaps of fun, guys.
Starting point is 00:23:33 But here's what they can do, which is like less than ideal from a security perspective. And, you know, you really get the impression this is Microsoft flagging to enterprises like, what are you doing? Like, you know, running these things in prod is like, or on corporate systems. It's a really bad idea. Actually, it reminded me of a conversation James, you and I had the other day about, how, you know, these agents will just find a way to get stuff done, right? So if they don't have the tool that they need to do something,
Starting point is 00:24:04 if they have the rights and permissions, they will actually write a software tool, compile it and execute it, in order to get stuff done, which, you know, I'd imagine your EDR suite or your allow listing is not going to like that very much, but there's going to be people in enterprise environments who are just turning off all of the controls so that Claudebot can get its work done. Adam, I want to bring you in on this. Like, can you see, do you think enterprises are going to learn the lesson quickly
Starting point is 00:24:32 that you shouldn't just let these things run right? Or do you think this is just going to be, you know, several years of headlines of like OpenClaude doing insane stuff to corporate systems? Yeah, I think we are definitely in the market for quite a lot of bad things happening because it's just so compelling, right, to individual end users to solve problems with computers. We give them tools they want to use them because it makes their lives easier. And if they can tell their computer to do their work for them and then they can go have a cup of coffee, of course. The incentives are just all there.
Starting point is 00:25:06 And so much of our security architecture is dependent upon a computer is a person and a person has security controls. It's not designed that every end user in an enterprise knows how to write and compile code. Imagine anyone who's been an admin or a security engineer in a technical organization, knows what a menace technical end users are, right? Regular corpo users are just going to pointy-clickie around SharePoint. It's fine. No one's going to write a script to delete everything out of SharePoint as an end user. And even letting end users have Microsoft access was a bad enough idea,
Starting point is 00:25:39 you know, because they might pointy-clickies in Visual Basic. And now they're all empowered like this. Like the controls are just not set up for it. And Microsoft's blog post here, like ultimately the advice is, don't run open claw with a real identity, like have specific accounts, don't run them on a real machine, run them in a virtual machine, which defeats the whole point of these agents, right? They're meant to be able to do your job, not a job with no access on a machine with no access.
Starting point is 00:26:06 You don't have to manage them like they're an employee, right? You want them to just get the job done. You want to be able to give them that API key, give them those creds, and say, off you go, get my job done. And look, in fact, we have published a podcast on that very topic. James's podcast feed is up and running. It's called Risky Business Features. You can find it on the front page of Risky.com. Just scroll down.
Starting point is 00:26:28 It's in the iTunes store and all of that, you know, the podcast directories. The only service you can't get it on right now is Pocket Casts. I think I've got to go and, like, manually submit it or something. But if you're using Overcast or Apple Podcasts, you can get it. There's two podcasts in there so far. So there is the roughly 30-minute solo podcast of James just talking about OpenClaw and these sorts of issues. And then there's a fantastic podcast he did with Brad Arkin, who was formerly the CISO of Adobe,
Starting point is 00:26:55 then he was the seesaw of Cisco, and then more recently, Salesforce. He's out of there now and doing some podcasts with us, which is great. That one is all about AI pen testing. Really fascinating discussion where, you know, Brad's whole thing is like, well, do we really think dollars in, bugs out is actually the metric
Starting point is 00:27:13 through which we should measure the value of a pen test? And his sort of thing is, well, you know, the value of a pen test for him was always to sit down with someone like you, Adam, at the end of the exercise, while you stroke your beard and he could ask you which areas of the code base you know,
Starting point is 00:27:28 did you have feelings in your waters about? Basically, so that's a really fun chat about a bunch of AI stuff in risky business features. Please go and subscribe now, support our work here, support James,
Starting point is 00:27:40 we like him, we want to keep him, go subscribe to his podcast. But looks, you know, just one more story about all of this is, it was just more of a curiosity,
Starting point is 00:27:50 something funny that happened this week, which was this, woman Summer You, who is the director of alignment at Meta Super Intelligence Labs. She was messing around with Claudebot and at some point, despite having told Claudebot to not do anything without asking her first, just started like deleting emails from her inbox, which was quite funny. She had to literally run to her Mac Mini to like start shutting down processes and stuff. And I think it's, you know, the fact that she actually works in this space
Starting point is 00:28:20 has made this blow up into a bit of a story. She should have seen that coming, and maybe not, you know, posted about it on Twitter. I think the funny thing about it, though, was when you read through some of the replies in that thread where she was talking about it. I mean, the screen caps are hilarious, because she's like, I told you not to do that.
Starting point is 00:28:36 And it's like, yep, you know, typical AI response. Yep, my bad. Good catch, you know. But someone just replied to the thread, like, I don't know what you people are getting out of running this thing. And someone else just replied with, it's fun. And I mean, James, you're actually a ClawdBot user. I mean, you seem to fall into that category. You're actually using it for some of the work you're doing with us, right?
Starting point is 00:28:58 Yeah, I use it a lot. You know, even this morning before the show, I'm throwing headlines at it and I'm sparring with it on ideas. I'm trying to get it to think about novel aspects. It's interesting, right? I mean, you're accessing a massive, massive corpus of data in a novel way. Why wouldn't we experiment with it? Because it's the 2026 equivalent of talking to yourself.
Starting point is 00:29:18 You do realize that, right? You're, like, muttering to yourself. And the problem is that a computer is muttering back. Yeah, like I said to you last week, yesterday, I think: you know, in 2025 there were just voices in my head. In 2026, there's voices in my head and outside my head. It's great. That's fantastic. And I think there are, what, a couple more stories here.
Starting point is 00:29:39 Just real quick, Adam, you flagged this one. There was a Microsoft Office bug where they weren't supposed to be throwing confidential emails into Copilot, and yet they were, and then the fix is some sort of control around storage locations. Can you walk us through this one, Adam? Yeah, so if you had Microsoft DLP, like data classification stuff, in your network, you can classify emails and documents and so on with particular classification levels. And if you set a policy which said don't ingest, you know, confidential or whatever emails, it didn't apply, for a technical reason, to drafts and sent items, I think it was. So, like, there was a couple of drafts... that's the last thing you want indexed by an AI, you know, they're the angry ones that you decide not to send. Exactly, yeah. So that was a technical bug that caused those to be ingested when they shouldn't have been. And then the other thing that Microsoft's doing around that kind of AI-ingestion DLP bit is that previously the DLP classifications were, like, metadata that was stored in SharePoint or OneDrive, and they didn't apply to things that were tagged on local disks. They've extended the thing that does the ingestion to respect their classification tags on local disk as well, which should vastly increase the number of places where those controls will be respected. So that's an improvement, but I imagine it's not particularly
Starting point is 00:31:03 comforting for people who, you know, as you said, like had their draft emails ingested and read by the AI agents. Now we've got an update on the case of Peter Williams, who was the guy who worked for L3 Harris and was, who, who, pleaded guilty to stealing and selling exploits to like Russian a Russian vulnerability broker he's been given seven years in prison and I think that is just a remarkably short sentence given the harm
Starting point is 00:31:32 that he has caused to you know the security interests of the five eyes countries but yeah and separately the Treasury Department has sanctioned the Operation Zero where he sold these bugs to and the guy operates it and another business that he was spinning up to do similar sort of stuff. I mean, no surprises there. I would imagine that this, you know, this guy Sergei Siergyevich, Zelenuk.
Starting point is 00:32:03 I imagine he will print and frame some sort of letter involving these sanctions and wear it as a badge of honour. So I can't imagine that it's going to hurt him too badly. But look, staying with Russian stuff and Pavel Dirov is a... you know, the founder of Telegram. He is now the subject of a criminal probe in Russia. And, you know, the most people, their read on this is that, you know, Russia's government is saying, oh, they are failing to take down, you know,
Starting point is 00:32:32 a bunch of stuff that's been reported to it, and they're facilitating terrorist activity. Yeah, facilitating terrorist activity by failing to respond to law enforcement takedown requests. Funnily enough, a very similar complaint to the one made by France. So they're, you know, they're degrading Telegram as well. The read on this for most people is that the reason they're targeting Pavel Durov, the reason that they're degrading the performance of Telegram, is they want people to use Max Messenger, which is their equivalent of WeChat, which allows the government to surveil communications, etc., etc. But also we're seeing the Ukrainians pushing for tighter regulation of
Starting point is 00:33:10 telegram because they're saying that the Russians are recruiting locals to you know gather intelligence and perform various acts for money and they're recruiting them over telegram and they don't have insight there. So, I mean, you know, I think this is interesting that you've got Russia pushing towards a more open messenger, which they can surveil through Max, but also you've got countries like Ukraine, which, yeah, they have a legitimate reason to want to take away privacy enhancing tools. These tools are used, especially in Russia, Ukraine. Like, they're very popular in those demographics.
Starting point is 00:33:44 And, you know, these two places are at war. So, oh, you know, you know, Of course, both sides usually been their own ways, and it makes sense that everyone has their own bone to pick. I mean, the move towards Max, I think, is a thing that we flagged, you know, I think it was late last year, whenever it was. You know, this is just Russian modus operandi, right? I push everyone around into place where they can control, and it just kind of makes sense, and it underscores the importance of communication, you know, as a thing that everyone, you know, has an interest at either securing or observing. And back when we used to do this over the bare phone network, We have wiretaps and, you know, lawful intercepts and all of those sort of things.
Starting point is 00:34:20 Once the communication moves up into over-the-top apps with end-to-end crypto, then, of course, everything starts to move around, and not everyone can afford iPhone exploits, you know, to go read stuff on the endpoint. So, you know, it makes sense to look at other ways to solve the problem, and this is them doing that. And honestly, this is probably the most pragmatic way for Russia to address this issue, because, you know, they can't iPhone-exploit their way there. They can't lawful-intercept their way there.
Starting point is 00:34:44 Well, it's the, it's the China plan. I mean, this is why China wound up doing WeChat, right? Which was just corral everybody into a spot where they can do it. I mean, I think I find it interesting when you're looking at this topic in that region, right? I think it's interesting because this is an extreme environment, particularly on the Ukraine side, right? It's the debate pushed to the extreme, where a privacy enabling tech is being used on one hand to recruit Ukrainians to do things on behalf of Russia. But on the other hand, we know that the Ukrainian military makes a lot of use out of signal for all sorts of stuff, including, I believe, transmitting live video feeds from drones. Right.
Starting point is 00:35:31 So, yeah, as I say, I just think it's interesting where you can see both the benefits and the drawbacks of privacy enhancing technology in a country that's in a state of war with a much larger neighbor. Yeah, no, it's a, you know, these kind of cyphepunk trade-off things have always been really difficult because the extreme positions on both ends, you know, have real problems, and then the middle ground is just difficult, right? And we, you know, I don't know that anyone has really figured out how to navigate all of these various equities and come up with a solution that's workable. All we've got is a bunch of, you know, kind of best effort kind of work. And it varies between countries, it varies between cultures and, you know, kind of government types. and, you know, there isn't an easy answer to any of the stuff. Yeah, I think telegrams are particularly interesting one
Starting point is 00:36:18 because it sort of operates like a cross between a messaging platform, a social network and, like, a Discord. You know, it's a hybrid of a bunch of different things. Slippery, it is. But it's also, like, if you wanted to reach out to a bunch of people in a certain community, Telegram's a way to do that. Signal less so, right? It's more for direct messaging, you know, groups as well.
Starting point is 00:36:39 But yeah, I'm sure you see what I mean. Anyway, moving on. And look, we're going to talk about Persona now. Persona have sponsored a Risky Business Snake Oilers segment. They may sponsor something again in the future. I just have to disclose that whenever we talk about someone who has sponsored us. But we had this very strange situation over the last week where a group of researchers did a bunch of fingerprinting of Persona's infrastructure. Now, Persona does identity verification. And a lot of the companies that are now being forced to do, like, age verification for certain online accounts, Discord is one of them, I think in the UK,
Starting point is 00:37:19 They're going to companies like Persona to actually do this stuff. I think ChatGPT uses persona for identity verification as well. But someone did like a bit of a tear down and fingerprinting of their infrastructure. And they kind of added two plus two and got 55, right? And wrote this long blog post about how every time Persona scale, someone's face, they're sending it to the government and, you know, it's like doing all of this really privacy-violating stuff and they're in cahoots with ice and it was all like, it was a lot. And it got so bad for persona that their chief executive wound up like publishing it, you know,
Starting point is 00:37:57 writing open letters back saying, like, this is not true, and whatever. Before this turned into such a big deal, when that blog post was first circulating, I asked both of you, Adam, James, to have a look at it and let me know what you thought and whether there was a there there. You both came back to me and said there isn't. James, let's start with you. What was causing these people to become so, you know, tinfoil-hatty over what they found? Yeah, this is just unhinged. It all began with them finding front-end code that shipped all the JavaScript source maps, right? So that's the equivalent of basically leaving around a debug binary that you can then go and get all the symbols out of
Starting point is 00:38:33 and deeply understand what it's doing. But then from there on, it just went from misassumption to incorrect cognitive leap, one after the other. So, you know, the endpoint that it was running on had a domain name from which they somehow concluded there was a FedRAMP equivalent, so therefore the government must be involved. And one of the boxes was named Onyx. And, yeah, that's related to that ICE thing, so that's definitely going to be government involvement.
Starting point is 00:38:57 And, you know, neither of those things had really much of a basis in truth. But fundamentally, you know, the front-end code that they found was just exposing all the ways in which persona can be configured, not specifically the way it was being used by Discord, or in this case, OpenAI, where I think they found the source maps. You know, each customer is going to configure it to do what they need to do that suits their use case. And, you know, while the things that it does, yeah, sure,
Starting point is 00:39:25 I can see why they raised journalistic interest when it's things like watch lists and looking for likenesses of terrorists, etc. But this is bread-and-butter KYC and anti-money-laundering stuff. Banks do it, right? You should be no more surprised that this stuff does what it does than that a bank will flag suspicious transactions. It's just what they do. There's another aspect here, which is Peter Thiel's Founders Fund being an investor in Persona, right? Which made people, like, lose their minds a little bit more, because, you know, he's certainly a controversial figure. Adam, you came
Starting point is 00:39:56 away with the same, very much the same impression as James on this. Yeah, pretty much. But the technical quality of the write-up was great, right? The conclusions, they, like the technical assessment and the conclusions they drew about technical aspects of it, like these domain names, these APIs, the way, like all of those things made total sense. Like the person who wrote it, technically competent. But where it comes a bit unhinged is the like fitting it into the current kind of like cultural environment in the, in the US and all of the like things that could be happening in the world. Like do I think that Peter Thiel is, you know, directing, you know, via his investment funds persona to cooperate with the US government to capture everyone's identities and but like none of
Starting point is 00:40:39 that kind of makes much sense and much like James I felt like this is probably someone who is technically a compliment but probably hasn't ever worked at a bank or worked in an environment that does identity verification you know as a matter of business and you would expect a bank to be able to validate people's identity well and the fact that they can't do it online is kind of a problem for everybody which is why there's you know startups like persona trying to make this doable. And the way that you make a doable is by doing a whole bunch of comprehensive checks and, like, collating the results and then building a kind of like, how do we feel about this? Like, what's our confidence level on this identity? And, you know, you have to correlate
Starting point is 00:41:18 a whole bunch of data sources, because someone who is faking their identity can't fake everything. They can fake a subset of those things, and you have to kind of match everything up. That's, you know, what they're trying to do. So of course it has all of these extra features.
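That correlate-many-signals idea can be sketched as a toy confidence model. To be clear, the checks, weights and threshold here are all invented for illustration, not anything Persona or any bank actually uses:

```python
# Toy sketch of identity-verification confidence scoring: correlate many
# independent checks, since a fraudster can usually fake some signals but
# rarely all of them. Checks and weights are invented for illustration.
CHECKS = {
    "document_valid":  0.35,  # ID document passes template/security checks
    "selfie_matches":  0.30,  # face matches the document photo
    "address_history": 0.20,  # address corroborated by external records
    "phone_tenure":    0.15,  # phone number isn't freshly issued
}

def confidence(results: dict[str, bool]) -> float:
    """Weighted share of checks that passed, in [0, 1]."""
    return sum(w for name, w in CHECKS.items() if results.get(name))

applicant = {
    "document_valid": True,
    "selfie_matches": True,
    "address_history": False,  # one signal failed or couldn't be faked up
    "phone_tenure": True,
}
score = confidence(applicant)
print(round(score, 2), "verify" if score >= 0.7 else "refer to manual review")
```

The point of the design is exactly what Adam describes: no single check is trusted on its own, the decision comes from how the signals line up together.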
Starting point is 00:41:57 Like, it's totally understandable, you know, within the societal context that people are worried and scared and afraid of surveillance and government overreaching blah blah blah blah this didn't feel like an example of that yeah it's funny right like uh yeah things are things are quite tense in the u.s at the moment we actually had an email too where um yeah fair complaint i thought it was a fair criticism uh i came into our editorial inbox but it was a gentleman who took issue adam with you saying that um you know ciss uh being a victim of partisan political football uh over all of this ice stuff was a shame. And, you know, we had as a listener right to us who's actually based in Minnesota. And he's like, look, man, you know, it's a disaster here at the moment. You know,
Starting point is 00:42:40 people are being shot in the streets. So, you know, he took issue with you saying it was partisan political football, because, you know, in his view as someone who lived in Minnesota, at ground zero of a lot of bad stuff happening with ICE, you know, whatever could be done to put a stop to that was certainly worth it, not just a case of, you know, sort of petty partisan political tricks. That's it. I know that I just earned us a couple more one-star reviews, because anytime I say anything critical of America's current leadership, the MAGA snowflakes, who are, like, the most snowflakey little children, little toddlers who throw their toys out of the pram anytime you criticize them, they run off to iTunes and do a one-star review.
Starting point is 00:43:26 and they have a little cry about it. So if you want to, if you want to annoy a MAGA person, please head over to the, you know, the Apple podcast store or whatever they call it and give us a five-star review to drown that stuff out. But yes, anyway, moving on. And we got some, you know, it wouldn't be an episode of risky biz
Starting point is 00:43:49 without talking about some real dumb bugs. And we got this first one. It's a write-up from Akamai, Adam. You found this one. And the crazy thing is, I think you pulled this out of Catalin's newsletter, the Risky Bulletin newsletter. So there's a CVSS 8.8 bug, but it's like, what is it, IE with ActiveX? Like, who? What? It's the year 2026. This was one of the ones that was fixed in the most recent Patch Tuesday.
Starting point is 00:44:18 It's some bugs in Internet Explorer. And essentially, it's like a chain of bugs where, if you get in front of MSHTML, the old, you know, IE Trident renderer, then you can bypass the, like, mark-of-the-web controls, you can bypass the, like, IE extended security controls, blah, blah, blah, leading to straight-up code exec. And this has been seen in the wild with some Russian crews using this particular bug.
Starting point is 00:44:47 Against who? And that's the interesting thing, right, is we don't know how it's being used. We don't, and, like, even the, like, how do you get code in front of MSHtml, these days. Like there's a few cases where like, you know, the Microsoft help, like the, what's it called the Microsoft, the help application where like you can have like help files that they are rendered by an old version of IE. And there's like the IE tab and Explorer that you can use IE mode and Explorer. This particular campaign seems to be dropping like malformed LNK files that have HTML that eventually gets up, ends up in front of Trident to render. But I couldn't find like a, there's like some
Starting point is 00:45:25 hashes of samples that were meant to be on virus total. I couldn't, like, I didn't manage to dig up and figure out exactly how they are doing it, but via some mechanism, someone somewhere in the Russian ecosphere is, you know, is hacking people with Internet Explorer, you know, in the year 2026 and, you know. So it's either a weird path to where it is still lurking in the Windows Code Base somewhere or they are combining this CVS 8.8 with some sort of time travel appliance and sending it back 10 years to pop shells, crazy, you know. It's just weird seeing this stuff still used. We've also got a Bloomberg report here,
Starting point is 00:46:00 Adam, that you say dishes on Yvanti actually getting hacked in 2021 by bugs in its own software, which is, as you said, I think you said in the notes, yep, there it is, loll. Yes, a detail analysis there. Yeah, this is quite a long place from Bloomberg, which digs into a bit of the history of Avanti and, it reveals that particular detail that at some point, presumably China broke into Avanti itself through Connect Secure, used that to then gain access to some stuff to hang other customers. But overall, this piece digs in a lot of detail into the kind of root cause here, which Bloomberg identifies as private equity, buying the security firms, gutting their expensive staff,
Starting point is 00:46:49 and then selling the products for a while whilst they coast on the work that are previously they've been done. And I think that's a pretty important message. And in Bloomberg, you know, talking to investors, you know, that's a good thing for them to be saying. And they cite, you know, mostly it's about Devanti, but they also make the same kind of complaints about Citrix. You know, people who, you know, private equity that have been also, you know, kind of capitalizing on, you know, the COVID remote working thing and then, you know, the gutting of the, of these firms that build security critical products and kind of what that means big picture national security wise. And, like, that's a pretty good write-up.
Starting point is 00:47:26 And honestly, I think we're worth a read. And, you know, if this was a thing that got traction amongst how other people think about, like, if you're going to go out and buy a product in the market, is it safe to buy a private equity-owned security product? And the answer may not be that it is, you know, because they don't encourage, they don't incentivize the right behavior. And so, yeah, I think it's a great write-up and worth a read. Yeah, I mean, in one of my wide world of cyber conversations,
Starting point is 00:47:52 it sort of came up that a bunch of these companies, that are ostensibly publicly listed, like a still majority PE owned, right? And they've got like board control and whatever. So it's insidious, but not all PE is created. Well, the one that I always defend is Tom A Bravo, because they have a habit of buying like wobbly companies and then making them better instead of, instead of like just doing that sort of rent extraction model of a lot of PA, right? So let's not malign them all. It's the bad ones we've got to watch.
Starting point is 00:48:20 Not that I do any business or anything with Tom A Bravo, like I don't, but, um, You know, I just think it's cool when they are able to actually turn some stuff around. What do we got here? Yes, so there's some horrible Dell bug. It's a CVSS 10. It is in the wild. And what is it, SISA is like now telling government agencies to patch it. And we've got a write-up here from the Google Googiant, you know,
Starting point is 00:48:46 Google Mandiant services team all about this bug. I mean, it's, look, it's written in pure threat intel person speak. this Google blog posts, but if you can get through the writing, you know, it is, it is interesting. Adam. Yeah, so, I mean, the bug itself is dead boring. Like, it's hard-coded admin creds in Apache Tomcat. I'm going to go ahead and assume it's login admin password Tomcat, but it might be admin manager. It might be manager manager.
Starting point is 00:49:12 It might be admin-admin-admin. It's going to be one of those. It might be admin password. I mean, why not go with a golden oldie there? Yeah, exactly, exactly. So I did try to find a PoC or an exploit, or someone that had, you know, written this up. You know, I didn't download the product in time to go figure it out. Anyway, it's going to be a dumb password like that.
Starting point is 00:49:29 And that's not particularly exciting. I mean, you know, we've all Tomcat-Tomcatted our way to victory in the past. The Google write-up does talk about a bunch of interesting post-compromise activity. And, like, really pretty sophisticated, you know, manipulation of VMware infrastructure for monitoring and, you know, other things like that. So, like, you know, as threat intel reports go, I quite enjoyed it, but it is, as you say, very threat-inteli. And the moral of the story is: don't put Apache Tomcat on your network with default creds. Yes, I think that is a reasonable bit of advice.
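And it's cheap advice to act on, too. Here's a minimal sketch of checking your own Tomcat instances for default manager credentials, for authorized testing only; the credential list is a guess at the usual suspects, not the actual hard-coded RecoverPoint values:

```python
# Sketch: probe your own Tomcat manager app for default creds
# (authorized testing only). The credential list is illustrative,
# not the actual values from the Dell advisory.
import base64
import urllib.request

DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", "tomcat"),
    ("manager", "manager"),
    ("tomcat", "tomcat"),
]

def basic_auth_header(user: str, password: str) -> str:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def check_default_creds(base_url: str) -> list[tuple[str, str]]:
    """Return the (user, password) pairs the manager app accepts."""
    hits = []
    for user, password in DEFAULT_CREDS:
        req = urllib.request.Request(
            f"{base_url}/manager/html",  # Tomcat's stock manager UI path
            headers={"Authorization": basic_auth_header(user, password)},
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                if resp.status == 200:
                    hits.append((user, password))
        except Exception:
            continue  # 401/403/timeouts all mean "not this pair"
    return hits

print(basic_auth_header("admin", "admin"))  # Basic YWRtaW46YWRtaW4=
```

Anything `check_default_creds` returns for a box on your network is a box that needs fixing before a UNC-numbered threat cluster finds it for you.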
Starting point is 00:50:03 But yeah, like, to give you a sample of the writing: analysis of incident response engagements revealed that UNC6201, a suspected PRC-nexus threat cluster, has exploited this flaw since at least mid-2024 to move laterally, maintain persistent access and deploy malware, including BRICKSTORM and a novel backdoor tracked as GRIMBOLT. It's like, yeah, okay, how many
Starting point is 00:50:26 indecipherable terms can we squeeze into one sentence there, guys? But that's how they roll. That's how they roll. All right, we're going to wrap it up there. Adam Bueblo, James Wilson. Thank you so much for joining me to talk through the week's security news this week.
Starting point is 00:50:41 Really appreciate your time. Yeah, thanks, Pat. I will see you next week. Thanks, Pat. See you next week. That was Adam Boileau and James Wilson there with a check of the week's security news. Big thanks to both of them for that.
Starting point is 00:51:02 It is time for this week's sponsor interview now, and this week's show is brought to you by Corelight, a company that I really, really like. So Corelight maintains Zeek, which is the sort of industry-standard network security sensor. And Corelight makes its money a few ways. They sell an NDR platform based on Zeek. So if that's something you're looking for,
Starting point is 00:51:22 you can get it from them. They also have some SOC tools. And really a big thing that's a sort of traditional area of business is selling hardware appliances that run optimized Zeek at just sort of insane line-rate speeds, right? So you can get, like, a 200 gigabit per second network security sensor. And I don't personally need one of them, but there are companies out there who do, and they get them from Corelight.
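For a sense of what those sensors actually emit, Zeek writes tab-separated logs like conn.log, one line per connection. A minimal sketch of reading one looks like this; the field list is the start of conn.log's default columns, and the sample line is made up:

```python
# Sketch: parsing a Zeek conn.log entry in its default tab-separated
# format. FIELDS is the leading subset of conn.log's default columns;
# the sample line below is invented for illustration.
FIELDS = [
    "ts", "uid", "id.orig_h", "id.orig_p", "id.resp_h", "id.resp_p",
    "proto", "service", "duration", "orig_bytes", "resp_bytes",
]

def parse_conn_line(line: str) -> dict[str, str]:
    values = line.rstrip("\n").split("\t")
    return dict(zip(FIELDS, values))

sample = ("1740000000.123\tCab3x1\t10.0.0.5\t51234\t192.0.2.80\t443"
          "\ttcp\tssl\t1.2\t900\t5200")
conn = parse_conn_line(sample)
print(conn["id.resp_h"], conn["service"])
```

It's exactly this kind of structured, per-connection metadata that makes Zeek output such a natural feed for the AI-in-the-SOC tooling discussed next.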
Starting point is 00:51:46 So Brian joined me for this conversation, though it's really about a couple of things. It's very AI-centric. It's about how very, very quickly we've gone from a situation where we have been thinking about using AI in the SOC to now, where basically everyone's using AI in the SOC and it is the way that things are done. So there's that part of it. And we also talk about the opportunities for Corelight around AI, in that, you know, no one's going to Claude Code a 200 gigabit per second full line-rate network sensor. Corelight's very safe in this regard. Starting point is 00:52:21 But they've got to think about how they can make their product work better with everybody else's AI. So that's a part of this conversation as well. But we started off by talking about how that transition has happened, that sort of inflection point, like, we're past it, and about how AI in the SOC is just the done thing. Here is Brian Dye talking about that. Enjoy. Watching the evolution has been key.
Starting point is 00:52:43 The forcing function, I think, is obvious to everybody at this point, right? You've got to fight fire with fire. When you look at the speed, the volume, the shortened timeline from vulnerability to exploit to attack because of attackers' use of AI, that's been a rocket ship. But I think the other two pieces are kind of equally powerful. One is that the shift from, call it LLM-based defense, to agentic defense means you can, you know, slice and dice the individual components of the investigation,
Starting point is 00:53:13 and as a result, you kind of take away a bunch of the hallucination risk. But then the other piece is defenders themselves. I think the models and the automation are earning their trust, because, look, we're all hyper-sceptical, otherwise we wouldn't be in security. I mean, it's kind of a required entry card. But as the solutions themselves, whether they're in-house or vendor-provided, are earning that trust, that I think has been equally important to adoption as well. Yeah, I mean, it just doesn't feel to me like that's even much of a question anymore. The question isn't, oh, should we use AI to do, like, alert triage?
Starting point is 00:53:45 It's like, well, how much AI should we use to do the alert triage, or how much of the alert triage should we throw AI at? Is that your read as well? Yeah, 100%. And I think what's happening is folks are more comfortable using it in individual domains right now, because you can kind of parse the problem and you're not trying to eat the whole elephant in one bite. But we're absolutely going to where it's going to be multi-domain orchestration to kind of solve the entire SOC triage problem. The biggest thing that I've seen that's kind of an aha is that people have changed their mental models over the past year. It used to be: what's the LLM, what can the LLM do, does it hallucinate? That's where we were 12 months ago. Now people's mental model has shifted to, like, oh, wait, I need to think about this thing as a three-layered cake. Number one, do I have the right data? Because it turns out that we're not as worried about the hallucination anymore.
Starting point is 00:54:36 We're worried about our models hitting a data ceiling, right? That limits what they can do. That's step one. Step two, do I have the right decomposition into agents? Because now we're not doing a single LLM. Do I decompose things correctly? And then number three is really interesting:
Starting point is 00:54:56 It's either a vendor packaging their expertise or if you're doing it in-house, it's you taking your kind of your experts, your most advanced folks and packaging that expertise in. And I think splitting that model from the old school view of what's the LLM capable of, what new model comes out this month and what can it do to like, oh, wait, this is a three-layered cake. It's about the data, the agent architecture, and kind of how much expertise I'm embedding into the workflow.
Starting point is 00:55:18 that's been really cool to see. Yeah, I think one thing that's interesting too, and I bet you'd love to talk about this. I think people have been going a little bit overboard on the extent to which AI is going to eat everything, right? And I think Corelight is a perfect example of that. You can't AI your way into another Corelight, right? Like, you guys are so safe from basically being replaced by AI.
Starting point is 00:55:44 But where do you see all of that? Like, how disruptive do you think AI is? going to be to security vendors when you've got, you know, you can, I can point to you as a case where, well, not at all, really, if anything, it's great for you because AI models love data and you get data. But others, like, I don't know, where do you see it all going? That's just a scatterbrained question there. Sorry, Brian. Oh, it's a great one. Look, I think there's two interesting things going on here. One, the safest, the safest roles here are actually the defenders themselves. Like, if you're a cyber analyst, I think what AI is going to enable you to do, even with all this
Starting point is 00:56:18 agentic work is that right now folks can tackle the top 10% of their queue, and you can triple the productivity of a security team. Now they can tackle the top third of their queue. You still don't have the other two thirds of the queue covered, so I don't think anybody in their right mind is going to say, oh, I don't need the same size of security team. Like, I don't see that risk happening at all. I think we're just going to get better coverage of the inbound kind of alert queue. And in terms of what's going on in the technology landscape, because I think that's kind of where it's going to be a lot more interesting. I do think this is a real chance to re-architect how the SOC itself, the technology behind the SOC itself works, because the world where you put all the data in one place, you had
Starting point is 00:56:56 to centralize it, you had to put it all there for search purposes, I think that's come and gone, right? The ability to essentially use the LLMs as a search federation and to really orchestrate the expertise in a point tool and bring that together, that's where things are going. Yeah, we finally get to knife the saying single pane of glass, because, no, nobody cares anymore. Like it absolutely, it absolutely does not matter. I mean, you know, you know that I work with Edward Wu over at Drop Zone. And that's one of the most amazing things with Drop Zone is it can go out and just like, it needs some information. So it goes out and gets it. Yeah. And look, the interesting thing is watching all of us as technology providers figuring out that, wait, we have to plan to live in that world. So what I think is going to happen is you've got, you know, NDR, EDR, ITDR, pick your various kind of control pillars. Each of us is going to have to do what we can, right? We're going to have to do a bunch of triage and kind of alert aggregation, data curation, right? That's the value prop we have to do. But then we've got to realize there's going to be an agentic SOC layer
Starting point is 00:57:54 that we have to be a partner for. So whether it's A2A interfaces or data APIs or just kind of MCP interfaces, we've all got to be actively planning that our job, we have a two-part job, one that's in our product, one that's in somebody else's product. That mindset, I think, is going to be really key. Well, so, you know, I've had a bit of time to sort of think about Corelight last couple of days, like seeing on my calendar, oh, yeah, I'm chatting with Brian soon and just been just been noodling on it. And I think one thing that's really amazing about Corelight is how little AI actually changes your core business. All you've got to do is, yeah, spin up a good API that can be used for machine to machine, you know, get a good MCP server happening. And I mean,
Starting point is 00:58:33 you can pretty much call it a day and just go back to doing what you do, right? Like, it is, it is barely changing like what you do as, as a product, right? I mean, am I wrong? Am I off there? It's not quite that easy. I wish it was. Think about kind of, is it the first party or the cross-SOC experience, right? Because folks kind of consume two different form factors from us. They either consume just the network sensors, which is the data, detection, telemetry, the analytics. In that case, you're right.
Starting point is 00:59:04 We have an MCP server and a client. We've got a bunch of workflows built into that. Look, between you and me, I think MCP has more mind share than market share, right? The number of customers actually deploying their MCP servers isn't that high at this point. But you can absolutely do that, especially if you're in the biggest of the big, you can consume that, you're off to the races. If you're consuming not just the data detections, but the SaaS console, then we look like a bunch of other detection response products, right, where we actually do have a first-party
Starting point is 00:59:31 search investigation experience that's always had, kind of not always, but for a long time now, has had AI acceleration features in it, and we need to continue to build that. Frankly, it has been a little bit easier for us just because we got this gift from being an open source based company where all the flagship large language models are already trained on the dataset. So it was a lot easier for us to deploy that. We didn't have to pay NVIDIA for all this tuning and, yeah, everything else, right? Better to be lucky and good. Yeah, yeah, yeah.
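For the machine-to-machine consumption model discussed here, the rough shape of an MCP-style tool call is a JSON-RPC request in and a structured result out. This is a minimal stdlib sketch of that shape, not the real MCP SDK, and the `query_flows` tool and its fields are invented for illustration:

```python
import json

# Hypothetical tool registry: an agentic SOC asks a sensor's server a narrow
# question ("what flows involve this IP?") instead of pulling all the data.
TOOLS = {
    "query_flows": lambda args: {"flows": [], "queried_ip": args["ip"]},
}

def handle(request_json):
    """Dispatch a JSON-RPC 'tools/call' style request to the named tool."""
    req = json.loads(request_json)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

resp = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {"name": "query_flows", "arguments": {"ip": "203.0.113.7"}},
}))
print(resp)
```

In practice the real protocol adds capability negotiation, tool schemas and transports, but the vendor-side job is the same: expose narrow, well-described queries that someone else's agent can call.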
Starting point is 00:59:58 So, I mean, that's the thing, isn't it? All the LLMs just understand Corelight, you know, to its core, to its core. No pun intended, but yeah. Also in the Better to Be Lucky and Good category in the naming. I don't think we saw the LLM thing coming when we picked the company name out 10 years ago. Now, when you started with AI, right? I remember you were doing what everybody else did, which was to use GenAI to sort of explain alerts to people, right?
Starting point is 01:00:21 Now you've whacked some sort of agentic investigation sort of features into your NDR platform, which makes sense. But, you know, agentic SOCs don't really need agentic NDR. They're going to have, you know, they're going to have a more Swiss Army set of agents that are going to be often querying that data. Like, how do you, and this is such a challenge, right, for so many companies that I talk to, how do you even think about, like, developing some of these products, and putting resources into them, when they could turn out to be, you know, and I don't think Corelight is a dead end, but some of this stuff could turn out to be a dead end because
Starting point is 01:01:02 it's going to get eaten by the companies that are doing more generic AI SOC. Like, how do you, as a chief executive of a company, go about, like, prioritizing this stuff and allocating resources to it? I mean, it must be, I mean, rather you than me, pal. We've all got hard problems, right? I think the thing that there's a big yes-and to what you're saying is different sizes of companies are going to have different architectures as they roll them out. So there actually isn't a one-size-fits-all cookie cutter answer to this.
Starting point is 01:01:32 You know, if you've got a SOC that has 500 or 1,000 people, and we've got customers that do, you're going to have one defensive architecture. If your SOC has 10, 20, 30 people, you're going to have a different architecture. If your SOC has three or five people and you're heavily relying on an MSSP, you're going to have a different architecture as well. So a lot of what we think about is, what are the architectures that we see happening in those three, and how do we put the enablement behind those different architectures all at the same time? And fortunately, there's a fair bit of overlap here.
Starting point is 01:02:02 Let me give you two different use cases. An easy one is, hey, look, we've seen an anomaly on the network, can we triage that anomaly and actually really understand what's going on there, right? That's a great first-party use case that we should be able to do because we have all the context around it. A different use case is let's say you had an alert come in from an EDR, and that's coming in through your SIEM or your agentic SOC or your EDR console, and you really want to get supporting information about that alert. Like, hey, can you tell me what you know about this IP address, right? That's a very specific thing that you could support with either an API or an MCP
Starting point is 01:02:36 server, right, or an agent-to-agent interface, right? But if you think about what the SOC is actually doing, you can break those down into specific workflows. Like, are you the lead or the follow in the investigation? It's kind of a simple one, right? That takes you down two different paths. And then you think about what's the architecture of the SOC itself, right? What's the technology stack that they're using? That's the grid that we're trying to solve for. And then it just becomes a sequencing problem, right? Where do you start based on where your business is strongest to kind of move through that stack? Yeah, no, it makes sense. And like an agentic, look, I think, you know, a product like your NDR stuff, I think adding some sort of agentic capability to it, I mean, it's 2026. That's kind of table stakes now, right? Which is crazy. But here we are. It's where everything's going. And look, this is the, like I said, the focus here is how much can you automate how fast? Because if I go back to kind of where we started, right? I mean, this is clearly where the SOC is going. One of the favorite customer conversations, well, favorite and most horrifying, was we had our advisory board
Starting point is 01:03:38 together kind of end of last year. And one of the stats they shared was, it used to be about three weeks from when a new vuln was published until when they would see exploitation in the wild. That is now turning into two to three hours, because you can take a, you know, essentially jailbroken or open source model, go hammer it against this vulnerability. You'll get some really crude, hacky exploit, but it'll work. Yeah, you can reverse, like reversing a patch now is, like, yeah, a lot easier. And I'd also imagine, too, with some of the agentic features in something like Corelight, you will be able to do stuff like just tune the sort of alerts that it kicks out.
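The lead-versus-follow split Brian describes a little earlier, either driving an investigation yourself or answering a narrow "what do you know about this IP?" question from someone else's alert, might look something like this in miniature. The intel data and the byte-count threshold are invented for illustration:

```python
# Hypothetical canned telemetry summary keyed by IP address.
IP_INTEL = {"192.0.2.44": {"first_seen": "2026-02-20", "bytes_out": 9_800_000}}

def answer_enrichment(ip):
    """Follow: a narrow lookup suitable for exposure over an API, an MCP
    server, or an agent-to-agent interface; the caller decides what it means."""
    return IP_INTEL.get(ip, {})

def lead_investigation(ip):
    """Lead: the NDR drives, deciding itself whether the anomaly matters
    (here, a crude invented threshold on outbound volume)."""
    intel = answer_enrichment(ip)
    return "investigate" if intel.get("bytes_out", 0) > 1_000_000 else "ignore"

print(lead_investigation("192.0.2.44"))  # investigate
```

The same data backs both paths; what changes is who owns the decision logic, which is why the two workflows end up as different product interfaces.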
Starting point is 01:04:17 They've had some agentic magic done to them before they wind up being kicked out into some sort of other SOC platform anyway, right? Yeah, I mean, that's the non-GenAI side of AI, right? I mean, there's a whole bunch, if you look at what Corelight does specifically in the category overall, there's a whole bunch of anomaly detection, living off the land type stuff that is really useful for these kind of live exploits and for, more importantly, living off the land, because the other thing we see happening is that just like the LLM platforms let people get into vibe coding and kind of, oh, I'm not a Ruby on Rails expert, but I can hack away at it for an afternoon,
Starting point is 01:04:50 right? It turns out that the attackers are using that to bridge their own skill gaps. And so techniques like living off the land that used to be a lot more advanced, it was in the nation states, the bigger kind of criminal gangs, are becoming a lot more accessible to everybody else. So the traditional AI side, right, what you and I would call advanced math, actually absolutely still matters in this world. Yeah, yeah, 100%. Like, let's not forget about the deterministic, like machine learning, right? Because that stuff is also very useful. All right, Brian Dye, fascinating to chat to you as always. I always really enjoy our catch-ups. A great way to make a living. And yeah, I'll look forward to chatting to you again soon. Cheers.
Starting point is 01:05:28 It's always a pleasure, Patrick. Appreciate it. That was Brian Dye, the chief executive of Corelight there. Big thanks to him for that. And big thanks to Corelight for having been a sponsor of the Risky Business Podcast for some years now. All good stuff. All right. That is it for this week's show.
Starting point is 01:05:45 I do hope you enjoyed it. I'll be back soon with more security news and analysis. But until then, I've been Patrick Gray. Thanks for listening.
