Risky Business #824 -- Microsoft's Secure Future is looking a bit wobbly

Episode Date: February 11, 2026

On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including:

- Microsoft reshuffles security leadership. It doesn't spark ...joy.
- Russia is hacking the Winter Olympics. Again. But y tho?
- China-linked groups are keeping busy, hacking telcos in Norway, Singapore and dozens of others
- Campaigns underway targeting Ivanti, BeyondTrust and SolarWinds products
- An unknown hero blocks 23/tcp on the US internet backbone
- And James Wilson pops in to talk about Claude's go at a C compiler

This week's episode is sponsored by Ent.AI, an AI startup that isn't quite ready to tell us all what they're doing. But nevertheless, founder Brandon Dixon joins to discuss AI's role in security. Where does language-based understanding take us that previous methods couldn't?

This episode is also available on Youtube.

Show notes

Updates in two of our core priorities - The Official Microsoft Blog
Strengthening Windows trust and security through User Transparency and Consent | Windows Experience Blog
Microsoft prepares to refresh Secure Boot's digital certificate | Cybersecurity Dive
Microsoft Patch Tuesday matches last year's zero-day high with six actively exploited vulnerabilities | CyberScoop
Microsoft releases urgent Office patch. Russian-state hackers pounce. - Ars Technica
Italy blames Russia-linked hackers for cyberattacks ahead of Winter Olympics | The Record from Recorded Future News
Researchers uncover vast cyberespionage operation targeting dozens of governments worldwide | The Record from Recorded Future News
Germany warns of state-linked phishing campaign targeting journalists, government officials | The Record from Recorded Future News
Norwegian intelligence discloses country hit by Salt Typhoon campaign | The Record from Recorded Future News
Singapore says China-linked hackers targeted telecom providers in major spying campaign | The Record from Recorded Future News
Largest Multi-Agency Cyber Operation Mounted to Counter Threat Posed by Advanced Persistent Threat (APT) Actor UNC3886 to Singapore's Telecommunications Sector | Cyber Security Agency of Singapore
How Intel and Google Collaborate to Strengthen Intel® TDX
Strengthening the Foundation: A Joint Security Review of Intel TDX 1.5 - Google Bug Hunters
Active Exploitation of SolarWinds Web Help Desk (CVE-2025-26399) | Huntress
EU, Dutch government announce hacks following Ivanti zero-days | The Record from Recorded Future News
North Korean hackers targeted crypto exec with fake Zoom meeting, ClickFix scam | The Record from Recorded Future News
BeyondTrust warns of critical RCE flaw in remote support software
Rapid7 Analysis of CVE-2026-1731
Building a C compiler with a team of parallel Claudes \ Anthropic
Post by @ryiron.bsky.social — Bluesky
What AI Security Research Looks Like When It Works | AISLE
South Korean crypto exchange races to recover $40bn of bitcoin sent to customers by mistake | South Korea | The Guardian
White House to meet with GOP lawmakers on FISA Section 702 renewal | The Record from Recorded Future News

Transcript
Starting point is 00:00:03 Hi everyone and welcome to Risky Business. My name's Patrick Gray. We've got a great show for you this week. We'll be checking in with Adam Boileau in just a moment to talk through all the week's security news. James Wilson, our newest team member, will be dropping in also in the news segment to talk about Anthropic's new C compiler that Claude wrote, which has made a bunch of headlines throughout the week. Is it impressive? Is it not so much? Is it somewhere in between? James will drop by with some answers in this week's news segment. And this week's show is brought to you by a very, very new company, and Brandon Dixon is coming along. He's a co-founder of Ent, and he's coming along in this week's sponsor interview to talk generally about, I guess, what the security opportunities in AI are beyond the sort of co-pilot model, right? So, you know, if you're thinking, you know, wide open sky, what would you do with AI? He's got some thoughts. Ent's not really sharing exactly what it is that they're building just yet,
Starting point is 00:01:10 but you'll hear more about that on risky business in a couple of months. But let's get into the news now, Adam. And first up, man, you know, you see that meme on social media, the Fell for It Again award. I feel like we need to hand some of them out this week because Microsoft is excited, Adam. Microsoft is very excited. Have you noticed?
Starting point is 00:01:33 like senior executives at these sort of companies, they love to be excited. So we've got a blog post from the chief executive of Microsoft here, and it starts with, I am excited. So Satya Nadella is excited to share a couple of updates in two of their core priorities, security and quality. So what they're announcing is that Charlie Bell, who was basically the person, the security, you know, executive vice president's security, responsible for most of the serious, like, product related secure future initiative stuff.
Starting point is 00:02:03 at Microsoft. This is a man with a terrific reputation. He is taking on a new role focused on engineering quality reporting directly to the chief executive. And now they are cycling in Hayet Galott, who spent 15 years at Microsoft with senior leadership roles across engineering and sales. They're putting her into run security. But it looks certainly more like that role is about figuring out how to sell more security products than actually trying to make as you suck less, which is what Charlie Bell's role seemed to be. Yeah, I mean, Microsoft has such an outsized role when it comes to security in our industry. And so any changes, especially at the top like this are things that we have to kind of pay attention to.
Starting point is 00:02:49 And, you know, Charlie Bell, as you said, did have a reputation of, you know, being a kind of technical engineering focused kind of guy and you got the impression that he understood it. The big question, of course, is what does this mean for the secure future initiative? You know, is Microsoft going to, you know, just do enough to keep the regulators at bay? Are they still going to be serious about it? Like, what does this actually mean, you know, for that? And, you know, the proof will be in the pudding. We don't really know yet.
Starting point is 00:03:15 But, you know, we have been through Microsoft's boom and bust cycle, of taking security seriously or not, you know, a few times now in our career at risky biz, and I think being skeptical is well warranted. Yeah, so additionally, Ailes, Ailes, Ales, Aalus, Holic, Holosek will take on a new role as Chief Architect for Security reporting to Hayet. Right? So you've got someone now is a Chief Security architect reporting into an EVP from a sales background.
Starting point is 00:03:44 I don't know. But such is excited. So, you know, that's the main thing. Yeah. I mean, I've always been skeptical about this, you know, Secure Futures Initiative. SFI or, you know, as I may have jokingly referred to it a few times, SFA. I'm not sure if that's an acronym that everybody uses. But yeah, I don't know.
Starting point is 00:04:07 I know a few people who've got some proximity to Microsoft who are like, oh, no, they've done some amazing stuff, but you see stuff like this and you just think, no, it's not enduring. I mean, it sort of feels like, yeah, they had a few bad headlines. They had a bad CSRB report, you know, made them look like clowns. So they did what they needed to do to make it look like. It was all serious and now they're just going to revert to whatever.
Starting point is 00:04:27 You know, that's what it feels like to me. Yeah, and like they've done it before. You know, trustworthy computing is still, you know, an echo of Microsoft past, and here we are going through the cycle. Again, I don't know. Like, we, you know, they have done some things, right? I mean, it's not entirely without positive movement. I mean, there was some, I think they put out a press release this week talking about some changes in that they're bringing into Windows that are going to introduce like, mobile operating system style app consenting for Windows.
Starting point is 00:04:55 So you'll be able to, like, approve access to devices or resources or files or whatever else. You know, and that's, you know, retrofitting that into a general purpose operating system like Windows is kind of difficult. And if they've done that properly, then that's a good, you know, kind of, you know, move in the right direction. But it may end up being USC in, you know, in a trench coat and, you know, not actually being that effective. So, like, you know, there's a lot of difficult problems to solve there, both ecosystem-wide and, like, specific Windows engineering things. And it just needs really strong leadership to do a good job of that. And, you know, I guess we're going to find out, right? Yeah, exactly, exactly.
Starting point is 00:05:30 Meanwhile, you know, there's heaps of Microsoft news this week, but what's going on with Secure Boot and very old certificates? So the secure boot thing, which was brought in, I want to say like what Windows Vista era, whenever it was back in the early 2000s, and there are certificates involved that are baked into all operating systems bioses or computer biases that allow it to verify the boot process cryptographically.
Starting point is 00:05:56 the certificates that were originally minted for that, the CA certificates that Microsoft used to sign stuff, are scheduled to expire this year. That's kind of a big deal, but also kind of practically not. I mean, they were issued in 2011, right? So you would hope that, because this article sort of points out that if you're on reasonably modern hardware, you can actually update these certificates,
Starting point is 00:06:21 and that's not really a biggie. But there is going to be some older hardware out there that is not going to be able to update these things, and that's going to cause all sorts of fun things to break, right? Yeah, I mean, it's not as bad as it sounds. Like, we're not talking like Crowdstrike, your computers won't boot kind of thing. We're talking, you know, people who run not Windows, who aren't getting updates from Microsoft,
Starting point is 00:06:43 because updates from Microsoft will ship have shipped new certificates a while ago, and manufacturers have been shipping new certificates with their devices for the last, going to year, a year or two. But everybody else that isn't Windows, so, like Linux users, servers embedded systems, their boot-up process might be a little more complicated. But the other kind of side of this is that the secure boot process reference implementations like the Tiano core reference implementation disables time checking on those certificates anyway, because validating certificates during boot without the internet already kind of hard, relying on time when you might be mid-boot-up
Starting point is 00:07:19 and not yet have time or in a machine that's factory fresh and doesn't yet have its bias clock set. Like there's a bunch of reasons why time checking for certificates was never a great plan in embedded systems or in, you know, early boot time. So that check, time check may actually never be a problem for most people. And, you know, at worst, maybe you have to, you know, get some manufacturer up there. So, like, it's not the disaster that it could be. It's not a disaster that I would love because, you know, we're all about, I am all about things burning horribly down. But it's still, you know, crypto is hard, doing it in the real world's hard. and, you know, I guess secure boot is a thing we do all rely on quite a bit.
Starting point is 00:07:59 So I'm having flashbacks to the news cycle for when all of that stuff was introduced when it was first proposed. And the Linux people had a meltdown because they thought that this was Microsoft conspiring with hardware manufacturers so that only Windows would be able to boot on modern hardware. Which, no, that's not what they were doing, but it was quite funny. having to survive that as a tech journalist at the time. Let's see, what else have we got here? Yeah, we got a whole bunch of like Patch Tuesday stuff
Starting point is 00:08:28 that's just dropped, including stuff being exploited in the wild. There's some office bugs that are being exploited by Russian like APTs. And yeah, there's just a whole bunch of horrible stuff. Horrible, horrible, dirty, unclean, unclean. Yeah, that's a pretty good roundup of bugs this past Tuesday. And none of the Windows ones are like super exciting, but six being exploited in the wild, I guess is notable. That's slightly up on previous few rounds that we've seen.
Starting point is 00:08:55 The one that you mentioned, the Microsoft Office bug that's being exploited by Russian hackers in Ukraine, is actually interesting because it's only a bug in ancient, like Windows, like Office 2016. So like unsupported, end of life, no security updates, available versions of office. So that's already kind of a bad place. Microsoft has patched them anyway, which is interesting. and then it's kind of like a bypass for the controls that you turn off embedded like OLA objects in office documents
Starting point is 00:09:24 so you end up with documents that result in code exec. The Russian hacking cruism jumped on that very, very quickly turned it into a work and exploit and so on. So if you're running ancient office, then yes, Microsoft has actually deigned to patch it for you even though they said they wouldn't. So that's nice, I guess.
Starting point is 00:09:40 I think though, I'm having a look at this. Like if you look at the Nest stuff, It is Office 2016, Office 2019, and Office long-term servicing channel 2021 and long-term servicing channel 2024. But I think it goes up to like 2019 and then into the like you're paying megabucks just to get patches streams, right? Yeah. So I think a nuance there is that for the later versions of Office, Microsoft were able to fix this kind of server side. There was some knob that could twiddle that basically killed it. So in terms of exploitation now, it's only, I think, 2016.
Starting point is 00:10:16 Okay, right, got it. 2019, I don't remember off top of my head which category that was in, but basically it's people running ancient office and everyone else, Microsoft pulled some trick that made it didn't work without you actually having to patch your stuff. Makes sense, makes sense. All right, Darina Antinuk over at the record, has a write-up on Russia-linked hackers doing cyber attacks
Starting point is 00:10:39 ahead of the Winter Olympics. I mean, we've seen there's been some physical sabotage happening in Italy and now apparently a whole bunch of like cyber attacks targeting things like consulates in Sydney, Toronto and Paris. But, you know, the Italians are like, yeah, we were able to repel these attacks and whatnot. But it looks like, I mean, you just sort of wonder why Russia bothers with this stuff, right? Because every time they're hacking some Olympic committee or something, it doesn't really achieve anything. You just really wonder what, you know, they've got other things going on at the moment.
Starting point is 00:11:10 Why do they bother with this? Seriously. Yeah, it's a great question. And so far they don't seem to have achieved really anything in this particular Olympics. Like some of the other Olympics, like it was one in South Korea, where they did actually do some pretty good hacking, like there was actual good compromise there. But even then, still really achieved nothing.
Starting point is 00:11:28 And they've got other things on their plate. And why do they even bother? And I think what struck me was, I think it was a couple of weeks ago maybe, or three weeks ago on between two nerds. Tom and the Gruk were talking about how Russian hacking often lines up with the reporting cycle. internally so that right before it's review time, a whole bunch of flashy high profile stuff gets hacked that then they can go and say to their superiors, hey, look, we've been causing cyber effects, even though there was no actual effect other than, you know, something
Starting point is 00:11:56 happening that gets their name in the press or what end. Like maybe the Olympics fit into that category, like it's just, look at us, look how cyber we are, you know, please justify our budget for next year kind of thing. So maybe that's what it is. So much cybersecurity news when it comes to like state groups is driven by KPI's man. Even if you think about like the Snowden documents, like the prism, the infamous prism slide where it's like it made it look way cooler than it actually was, you know. Anyway. Yeah.
Starting point is 00:12:26 American managing techniques have a lot to answer for. Oh, they do. They really do. We got another one here from John Grieg and Martin Mattershack also at the record, which is, I mean, the headline here is researchers uncover vast cyber espionage operation targeting dozens of governments worldwide. I mean, this is some research out of Unit 42, which is, you know, Palo Alto Networks. What's the go here?
Starting point is 00:12:47 I mean, essentially just as you described it, they've rolled up a campaign that's been very active around the world and a lot of places. The thing that I think is most interesting about this, like so it's a big campaign, countries all over the world, telcos, firewalls, all the things that you would kind of expect. This isn't salt typhoon, and it isn't the one that we're about to talk about in Singapore. It's, you know, there's like, you know, five or six of these global scale campaigns, all of which are China Nexus, all of which seem to be kind of roughly independent. Like, China, I guess, is very big and they have quite a lot of hackers doing quite a lot of things. And, like, that's the bit that this story I thought was interesting, you know, like, there's so many Chinese crews doing so many things. Yeah, I mean, they've got scale, right? Like, you talk to the people who track this stuff and they're just like, man, it is crazy how many of these people there are.
Starting point is 00:13:35 and just the number of simultaneous things that they can actually do. And speaking of, you know, here is a story about state-linked fishing campaign targeting journalists, government officials and whatnot in Germany. Doreena has a right up there. Yeah, this is like a signal fishing campaign targeting the device linking. So techniques that we've seen before and the targeting, I guess, in this case, German military, German and politicians is kind of interesting. you know, good in that, I guess, signal is being widely used enough that it's worth targeting rather than, you know, WhatsApp or whatever.
Starting point is 00:14:10 But from a technical point of view, basically pretty straightforward social people into linking their account with your device, your account with their attacker devices. Yeah, yeah. And there is, there's been a disclosure from Norway where they have been also Salt Typhooned. I felt like, was it last week we were talking about someone who had talked about being Salt Typhooned? It was the Brits, was it? I think it was, yes. Yeah. So, I mean, you know, talking about the Chinese being everywhere, doing all of the things all at once.
Starting point is 00:14:39 I mean, there you go. The Norwegians. Yeah. Yeah. And, you know, the fact that they have to run. Norway, the dagger pointed at the heart of Beijing. Oh, dear. Yeah, I mean, they're running Avanti endpoint manager mobile, which I guess is a internet kick-me.
Starting point is 00:14:57 Signs are not particularly surprised. But the attack has moved pretty quickly on this one. This bug came out, although it was. I think we talked last week about the fact that it was just a, basically the same bugger as last time, but in the next function over or something. So kind of what you'd expect. But yes, real people being actually hacked.
Starting point is 00:15:13 I think the Dutch also said that they had, was it their Dutch data protection authority, ironically, hacked by it, and the judicial council. So, like, definitely a bunch of people in Europe being hit who were still running this software. The EU also had some drama, right? I said to recall something from a headline in the bulletin.
Starting point is 00:15:32 Yeah, it all kind of blurs into what. Like it's all, you know, lots of enterprise, government-y things being hacked by lots of China. Yeah, well, and meanwhile, Singapore says China linked hackers, targeted telecom providers in major spying campaign. So this is the Cybersecurity Agency of Singapore said that UNC 3886 was behind. So this is like uncategorized cluster of activity was behind this campaign. I mean, this feels a little bit salt typhoony. Who are UNC 3886, Adam? I mean, I assume that this was salt typhoon, but now they, they.
Starting point is 00:16:03 The Singaporean authorities said that it was this particular UNC number, which is a China Nexus group that I think Google Mandian attributed a few years back, does many of the same sorts of things, focuses on firewall devices, complicated environments, telcos, and has been seen active around, you know, all over the world, not just Southeast Asia. But yeah, just China going large at this stuff. And I'm in all four major telcos in Singapore.
Starting point is 00:16:33 you know, like Singtel owns like, it was it optist in Australia as well? Yeah. Yeah. So like they're 100% owner of, you know, Telko's another country. So like, yeah, they're very busy. Lots of Telko hackin. The Singaporeans pointed the finger pretty clearly. And, you know, I guess, you know, they are in,
Starting point is 00:16:51 Singapore is in a particularly interesting kind of place, you know, between the East and West and, you know, having China all up in their stuff probably is not really a surprise. No, it's not. I mean, you know, Singapore is a pretty important place. Yes. in Asia, right? So,
Starting point is 00:17:05 um, no surprises there at all. Uh, we've got a, uh, late-breaking one here, which is a, um,
Starting point is 00:17:12 blog post from Bob Routis, Harbormaster over at Grayneuse Labs. This is, I think, fascinating. Uh, basically in January, gray noise just saw that telnet traffic went away,
Starting point is 00:17:26 which is weird. Because they see so much traffic, so many probes. And they saw it drop by like, like two-thirds, right? And then a few days later, out comes a security advisory for Telnet. And then Telnet starts getting hit with this. So, you know, I had Andrew Morris on the show late last year talking about this, about how you can actually tell when there's a bad bug coming in something by watching just randomly what's happening on the internet. And it looks like what happened
Starting point is 00:17:56 here is some of the major telcos, backbone providers, whatever, actually just were convinced to start blocking telnet because, you know, the powers that B knew that this bug was going to drop. I mean, that's sort of what you would infer from this, right? Yeah, that seems to be their kind of supposition. There's no real fingerpoint. They haven't figured out exactly where it happened. The blog writes up their kind of working theory, which is that at least one, maybe more major US background provided, decided to filter it. and then that has effects on like telnet traffic coming into the US or transiting through the US from other places
Starting point is 00:18:34 and then going to map on, you know, kind of map where they see the changes into the kind of structure of the internet. But, I mean, it could be as simple as one hero somewhere in, you know, some tier 1 ISP saw this particular bug coming on, you know, on whatever like information sharing platform when, you know what, we should probably do our civic duty and just drop 23, you know, globally and has done so. And, you know, ISPs have a long history of filtering stuff on the network,
Starting point is 00:19:01 like back in the early Windows Worm days, like 1 through 5, 1 through 9. Well, but that's what I was actually going to bring that up as well. But that's when it started because initially they were very reluctant. This all goes back to how I wound up getting kickband from Nanog when they figured out I was a journalist. But yeah, basically there was stuff like, yeah, slammer and blaster, right? Where they're like, oh, it's a 237 byte UDP packet that's like very, has some very specific characteristics and he's easy to feel. filter. Like initially it was a huge
Starting point is 00:19:29 bun fight among the network operators because they didn't, you know, they're like, their job is to get packet from a to point A to point B. Their job is not to do anything to packet, drop packet. No, no drop packet. Job is deliver packet, right? Even if packet bad, packet must be delivered.
Starting point is 00:19:45 So it is, you know, a sign of how much things have changed, right? Where this sort of thing happens regularly. And especially when they're going to drop an entire protocol, right? Yeah. I mean, I remember, I worked at a, and I in the late 90s, early 2000s. And, like, there was this idea that, you know,
Starting point is 00:20:00 we shouldn't interfere with that communication. We are a carrier, like the postal service. It's not the postman's job to read your postcard and decide not to deliver it because it has bad news. But, I mean, there was, I got to say, like, being around at the time as well, there was also, like, the network operators taking themselves ever so slightly too seriously.
Starting point is 00:20:15 You know, it is our duty, no, no rain, no, no, hail, nor sleet shall stop us in our, you know. It's a little bit like, no, get your hand off it, guys. Come on. Just help it. Yeah, but yeah, so, but I think overall this is good news. When bare Telnet across the internet, it's probably not a thing that very many people need. This particular bug in Gunnui Talnet that we talked about, like it was a, you know, it's a straight up, like remote, you know, orth bypassed, like login is route remotely kind of thing, which is clearly not ideal.
Starting point is 00:20:49 And, you know, Telcos also have skin in the game by virtue of having a great many embedded systems and, you know, modems and routers and customer premises equipment or whatever else. It's also in their interest to filter this stuff. But anyway, if you know why a whole bunch of US backbone started dropping Telnet, then I'm sure Bob Rudis would like to know, and so would we. So feel free to let Grey Noise know, and I'm sure they all share with us if we were the source of the tip. So dear listeners, do your duty. There you go. Now we've got a blog post from Intel.
Starting point is 00:21:20 Intel and Google did a joint security review of Intel TDX 1.5. which is this technology that allows people to run sort of like properly separated hypervisors. This stuff has been around forever, like various attempts at this sort of technology has been around forever. It always seems to have problems. I've never seen much about this stuff really being a hard requirement for procurement from a bunch of people. So I don't know like if this has crossed over from being something that's just academically interesting into being something that's like business necessary. I haven't heard about that yet. And you pointed out to me that it's interesting when, you know, Google and Intel get together to do something. Yeah. So, I mean, I think Intel,
Starting point is 00:22:05 Intel security team and Google's security team cooperated on an earlier review of the TDX extensions, which from Google's point of view, they are interested in because they use it to implement trusted confidential computing for cloud stuff. And I think, you know, to your point about enterprise, like very few enterprises care about. about the stuff, but the cloud service providers want to be able to say, when you buy computing off us, there is some mechanism that stops us snooping on your thing, so as to build trust and encourage, you know, uptake of cloud computing and so on. And that's, you know, AI world, obviously that's important. General cloud computing also kind of important. So I think
Starting point is 00:22:43 both the, like all the major cloud providers want to be able to point to someone else and say, Intel stops us from doing this, therefore you can trust us because it's not just us. you also have to trust Intel or Arm or AMD, whoever else. And so cooperating on this research makes a whole bunch of sense. And, you know, Google research teams very well equipped, very well resourced, and they knew about what I've worked with Intel before. And they've turned up some pretty good bugs. There was a couple of new features that Intel added one for doing like live migration
Starting point is 00:23:12 of trusted VMs, which, you know, that's a hard problem to get right. And one of the core bugs that they found was like you could turn on, like, debugging during migration and from that gain access to the confidential environment. So, you know, good research. The technical write-ups quite interesting. I'm just, I think it's cool that they are working together and, you know, I want to give them big ups for making that happen despite it probably being quite complicated, you know, organizationally to make happen.
Starting point is 00:23:39 Now, from these very interesting bugs turned up by Intel and Google to something a little more run of the mill, there's a bunch of SolarWinds web help desk bugs being exploited in the wild, Adam? Yeah, so Solowind's has had many, many bugs quite famously in this stuff, and this particular bug is not particularly exciting, I suppose. But the thing I did like about this is that the campaign, the group that's using this bug in the wild is actually kind of hip and cool and very cloud native. So Huntress had a write-up of the adversary and the tool change that they used. So they use this, you know, Solans bug to get in. And then after that, they dropped. like a Zoha assist agent for remote access,
Starting point is 00:24:23 and then they use Philosaraptor, the instant response tool for their command and control channel, which, you know, if you're an IR person, is pretty on the nose. And then they also have like dynamic failover where they can switch to another command and control channel. They exfiltrate data into elastic clouds. They land on the system and just bung it up into elastic
Starting point is 00:24:44 instead of exfilling it to their own systems. So all very, very cloud native, which, you know, I think is, is pretty fun. I think the funniest weird modern hip thing that I've encountered recently is knock knock, have a customer who want to use knock knock on their mainframes.
Starting point is 00:25:00 So they needed a mainframe client written and the guys had a first stab of it in Go. Tried to write a Go client for a mainframe and it didn't like it. It didn't work well, basically. So they had to redo it in C. But, you know, that was pretty funny. but go on mainframe you know hey why not that's a funny old world in it is it is um and look you
Starting point is 00:25:26 mentioned this one earlier but yeah that ivanti bug that we first spoke about being you know terrible and uh should not be there because it was like related to previous bugs that that one is out there um being exploited in the wild that's the one that's hitting the EU and various bits of the dutch government yeah i mean at this point you would think that running avanti was a bad idea but still with some people who are stuck with it for whatever reason. Procurement is hard, I guess. Well, and compliance is hard as well, right? So there's some parts to that as well.
Starting point is 00:25:55 Meanwhile, North Korean hackers targeted a cryptocurrency executive with a deepfake Zoom meeting and a ClickFix payload. It doesn't look like they actually got away with any money or anything at the moment. But the point is, I guess, these deepfake Zoom calls and whatever, they're going to be standard operating procedure real soon, and it's going to be very, very difficult for people to tell the difference between, you know, the real person and a deepfake.
Starting point is 00:26:26 I don't think people quite understand how bad this is going to get and how rapidly this is going to get bad. Yeah, I mean, identity verification is hard already, but there's also a great many situations in business where you are interacting with new people, where you don't have some anchor on which to decide whether you trust them. It's a new customer. It's a new inquiry. It's a new customer sign-up, right? There's plenty of reasons why you don't have a grounding to decide whether a deepfake is a real person or not. Like, that kind of
Starting point is 00:26:56 doesn't matter in this case, when you're chaining it with ClickFix. The lure here was you get into this video call and then the audio doesn't work, so then they ClickFix you to fix it or whatever else, and then drop malware on you. But yeah, I think the prevalence of video deepfakes versus conference calls versus identity checks and whatever else, it's going to be a wild ride for a few years whilst we figure out how to do distributed identity. Because it's the last thing that we've been relying on, right? Like if someone SIM swaps me and rings one of my friends, and they're not me, I don't know, it's a bit like if they're texting me, if someone's texting from my phone: hey, can you send me some money
Starting point is 00:27:38 or whatever, they're going to ring me and they're going to say, hey Pat, is that really you saying that? Like, that's not going to work anymore. And then, oh, okay, well, you're a bit worried about audio, maybe do video; that's not going to work anymore either. So I think this stuff is going to get bad, and we don't really have too many solutions. Like in last year's second Snake Oilers we had Persona, right? Who are a KYC identity verification company, who do remote identity verification for banks and whatever. And they're finding that they're selling licenses to enterprises just to cope with that issue of verifying the identities of, like, staff. So that's a whole different business line for them. I've been chatting
Starting point is 00:28:19 with some people who are founding a company trying to tackle this problem. It's hard. It's a really hard problem to tackle, because how do you do identity attestation remotely? If you bind it to the device, then you're just identifying whoever has control of that device. It's hard anyway. And I think it's going to be bad. It's going to be bad. Let's see. BeyondTrust: a remote code execution flaw in their remote support software. Hooray.
Starting point is 00:28:49 Yeah, exactly what you want in a security product on the internet for remote access. You know, there's a lot of BeyondTrust out there, and I really hope people are patching if you have this stuff. The bug: do you remember last week, or was it the week before, we talked about a particular command injection bug in bash that watchTowr had written up, and it was actually really cunning, and I was like, oh my God, I wouldn't have thought of that if I was auditing that
Starting point is 00:29:15 particular piece of software. Anyway, it's basically the same bash trick, except in BeyondTrust. Somebody looked at that watchTowr post that we talked about and went, huh, I bet other people have exactly that same thing, and then had their AI go look and found this particular bug in BeyondTrust. So it's kind of funny. It's a funny world. And yeah, if you have BeyondTrust, get patching.
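For readers who want the flavour of the bug class being discussed: the following is a generic, textbook illustration of command injection via a shell, not the actual watchTowr bash technique and not the CVE-2026-1731 payload. It just shows why handing untrusted input to a shell string is so dangerous compared with passing it as a plain argument.

```python
import subprocess

def lookup_unsafe(hostname: str) -> str:
    # Vulnerable pattern: untrusted input is interpolated into a shell
    # command line, so shell metacharacters in `hostname` get executed.
    out = subprocess.run(f"echo resolving {hostname}", shell=True,
                         capture_output=True, text=True)
    return out.stdout

def lookup_safe(hostname: str) -> str:
    # Safer pattern: an argv list with no shell, so metacharacters
    # stay literal text inside a single argument.
    out = subprocess.run(["echo", "resolving", hostname],
                         capture_output=True, text=True)
    return out.stdout

payload = "example.com; echo INJECTED"
print(lookup_unsafe(payload))  # the injected command actually runs
print(lookup_safe(payload))    # the payload stays a literal argument
```

The cunning part of the real-world variants, as discussed on the show, is that they work even in code paths the auditor thought were safe, which is exactly why the write-up is worth reading.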
Starting point is 00:29:37 If you are a Unix hacker, this particular bash trick is absolutely worth reading about. Okay, so at this point, I want to bring James Wilson into this conversation. James, of course, is the newest staff member and podcaster here at Risky Biz HQ. We will be launching a new podcast channel filled with Jamesy goodness soon enough, once we've actually set up the feed. But James, you've been taking a look at some stuff for us this week around AI again, and in particular the news that Anthropic was able to, like, Claude was able to basically write a compiler that was capable of compiling the Linux kernel. Now, this was really, really big news, and opinion seemed to be split, right? Which is why we were so keen to have you look at this. Because the hot takes were either: oh my God, this is incredible, this is the most amazing thing that ever happened in computing ever. Or: lol, this compiler sucks. Right? So I'm guessing the truth of the matter here is going to be somewhere in between those two, but you tell me. It's a compiler insofar as it can take C source code and it can compile it down or
Starting point is 00:30:47 translate it down to the next layer of instructions, as a compiler would do. But it's far from a general-purpose compiler that you can throw any properly formatted code at and expect it to work. And that was sort of the first... I did see people trying to compile Hello World and getting errors, but I didn't know if they were trolling or not, right? Yeah, I mean, look, it's a little bit of an unfortunate footgun moment to release this and have issue number one show up on GitHub as: yeah, great, but it doesn't compile Hello World.
Starting point is 00:31:17 And this is where I think Anthropic really hasn't done themselves any favors. Stating that you've built something that can compile the Linux kernel is a bold claim, but it misses the point of the article. The article's not about the fact that that was their, you know, long-term intention here. We don't need another compiler. We certainly don't need one that's written by AI. But this is a demonstration of what happens when you get multiple agents working together in parallel, exposing the fact that there are still quite a few bottlenecks and
Starting point is 00:31:40 problems around the way to orchestrate and make those agents work together. Okay, so what are those bottlenecks, right? Because if anything, I've been surprised with how quickly agents, particularly Claude, have just been able to blast past existing bottlenecks. Are these more of those same bottlenecks that we're expecting these models to just blow through, or is this a little bit different? Yeah, I don't think it's a fundamental problem that requires a deep architectural change. It's more of just an interesting write-up
Starting point is 00:32:14 of the fact that when you get multiple agents working together in parallel, the emergent property isn't too dissimilar from what happens when you get a bunch of undomesticated developers working together. You know, people commit code that stomps all over each other. You get merge conflicts, you get problems,
Starting point is 00:32:28 you get people working on the same stuff. It happens, right? And the fact that agents do this as well, yeah, that's somewhat expected. Now, Adam, I wanted to get your take on this other thing we've got here in the run sheet this week, which is a write-up from a company called AISLE. What they've called it is what AI security research looks like when it works, and it's a very interesting, nuanced write-up about AI-based security research. But I think this is similar to the compiler stuff, in that every time someone drops AI bugs, there is one of two reactions, which is: oh my God, this research is incredible, game-changing, it's going to nuke every single research job forever. Or it's: that ain't real hacking, that's a lucky find, and the way that it found that bug was really,
Starting point is 00:33:18 really dumb. So what do you think of this? Walk us through it. Yeah, I think it calls out the important distinction that, you know, there are these two different responses, but there are also so many different ways you can use these tools. And this particular blog post comes from a company that has been building tooling to do pretty real security work. You know, finding bugs in OpenSSL, for example, patching those bugs, getting those patches accepted upstream, interacting with the maintainers of things like curl and OpenSSL, which have, I guess, developers that are opinionated and are absolutely willing to tell you to go pound sand if they don't like your contributions,
Starting point is 00:34:01 be you human or AI. And they're especially skeptical of AI in the case of curl; quite famously, Daniel Stenberg has been complaining about the quality of the work that they get on bug bounty programs, for example. So both of these things are true, right? There are people doing real, interesting, kind of frontier research, and there are people just pasting stuff into ChatGPT, copying it into GitHub, and claiming a bounty. And, yeah, both of those are true. And it's interesting seeing people writing up both of those
Starting point is 00:34:26 bits. And the compiler story for Anthropic also kind of dovetails with, you know, the work they've been releasing about their security research and the ability of their models to go find real bugs in real code. And everything is moving very, very quickly, and even opinions from last week kind of need to be re-evaluated. And that's, I think, the real takeaway: immediately dismissing it is wrong, and immediately saying this is amazing and going to solve all our problems is also wrong. But we do need to be constantly re-evaluating the state of the art.
Starting point is 00:35:04 I think you're right. And the speed thing that you just hit on is very, very real, because I'm finding that the sand is shifting under my feet so quickly when I'm looking at this stuff. I mean, James, you've been zeroed in on the AI stuff a lot longer. Is it playing out how you expected? Where do you think it's going with security research? I'm just curious to know what you think there, because you are so zeroed in on AI stuff.
Starting point is 00:35:27 Yeah, I think the pace is definitely quickening. There are two step functions that have happened: the introduction of tool use was the first big step function, and then more and more of these agent workflows, where there's just an endless iteration loop where stuff gets done, is what's making this move really fast. The security research bit: look, to me, the fact that people looked at this thing
Starting point is 00:35:49 as a, hey, it's a compiler that compiles the Linux kernel but can't compile Hello World: aside from the lulz, the deeper story there is the same parallel as what we're seeing, which is that these models know how to create something that works, but they won't create something that isn't susceptible to attack unless you actually go the extra yards to bake that in and make sure that that's actually the case.
Starting point is 00:36:11 And so the work that AISLE was doing is great because it actually goes beyond just find, exploit, get bounty. The same needs to be done with the way that these models generate code, to make sure that we move beyond just "it works, ship it". It's got to be: it works, and it works safely. And I am not yet seeing a sufficient amount of work and effort being put into that side of the equation.
Starting point is 00:36:36 Well, I mean, it doesn't look as good on the PowerPoint, right? Like, in the meeting with the investors where you've got to convince them to put another $10 billion into your money-destroying business. But all right, that'll wrap up that conversation for the week on AI stuff. We do have a couple of funny stories to round out this week's news segment. Adam, a South Korean crypto exchange called, I think, Bithumb. Yeah, Bithumb. Butter thumbs. They accidentally transferred $40 billion worth of Bitcoin to their customers.
Starting point is 00:37:06 Bitcoin that they didn't have, mind you. They were doing a promotion, like a loot box promotion thing where you got a freebie, and they were going to send some of their customers a prize that was meant to be denominated in Korean won. But they missed the currency units and actually set it to 2,000 Bitcoin, which is quite a lot of money. And so, yeah, $40 billion later. They managed to reverse the balances of people on their platform, but a number of people managed to actually get the Bitcoin out of their system fast enough to go cash it out. And I think, you know, the company
Starting point is 00:37:45 said, oh, well, we've recovered 99-point-blah-blah percent. But when you go look at the numbers, they still lost something like 1,800 Bitcoin, which is like $120 million, because they screwed up the currency. So that's just deeply funny. I mean, are these people going to have to give back the money? Yeah, they're asking people nicely to give it back. Like, the ones on the platform where they could just take it back, they've taken it. The people who've moved it out of the platform, they are in the process of asking nicely. But Korean law is kind of funny on the subject as well, because crypto is not...
Starting point is 00:38:15 I think you'd find that it'd be the same in a lot of places, right? Which is, I don't know, though. If a bank accidentally puts money in your bank account, you can't just take it, right? Like, we know that in Australia, because it happens every now and then. And sometimes people, they cash it out, man, they're on the next plane to the Philippines. You know what I mean? But yeah, I don't know.
Starting point is 00:38:31 I don't know. I don't think it's the same everywhere. Yeah, so I don't know. It'll be interesting to see how much they manage to get back. And mostly I'm just imagining the employee who did it. Like, the person whose day on the job it was, who screwed up to the tune of, you know, $40 billion. Like, we've all made mistakes at work.
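For what it's worth, this class of whoopsie (a raw number whose currency unit lives only in people's heads, or in a dropdown someone fat-fingers) is exactly what a typed amount makes hard. A toy sketch in Python, purely illustrative and nothing to do with Bithumb's actual systems:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Amount:
    """A number that carries its currency unit with it."""
    value: float
    currency: str  # e.g. "KRW" or "BTC"

    def __add__(self, other: "Amount") -> "Amount":
        # Refuse to mix units: a KRW/BTC mix-up becomes a hard error
        # instead of a silent many-orders-of-magnitude payout.
        if self.currency != other.currency:
            raise ValueError(f"cannot add {other.currency} to {self.currency}")
        return Amount(self.value + other.value, self.currency)

def pay_prize(balance: Amount, prize: Amount) -> Amount:
    return balance + prize

# Paying a won-denominated prize into a won balance is fine...
print(pay_prize(Amount(10_000, "KRW"), Amount(2_000, "KRW")))
# ...but fat-fingering the unit now blows up loudly before anyone gets rich.
try:
    pay_prize(Amount(10_000, "KRW"), Amount(2_000, "BTC"))
except ValueError as err:
    print(err)
```

Real systems would use a decimal type and an enum rather than floats and strings, but the point stands: keep the unit attached to the number.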
Starting point is 00:38:48 But $40 billion worth of Bitcoin? Whoopsie. Like, that's a whoopsie. So that's our comedy story of the week. I should just mention we've got a tragedy story to pair with it as well, which is this story from Martin Matishak, which says White House to meet with GOP lawmakers on FISA Section 702 renewal. This is the story that just will never die. It feels like every few months we're talking about 702 renewal these days. So it's back.
Starting point is 00:39:16 It's back. And then, of course, it's going to get close to the date. And then everyone's going to start freaking out. And then there'll be some like last minute emergency authorization that will kick the can down the road for another three months or something, right? Like, that's going to be how this goes. We've been doing that for years right now, it feels like. It's been years, right?
Starting point is 00:39:31 I don't know. It all sort of blurs into one in my head. But, yeah. All right, that's it. We're going to wrap it up there. Adam Boileau, thank you so much for joining me.
Starting point is 00:39:40 And James Wilson, thank you for coming along to share your expertise on the AI stuff with us this week. Thanks to you both. Thanks, Pat. I'll see you next week. Thanks, Pat. That was Adam Boileau and James Wilson there with a check of the week's security news. Before we kick on to this week's
Starting point is 00:40:06 sponsor interview, here is Tom Uren telling us what he's been up to this week, both in the Between Two Nerds podcast and what he's going to write up tomorrow in Seriously Risky Business. On this week's Between Two Nerds, the Grugq and I spoke about the dynamic where security has been bad, is bad, probably always will be bad, but maybe that's okay. In Seriously Risky Business this week, I'm writing about changes at the top of Microsoft. They make me worry that the company is reverting to form and that it'll prioritize selling security products over making products secure. I'm also writing about Chinese cyber ranges. Apparently there are reports that they're replicating regional critical infrastructure networks. The only
Starting point is 00:40:49 reason you'd want to do that is to figure out how to attack them and disrupt them. So bad news. Finally, I'm also talking about news that the US disrupted Iranian air defense networks using a cyber operation. That's like the wet dream of cyber operations when it comes to the military. I think it's fascinating that that news has come out. But that operation, bombing nuclear facilities in Iran,
Starting point is 00:41:13 is a type that really suits cyber operations. So I'm not convinced that this is the sort of standard thing that will happen in a long, drawn-out war. If you would like to listen to those podcasts, please do subscribe to the Risky Bulletin RSS feed. You can head to risky.biz to find that.
Starting point is 00:41:30 And you can also subscribe to our newsletters there. It is time for this week's sponsor interview now, with Brandon Dixon, who is a co-founder of Ent.AI. Now, Brandon was the founder of PassiveTotal, which wound up with RiskIQ, and then RiskIQ went over to Microsoft, and Brandon actually wound up being, I think, responsible for Security Copilot when it launched. But he's out of Microsoft now and he's building Ent.AI. So Ent is not really talking all that much about what they do, but they are talking about how copilot-related stuff in
Starting point is 00:42:09 AI is kind of just the first stage in what we can do in terms of using AI to improve security. So this interview is really Brandon talking about what the bigger possibilities are, and I personally found it very, very interesting. I'm going to be working with these guys, I'm going to be doing some advisory with Ent, and they're a Decibel company, so I just need to disclose that. But this was a very interesting interview about
Starting point is 00:42:43 Yeah, where AI could go in security beyond just using AI to interpret the sources of information that we already have access to. Here's Brandon Dixon. Enjoy. You know, cars for the longest period of time have been questing towards automation, like since the 1970s, towards making themselves self-driving. And it was only with the introduction of world models and updated technology that that has become more possible. It's not perfect, but it's become more possible. The car is not trying to drive better. It is trying to anticipate what another car might be doing or what a person might be doing within that visual space. And so it's trying to be predictive. It's trying to anticipate what that next thing looks like.
Starting point is 00:43:17 When you think about security, security is about trying to do this detection. Like do everything right. Assume that everything's going to be right. And what it lacks is this organization, what we call an organization work model, this understanding of how the business actually operates in order to create an understanding and predict what somebody might be doing.
Starting point is 00:43:40 Is that normal? Is it not? Is it part of what the business typically does or is it seemingly stick out? Like those are the things that you want to be able to capture. So how do you go about actually capturing that? Is the question, right? Of course. So, you know, the way that you,
Starting point is 00:43:59 do this is that you have to be at the endpoint, where you have the most feature-rich context, and you want to create a layer between the system and the user. And it's that telemetry, the same kind of telemetry that feeds into the automated self-driving cars that exist today, that helps build up that world model, the understanding of what that person is doing. And what AI has given us today, with recent advancements, is the ability to scale understanding of that context without humans. So because we have a lot of semantically rich information at the endpoint, we're capable of now understanding what the user is actually doing. We don't have to guess.
Starting point is 00:44:37 We don't have to represent it in sparse signal. We get it in exactly what happened. And so for us, we see the endpoint as the holy grail of context, but also the greatest opportunity to intervene and stop somebody from doing something before that bad thing can occur, before the mistake can occur. And it's a combination of the advancements in AI, but also your deterministic rules as well. So where does AI actually bolt into this, right? Because if you're talking about like being able to do a statistical analysis of telemetry sources to understand context, like this is a game we've been playing for a while, you know, mostly around sort of operating system behaviors and things like that, like that's how the endpoint protection stuff on something like CrowdStrike works, right?
Starting point is 00:45:24 is: there's some wacky event that just hasn't happened before, it pops up in some console where they're doing their MDR, and they say, hang on, this strange thing happened, and then away they go.
Starting point is 00:45:37 So I guess what you're talking about, though, is a more flexible, human-centric context. Where does AI, like contemporary AI, come into actually constructing that world,
Starting point is 00:45:50 you know, work model? So there are statistical approaches that are tried and true, that are going to be cheap and somewhat effective, but they're going to be noisy and brittle, with false positives and a bunch of noise. I think that's why those systems didn't work. Well, this is why people haven't done it: when machine learning was all of the hype, right, this is why we didn't see someone come out and solve the DLP problem with, like, machine learning. It just hasn't happened.
Starting point is 00:46:16 Yeah, I mean, at its core, it's embeddings. Embeddings are the big advancement that recent updates in models have given us. Language models excel at language because they embed that natural language. They backed into understanding it, and they make sense of it semantically, over those concepts. They attend to words to understand the meaning, to then try and predict what the next thing is going to be: the next token, the next word. So where you have statistical approaches, which are fine, they give you some signal, what happens all of a sudden if you have a way to represent behavior in natural language, and that can be formed into embeddings?
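The idea Brandon is describing here (describe behaviour in natural language, embed it, and compare it to a baseline by similarity) can be sketched with toy vectors. This is purely illustrative: a bag-of-words stand-in for a real learned embedding model, and nothing to do with Ent.AI's actual implementation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: a bag-of-words vector.
    # A production system would use a learned sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Baseline: behaviour descriptions observed for this user over time.
baseline = [embed("developer ran powershell build script in repo"),
            embed("developer ran unit tests and pushed to git repo")]

def anomaly_score(event: str) -> float:
    # Distance from the closest baseline behaviour; higher = more unusual.
    e = embed(event)
    return 1.0 - max(cosine(e, b) for b in baseline)

print(anomaly_score("developer ran powershell build script"))        # low
print(anomaly_score("archived mailbox uploaded to personal gmail"))  # high
```

The payoff of real embeddings over this toy version is that semantically similar behaviour scores as similar even when the wording is completely different.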
Starting point is 00:46:59 Okay, so you've got some room for AI to help you sort of understand the context, not just understand the context and collect information about the context, but also to continuously evaluate what's happening against that context. Is that kind of the idea? Is this all stuff that's sort of been enabled, with like recent, you know, reasoning models? I think it's advancements in how, like, the current language models have been able to scale
Starting point is 00:47:27 to understand language itself. They've backed into those concepts. The way that they've done that is that they've taken all the data from the internet and formed embeddings, transforming that semantically rich content into vectors that computers can then process to find similarities. So if you're capable of
Starting point is 00:47:48 building a layer that understands what's happening, and you're capable of modeling that in language, then that allows you to go and identify whether something is similar to behavior that is normal versus not normal, baseline versus not. And it also allows you to express it in natural language, where you don't have to be an expert to understand what actually occurred. I mean, I was about to say, what you're talking about is the skeleton of a system that can actually explain to you why it has flagged something or why it is complaining,
Starting point is 00:48:23 right? And that is a very new sort of thing. Instead of having to see some weird signature-based alert and then hit Google to find out why it tripped, this will actually sit down, make you a cup of tea pretty much, and explain it to you. Yeah. Well, think about it this way. When you go and join an organization, one of the first things that you experience is the policy of the organization. This is your data handling policy. These are the acceptable applications you can use. These are considered the norms for the business.
Starting point is 00:48:54 They're not expressed in signatures. They're expressed in natural language. We read these things as humans and try our best to understand them. That is the opportunity that's available to us: can you consume those policies and then represent them, effectively, as rules in natural language? The analogous term that we have today in these language model systems is prompting. We're effectively prompting these systems to go do a task on our behalf. Generative systems are wonderful at taking complex topics, like the outputs of a
Starting point is 00:49:27 machine learning clustering algorithm, and explain in human language what the hell that actually means, right? I'm not seeing cluster one versus cluster two with like some feature set. I'm seeing cluster one is, you know, doing productivity documents and cluster two is deleting a bunch of information off my system. And when you're capable of representing information that way, it creates a new paradigm for how detection can be performed. But it also opens up again the opportunity since you're at the end point to intervene in near real time to stop the bad thing or the mistake from occurring before it actually happens. You save by not having to create all this downstream work
Starting point is 00:50:08 that security practitioners have to be experts on today. Today, as a developer, if I go and run PowerShell on my system, it's going to get flagged as suspicious command-line usage. Somebody in the SOC is going to either ignore that alert, or they're going to unfortunately have to run it down. They're going to go talk to the developer and say, what were you doing? Why were you running PowerShell? The developer is going to be like, what are you talking about? It's part of my job.
Starting point is 00:50:35 I'm a developer. I was doing developer things. And they're going to put it somewhere in a ServiceNow ticket, and it's going to fire the next day and the next day, and it's just going to go unaddressed. That context is missed. It doesn't get preserved in ServiceNow. It needs to be preserved in a system, that world model of the organization, something that's capable of expressing: these are unique workflows of your business.
Starting point is 00:50:58 These are unique applications that you go to, unique URLs that you visit. These are considered normal behaviors. And when people deviate from that, these are considered risky. I mean, it's interesting, right? Because until now, I can just think of a dozen companies that are doing, you know, AI SOC at the moment, right? Where the idea is: we're drowning in information already. We're drowning in telemetry.
Starting point is 00:51:24 So let's plug an AI agent into the SOC and bing, bing, bing, fantastic, we've just saved ourselves a whole bunch of time, increased the fidelity, decreased false positives and whatever. I guess what's interesting here, though, is that you're saying, well, the SOC is kind of yesterday's news, and we've got an opportunity here to rebuild that model around data that has much richer context, data that's much closer to the user, data that's much more complete. I mean, that seems to be the thinking here, right? Yeah, I mean, I do believe that the AI agents that are being deployed in the SOC have a fundamental flaw. There's
Starting point is 00:52:05 good and bad. The good is we need help scaling in the moment, and I think services and automation in the SOC as it exists today are a net positive, even if it's another layer on top of existing solutions. We're not going to have traditional threats just magically go away. People are still going to be plagued with EDR problems, and we're going to have to have some coverage there. But we need something else now. We have an opportunity to have something else now, which is an additional control. Right. Well, additional insight, visibility, context, whatever. Yeah, there's a first-principles approach that you could take,
Starting point is 00:52:41 which is: if you can model that behavior, that behavior information becomes applicable to the SOC. It becomes applicable to security awareness training as well, right? It becomes applicable to modeling insider risk, seeing how data flows across the company. Yeah, but it's high quality. It's high-quality data pumped into the SOC, and we can always use more of that, right?
Starting point is 00:53:00 And it's additional context. Yeah, I mean, if you have got good information from endpoints, correlating that using agents in the SOC, you're just going to get better detections. You will get better detections, and you'll get real explanations as to what happened. Because now, when you have that developer that ran the PowerShell, there won't be a SOC analyst looking at it, because it will be automatically resolved. This was normal for the person. We've seen it as part of their baseline over the last three months.
Starting point is 00:53:31 The command was innocuous. Shut it, right? And when you see something that looks similar to that in the future, just silence it. And maybe here's a suggestion on how to go and tune that traditional AI SOC system, right? Like there are your traditional detection mechanisms. But eventually you want to squelch that stuff at the source. Being directly on the endpoint, that alert should never fire in the first place. So I think that's an opportunity with that endpoint movement. There's a V3, effectively, right?
Starting point is 00:54:00 Gen 3 of endpoint. You had AV, you had EDR, and now you're going to have this autonomous endpoint, where we're eventually going to get to a point where we do trust the systems enough, the non-deterministic working with the deterministic, i.e. this neurosymbolic system that is capable of achieving this security defense. It can actually go and manipulate your system, but it's going to do so in a way that has the appropriate guardrails and safety, so that you feel confident, unlike a Moltbot or Clawdbot where you've basically given it full rein over your machine and you hope that it doesn't go and delete your file
Starting point is 00:54:39 system or send all your files out through your personal Gmail or something, right? That feels like the direction that we need to go in. And I think the way in which you achieve that is you have to start from a first-principles approach. If you try to retrofit it, like the copilots, simply bolting AI on (and it's not a knock on the copilots; again, I think they serve a purpose), you don't get the benefit of thinking about it from a brand new vantage point. You have this technical debt that you're trying to walk around or retrofit.
Starting point is 00:55:16 Why? Just start fresh. Go big. Try to do something bold. That's worth doing in security. All right, Brandon Dixon, fascinating stuff. You're going to be back in April, or someone from Ent.AI is going to be back in April, to give a more detailed pitch on exactly what it is that you're building.
Starting point is 00:55:32 That was a fascinating conversation. Thank you. Thank you. That was Brandon Dixon there from Ent.AI, and those guys will be back in a couple of months to talk in more detail about what it is that they have built. That is it for this week's show. I do hope you've enjoyed it. I'll be back soon with more security news and analysis.
Starting point is 00:55:50 But until then, I've been Patrick Gray. Thanks for listening.
