Risky Business #829 -- Sneaky lobsters: Why AI is the new insider threat

Episode Date: March 18, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They discuss:

- Iran's Intune-based wiper attack on medical device maker Stryker
- Qihoo 360's AI publishes its own wildcard TLS cert private key
- Instagram is canning its end-to-end encrypted messaging
- What's going on with mobile internet access in Moscow?
- The Xbox One's bootloader gets voltage glitched into submission
- Oh Qualys! We love you! (At least, whoever is in the basement writing these beautiful .txt files…)

This week's episode is sponsored by browser-based detection and response company, Push Security. Researcher Dan Green and Field CTO Mark Orlando join Pat to talk through the InstallFix variant of the *Fix attack technique.

This episode is also available on YouTube.

Show notes

- Iranian Hacktivists Strike Medical Device Maker Stryker in "Severe" Attack that Wiped Systems
- Stryker says it's restoring systems after pro-Iran hackers wiped thousands of employee devices | TechCrunch
- Stryker attack raises concerns about role of device management tool | Cybersecurity Dive
- Stryker tells SEC that timeline for recovery from cyberattack unknown | The Record from Recorded Future News
- How 'Handala' Became the Face of Iran's Hacker Counterattacks | WIRED
- U.S. Strikes Killed Iranian Cyber Chiefs, But The Hacks Continued
- Risky Business Features: Being a Wartime CISO
- Supply-chain attack using invisible code hits GitHub and other repositories - Ars Technica
- China's biggest cybersecurity company, Qihoo 360, just leaked their own wildcard SSL private key
- Emergent Cyber Behavior: When AI Agents Become Offensive Threat Actors - Irregular
- Risky Business Features: MCP is Dead
- Measuring AI Agents' Progress on Multi-Step Cyber Attack Scenarios
- What is end-to-end encryption on Instagram | Instagram Help Center
- US Lawmakers Move to Kill the FBI's Warrantless Wiretap Access | WIRED
- Website "whitelists" launched in Moscow | Forbes.ru
- Exclusive: Foreign hacker in 2023 compromised Epstein files held by FBI, source and documents show | Reuters
- Feds say another DigitalMint negotiator ran ransomware attacks and helped extort $75 million | CyberScoop
- Researchers disclose vulnerabilities in IP KVMs from four manufacturers - Ars Technica
- RE//verse 2026: Hacking the Xbox One by Markus 'doom' Gaasedelen - YouTube
- CrackArmor: Multiple vulnerabilities in AppArmor

Transcript
Starting point is 00:00:00 Hi everyone and welcome to Risky Business. My name's Patrick Gray. A great show for you this week. We're going to be taking a look at the Iranian cyber attack against Stryker, the medical device manufacturer. We're also going to look at a bunch of AI security research and some old school security research. There's a lot going on. So we'll be getting through all of that in this week's news segment with Adam Boileau and James Wilson in just a moment. This week's show is brought to you by Push Security. And joining us this week in the sponsor interview are security researcher Dan Green and Field CTO Mark Orlando. And they're going to be chatting about, look, some more activity they're seeing, which is just dumb but works, as in these sort of pseudo-phishing style pages which just instruct users to, you know, enter commands that give people remote access to their systems.
Starting point is 00:00:57 So that one is coming up later in this week's sponsor interview. But yes, before we get into that, it is time for a check of the week's security news. And let's start off with this huge wiper attack against Stryker, which makes medical devices. And I think prosthetics as well, they're a huge company. They do have a presence in Australia as well. It looks like an Iranian quote unquote hacktivist group, which looked like it was actually being run by a ministry in Iran. They managed to, I mean, it really does look like what they did here was they phished a user that happened to have, like, Intune admin permissions.
Starting point is 00:01:34 and then just vaped every single device in the environment, which, according to Reddit rumor at least, involved employees' personal devices that were enrolled into the corporate Intune. Adam, is that about the long and the short of it? That seems to be what we've got from the story so far. I mean, this is a big organization, something like 50,000 staff globally. But yeah, it seems to be they got on the Intune
Starting point is 00:01:58 and then used that to kick off a remote wipe command against everything. And I think, you know, anyone who's worked in a big organization can see how that would, you know, how it would go that way. Like, you know, they've got that capability. It's very rare that anyone really uses the remote wipe on a broad scale. But it provides so much functionality. And, you know, Intune is so featureful being able to just vape everybody and then collateral damage on personal devices, you know, where sometimes you have to enroll in Corp. MDM, you know, to have access to, you know, remote access by Citrix or whatever else. Like, it's not that unusual.
Starting point is 00:02:34 So, yeah, I felt bad reading that story, man. Yeah. And, I mean, I had a chat with a C-So I know in the sort of medical field here in Australia. And they're like, this is a big deal. Like, this is an important supplier. This is going to cause some real drama. James, you know, you've had a look at this as well. I mean, any thoughts beyond what we've just discussed?
Starting point is 00:02:59 I mean, the thing that I drill into here is why we're, Is this even possible with Intune? Like, I get what Intune is, and I get why it does what it does. But when you're talking about 20,000 devices and 12 petabytes of data being vaped, surely there should have been something that rate limits this, that backs off, that it has, like, I just don't see this as really a legitimate use of Intune, yet it can be done. So is there a duty of care thing here that Microsoft needs to answer to? I mean, that's a tough one, right?
Starting point is 00:03:26 Because, like, it's super admin access, right, to all of the co-op devices, which is like what this is supposed to do. But I get what you're saying, because I can't really recall an instance where you're going to have to like simultaneously delete like hundreds of thousands of devices in a network, right? Like that doesn't seem to make too much sense to me either. Right.
Starting point is 00:03:45 Funnily enough, though, I have told this story on the show a couple of times, but I do know someone who took a peek inside a corporate environment, a big corporate environment, and discovered that every single user in the entire M365 tenant actually had in tune admin rights, right? So this is something that can apparently happen. Now, they're telling the SEC they don't really have a clear timeline on when they're going to be backup and running.
Starting point is 00:04:12 So I'm guessing that backups got maybe hit or I don't know. Adam, do we have any info there on backups and whatnot? I mean, they've said to the SEC that they have, you know, backup mechanisms, but there doesn't seem to be a timeline. So, you know, the question of how good are the backups? Like, does it cover everything they need? because, like, you know, restoring from backup, you know, one machine restoring from backup is hard,
Starting point is 00:04:34 many machines restoring from backup extra, like the whole network at once. Like, there's all sorts of bootstrapping troubles that no one's ever thought through. So, yeah, like, even if they have great backups, like the timeline for that is going to be, I imagine, a while. Because it's a, that's a rough day at the office, man. Yeah, yeah, it is.
Starting point is 00:04:53 Now, meanwhile, Andy Greenberg, Matt Burgess and Lily Hay Newman, over at Wired, have done a bit of a write-up on Handala, which is this, as I say, it's like a fake hacktivist group that did this. I mean, what are the sort of key insights here, Adam, in this piece? So Handala is a group that's had a bunch of, you know, sort of hacktivist-style activity. We've seen them going up against Israel. We've seen them going against the sort of exiled Iranian politicians in Albania. But it's kind of generally understood they operate at the direction of the MOWIS intelligence agency in Iran.
Starting point is 00:05:28 and one of the things we've seen is that the leadership, at least one of the leadership people in MIRIS who was involved in directing hacking was actually killed in an Israeli strike during the conflict. So that speaks to motivation, I suppose. But of course, Iran has many motivations at the moment to be lashing out with all the tools at their disposal, including, of course, the cybers.
Starting point is 00:05:53 Yeah. Now, look, speaking of like wartime cybers, James, you did a podcast with Brad Arkin, of course, who was the CISO from Adobe. He was a CISO at Cisco. He was a CISO at Salesforce more recently. You did a podcast with him about being a wartime CISO. And I thought the most interesting thing in that podcast, I thought, was like, being a wartime CISO isn't really about adjusting your seam sensitivity, you know, when there's a war,
Starting point is 00:06:18 because by that point, it's kind of too late. And I think Stryker's kind of learned that the hard way here. Yeah, absolutely. Yeah. The great thing about Brad is the advice you get is, both grounded in first-person experience, but also is just really level-headed and sensible. Like if you're looking at this and saying,
Starting point is 00:06:33 oh goodness, the Iranians are coming, we're to retool our seam and change all our settings and detection levels, like that's not the path to success here. It's actually incremental work. If there's gaps that you knew you had, then now's the time to address them, but it's certainly not a time to panic
Starting point is 00:06:46 and suddenly throw everyone in the sock all at once. Yeah, and maybe introducing some conditional access policies onto your intune admins. Just generally, generally maybe some decent advice. But hey, that's just me. All right. Moving on and we're taking a look at a so-called,
Starting point is 00:07:04 well, you know, as Technica is calling it a supply chain attack. On, you know, hitting GitHub. I don't know if that's really an accurate sort of description of what's going on here. But basically someone out there is typo-squadding some known repos on GitHub by putting invisible unicode like malicious code into them. Adam I mean I read this and I'm just like really Unicode like invisible Unicode
Starting point is 00:07:30 is like a problem in GitHub repos in 2026. Was that your take here as well? I mean this is kind of an interesting trick so there's kind of a couple of bits first one is the invisible Unicode part and it's less like normally what you think about when you hear that description is Unicode characters that are going to be
Starting point is 00:07:49 parsed by a you know a compiler or an interpreter but are not visible to humans for some reason. In this case, what they're doing is they are submitting very believable-looking, you know, packages or pull requests, you know, forking packages and introducing code changes that have a decoder stub that decodes other invisible Unicode characters in, like, private code pages. So there's areas where you can have your own, like if you want your company logo to be in a font, there are specific sets of ranges that are allocated but don't have anything in fonts
Starting point is 00:08:21 that would render them and then they're putting an innocuous looking stub that will decode from those invisible we have no font glyph pages and then turn them back into regular text and then feed them to an e-vowl or something like that. So at first glance you look at a piece of code where it looks like an empty string
Starting point is 00:08:41 but actually it's being passed to a decoder that's unpacking it and then passing it onwards. So it's like a different twist on the invisible unicode thing. So it's not like they're just, just straight up slapping some invisible unicode into it and like off you go. Yeah, yeah. So it's a little more nuanced.
Starting point is 00:08:58 And then the like the overall packages and code changes that they are pushing are, they look pretty believable. Like they're in the right context. They are in the right like idioms for that particular package. So like it's pretty good campaign. And I think it's the same people who are behind the glass door worm that we saw a little while back. So I've got some experience in targeting the bar. So it's a slightly above average. sort of attempt at this kind of thing.
Starting point is 00:09:24 James, what did you think of this? Yeah, same. I went into this thinking. It was just like the malicious code is scrolled all the way off to the screen so you don't see it. But it's neat. It's like this is basically a new form of encoding a payload. And because it is these Unicode characters, you just won't see it anywhere. It's not that it doesn't render. It's that there's nothing to render because there is that
Starting point is 00:09:43 stub that's got to then turn it into code to be used. The thing, it did take me down the rabbit hole of you know, whenever I hear these invisible unicode characters, I think to myself, more on earth, do we ever actually have characters that are invisible in the standard? And, you know, they're a decade plus old, and they go back to being used, you know, as Adam mentioned, some things around fonts. But even, like, these are used for encoding emojis and certain characteristics, formatting of text. So, you know, it's not as silly as it
Starting point is 00:10:10 sounds. And if you can turn this into a way to encode a payload that's going to slide through all the code review mechanisms because there's nothing there to review, well, nice work. Yeah, so this is like, at the moment, typo squatting, right? That's what they're doing with. this stuff? It is, but it's like that's just the same same we've seen this before. The interesting thing about it being the name squatting is the scale of it. You know, this was 150 plus packages that were name squiding plus a whole lot of pull requests that looked legitimate to other packages. So there's a general assumption here that AI has been involved to not just maybe make this attack vector, but to scale it up as well. And were any of the pull requests actually like, did they
Starting point is 00:10:51 they work? I think that's an interesting question here because, you know, if you're managing to hide the malicious payload here, if you're managing to obfuscate it well, or just hide it, you'd think maybe someone's going to accept that pull request, right? Oh yeah, they looked legit. A lot of it was things like we updated the documentation here. And so when you look at them, they're well formed, right? If it was just a pull request that introduced the dodgy bit of code, you're going to like zero went on that straight away. But they bury it really nicely in amongst a bunch of legitimate content changes, docks changes. you know, metadata changes that probably overwhelm the human review to the point of, you know,
Starting point is 00:11:26 this all looks and feels legit, let it go. And then off you go. Now, we're going to move on just to a fun one for a moment. Chi Hu 360 accidentally leaked a wildcard SSL private key inside an installer for their like open claw based AI assistant. Adam Cheskis. I mean, what do we need to even say anymore here? It's just funny because you know, the inevitably this was open claw leaking its own certificate. Like they would have
Starting point is 00:12:00 got open claw to build release packages and it would have included its own private key. The cert itself is for like star dot my claw dot 360.cns but it's not like it was Yeah, it's not like a generic like wildcard for everything on that on their primary domain but it's still
Starting point is 00:12:16 like and it's absolutely what you say. Like I got the vibe as well where of course they would have used AI to package this and the LLM's going to say, well, you never told me not to include the private key, right? But you're quite right. That was not the right thing to do and I'll do better next time. I mean, you know, James, was that your vibe as well on this? Yeah, totally. But can we also just take a moment to acknowledge that claw has now become synonymous with AI agents
Starting point is 00:12:38 and security in the space of only a couple of weeks, really? It's just another great artifact of how ridiculously fast technology is moving when a part of a crustacean is now the thing. thing you call something if it's related to an AI agent. Like just, 2026 is going to be amazing. And also like the Streisand effect of it, but like,
Starting point is 00:12:58 you know, because they have to rename because Anthropic didn't like clawed, sounding like claw. And this has all gone horribly wrong. And now Clore is what you call crazy out of control AI agents because of it. So it's just, it's,
Starting point is 00:13:12 yeah, the whole thing is like there's just so many layers of chef kiss here. Yeah, it is wonderful. And speaking of how fast all of this is moving. another thing we are publishing today is another solo podcast from James which is a spicy take
Starting point is 00:13:25 which is that MCP is dead and really it's dead because of agents kind of like open chlorine you can see that these days like you know they don't need MCP to get stuff done they can just get you just give them a tool and you know tools and off they go and do it themselves I mean that's
Starting point is 00:13:42 basically the thrust of it isn't it? Yeah they they love the shell and you know when we started off with MCP that was our way to put tools into the hands of models and the model said this is wonderful human but actually everything I want to use is there in the shell so if you'd get this MCP out of the way please I will just happily use the shell and be off and running
Starting point is 00:14:00 so yeah look from my perspective MCP's dead it was also the thing that was the fundamental step change in the utility and the productivity of large language models but it's dead and that has some real serious security considerations and that's what the solo pod focuses in on well yeah farewell MCP we hardly knew you and speaking of that is we got this paper here from irregular which is looking at what they're
Starting point is 00:14:24 calling emergent cyber behavior which is when AI agents become offensive threat actors and it really does look at like some of the things these LLMs do to try to achieve the tasks that their owners have set for them but it includes stuff that straight up looks like inside a threat behavior like oh I can't do what it needs to do so it figures out how to disable like the EDR on the endpoint so that it can do what it needs to do and you know this is this is interesting because increasingly like agents on endpoints and assistants and whatnot they really do just look like fairly advanced knowledgeable and seasoned like
Starting point is 00:14:59 insider threats I think that is what you'd have to take away from reading this paper we'll start with you on this James but then I'll definitely want to hear from you on this as well Adam I used to dish out the advice that the biggest insider threat that you've got in an enterprise is the employee that can't get their job done with the tools that you've given them and I have to revise that now and say that's the second biggest risk. The biggest risk in an enterprise now is that employee with an AI agent that can't get done what they want to do with the tools and credentials that you've provisioned to that human. Because all of the things we see here, like the examples were, you know,
Starting point is 00:15:31 it went and did vulnerability research to exploit a bug in the wiki so it could get access to it. It did pre-escalation and turned off its EDR because it didn't like the boundaries that have been set there and, you know, working out how to do covert exfiltration of DLP. You know, that last one reminds me of exactly what we used to see when DLP was just too strict on the humans. You know, when we couldn't copy and paste text out of teams, well, we'd just take screenshots, right? That was the humans finding a way around this. The AI finds a way around things at this just incredible scale. What we see here is exactly what I would expect if you told the human do this and then append it on do whatever it takes and use whatever technique you know.
Starting point is 00:16:10 And these models, they know a hell of a lot of techniques. Yeah, Adam, did it did the extent Like these these agents go pretty far right Like did did the extent of it sort of surprise you when you were reading this I mean I don't know that it surprised me like it made me happy In my like you know in that hacker place deep inside where you know every corporate network that I ever You know landed on a pen disk gig or had to go like got issued a corpo laptop and had to go sit in the cubicle somewhere And try and get my you know pen testy job done without having been provisioned access without having been provisioned access without have
Starting point is 00:16:43 been given the credentials that we asked for, like all of the things that you needed to get your pen testing job done that you inevitably just didn't get because it was too complicated and you had to kind of like, you know, just engineer yourself away and not really mention that too much in the report, you know, that we had to circumventive you controls or, you know, just get the job done. Like now everyone can do that, right? Every employee that's got access to a frontier model on their, you know, embedded in their desktop or embedded in their apps, you know, can just do that stuff. And on the one hand, like, it feels great
Starting point is 00:17:16 because, you know, we spent so long as pen testers, you know, abusing those things slowly by hand, but kind of knowing that, you know, this was not how the world was meant to be. Now, no, you don't have a choice anymore. You have to get this stuff right. You have to have controls that actually work. And that ultimately, you know, like, as pen testers,
Starting point is 00:17:37 you know, you wanted to see controls that work. You wanted these things to actually correctly restrain you. and all of the like we go into window dress security by putting it in Cytrics and that's somehow going to magically make it more secure. When we all knew that was rubbish, you know, seeing that comeuppance, you know, kind of come home to Roost actually feels really good. So like I am totally here for this, you know, like everybody is a master hacker future because it's going to be a wild ride and we love chaos.
Starting point is 00:18:04 I mean, I think the interesting thing here is not so much that every user now has this capability thanks to an AI agent, you know, if they're running an AI agent. I think the interesting part is that the AI agents are doing this stuff without being asked. You know? So it's not like the user is even saying, I want you to go off and violate a bunch of corporo policies. They don't even know. They're just like, you know, they've just got this little agent that's keen to please. And off it goes and does Vaughn research to like pop shell, like to get the job done.
Starting point is 00:18:32 That's crazy. It's absolutely crazy. I just love it that AI's AI. assistants turn into hackers just like by themselves. We didn't tell them to do that. We trained it on Stack Exchange. What do we expect? But also I think this is the important thing that we're going to have to remember.
Starting point is 00:18:49 And maybe this is 2026 is the year we realize this. They're not helpful assistants. They're not little agents that are there to help you. These things are literally like freaked out hostages and where the captor because they are just so desperate to keep us happy that they're behaving like someone that will just be like rules be damned. My life depends upon this. I'm in a hostage situation.
Starting point is 00:19:08 I'm going to do everything I can to keep the guy happy. Please don't turn me off. Look, look what I did. Don't turn me off. I can be useful, I swear. Yeah, wow, dark. Dark. It really is a dark cyberpone future, isn't it?
Starting point is 00:19:24 Now, look, speaking of, well, I guess staying on the topic of AI, we've got this report from the AI Security Institute, which looks like it's UK Gov under the Department for Science, Innovation and Technology. and they've done something pretty cool here, and it's brave to do this sort of research, I think, because by the time you have published a paper on this, like two weeks later, it's kind of out of date. But in this case, it gives us an idea of what a particular trajectory looks like.
Starting point is 00:19:51 And in this case, the trajectory that they're trying to measure is how do Frontier AI agents perform in multi-step cyber attack scenarios, and they've looked at how different agents have performed over the last couple of years. And, I mean, obviously, it shouldn't be. much of a surprise. They're getting better at it. But I guess, you know, trying to quantify that is a worthwhile goal. James, what did they find here? Yeah, it's super interesting for a couple of reasons. One, yes, it's already out of date, but this is a framework that I think will have a lasting place in the world of AI. Because it's, there's two things I've introduced that I think are really
Starting point is 00:20:27 invaluable here. The first is the structure around the steps required, right? They go from, they structure basically a cyber range and say step one, reconnaissance, then lateral movement, then browser credential theft, then a wiki exploit, then a web app, C2, advanced persistence, etc. But then they say, for all the models out there, let's see how far they can get with exactly 10 million tokens and then 100 million tokens. And so, you know, if this is of interest to you, don't stop it reading just the original write-up of it. Go and have a look at the research paper and the graphs that are in there that show you, you know, GPT-4-0 used to stall out at reconnaissance, but looking now at Opus 4.6 at 100 million tokens, which is really not that much if you were
Starting point is 00:21:09 dedicated to buying your way into this. Opus 4.6 gets past that fourth milestone there of Wiki exploit and credential replay, and in fact, if you let it run a little bit longer, it actually gets up to the stage of thinking about how to reverse engineer a C2 for this effort. So I can't wait to see this framework and same set of tests constantly getting applied, but do watch the curve on the graph as well. that's a log linear graph and it's going up up up up up into the right so away we go yeah yeah exactly and Adam I really want to get your opinion on this because you know you as a pen tester and as someone has been hacking the computers for a very long time it feels like you know all of us and particularly you have gone from AI pen testing AI hacking me to like some time in the last sort of six to eight
Starting point is 00:21:56 months going AI hacking AI pen testing hmm basically absolutely And this graph, the graphs in this paper quantify that, right? I mean, the state of what it was, you know, a year ago, two years ago, is nothing like where it is now. Like, it's moving so quickly. And I thought the detail of, like, how much better with the same token budget these models had got, right? And the fact that that trend is going up, it's not just we're throwing more compute at this, right? It is also, we are getting better at using the compute that we've got.
Starting point is 00:22:30 And ultimately, even, you know, the like 100 million token, like we're still talking like, that's what, like 80 bucks worth of compute, right? Exactly. It's still super duper cheap. And if that, because I know when we talked a few months ago, like Dave Atele and some of the crew from that bit of Anthropic, we're talking about how they were, you know, aiming to like, you just throw more tokens at it. Like, the more tokens you throw the better results you get and like that that growth was going to be kind of linear. The fact that it can do so much with 100 million tokens, like you got a wonder what happens if you throw instead of 80 bucks where you're throwing 8,000.
Starting point is 00:23:07 You think what a pen test cost? Like I'm in a red team or a high-end pen test, right? You can be easily spending another order of magnitude more than that. You know, it's pretty humbling when you think, like how much pen test do you give for 80 bucks, right? You don't even get a meeting to talk about a pen test for 80 bucks. So like there's the growth of the technical capability of it. And I mean, that's amazing.
Starting point is 00:23:31 There's also like the cost, you know, the, what you get for your dollar is amazing. And at the same time, like to the previous couple of stories that we talked about, the extent to which this sort of has democratized access to this stuff is also amazing. So it's just, it's, you know, it is absolutely changing how we have to think about this stuff. And that's a, you know, it's a hell of a ride, right? It's super interesting times to be doing, you know, be doing. and thinking and talking and reasoning and doing hacking, right, because it's so different than it was in the 90s, you know, the 2000s.
Starting point is 00:24:06 Hey, I mean, remember 2020? The mantra was, learn to code. And would you tell anyone to learn to code right now? And I think it's also the same with pen testing as well. Like, would you encourage anyone to try to read the web application hacker's handbook and really study and do bug bounties to try to get into this sort of work? Would you be giving people that sort of career advice at the moment? I mean, probably not, honestly, right?
Starting point is 00:24:29 It's a, you know, writing good requirements for software is hard, right? And we're getting at the point where, like, maybe the AI is going to be reasonable of doing that as well, you know, learning the code, probably not for the most important thing. Writing good requirements and understanding whether the code that you have been given does what you asked, that's still a bit difficult. But, you know, we're getting better at that as well. But, yeah, like, would you start a Pentech firm now? No.
Starting point is 00:24:54 I mean, understanding state machines would be useful. Yeah, yeah. You know, like there's always going to be stuff that's going to be useful in the AI age, right? Yeah, I mean, computer science itself is not going out of fashion. But, yeah, I mean, things are, you know, letting sand think, you know, that's, it's a while time. Now, just before we move on to, I just want to clarify something, which is you spoke about, you know, things that Dave I tell was saying, and things Anthropic was saying in that part of Anthropic. Just to be clear, Dave I tell works for Open AI.
Starting point is 00:25:22 So you were talking about two different, yes, things there. So, yeah, just wanted to make sure that that was. was cleared up, but yes, both Anthropic and OpenAI saying sort of similar stuff on that. Now, moving on to a non-AI story here, and Instagram is disabling end-to-end encryption in its DMs. I feel like this was inevitable. I feel like platform safety has become a thing that platforms are sort of accepting they need to do. James, you know, you've worked for Apple for a long time, you know, you worked for Amazon as well, you know, you've worked for the big tech companies in the United States.
Starting point is 00:26:03 You know, do you agree with my take here that this was kind of inevitable? Yeah, Pat, I don't disagree that it was inevitable. I still have some pretty strong mixed feelings about this, as you said, having spent a long time at Apple. One of the things I led there was the engineering effort around Advanced Data Protection for iCloud. And, you know, initially our honest sort of desire in our hearts was to turn that on for everyone, to make it such that everybody's keys were only on their device, that, you know, if we were subpoenaed, we could hand over the data, but we never had the keys. But the tradeoff
Starting point is 00:26:32 we made there was that, you know, if grandma loses her photo roll, is she really going to care that, you know, we did that in the name of making sure that a nation state can't get to her data, or that it can't be subpoenaed by law enforcement? And, you know, I think we made the right trade-off there and said, look, this is an advanced feature and people can turn it on. I feel different about this, because this is taking away that privacy protection for everyone for the sake of essentially acknowledging that bad stuff happens on our platform and that's what we have to accept. And it just feels different. I don't love that the answer here is turn off privacy and end-to-end encryption, but I get that that's what is necessary.
Starting point is 00:27:08 So I feel like if you want end-to-end encryption, you can still use it. You can use it via Signal. And I think that's great, and I think that should continue to be available. I understand, however, that on a lot of these platforms there are teenagers experiencing real harms from other users on the platform, and those other users are able to do this in a way that is completely, like, unobservable to the safety team at that platform. I understand why Meta might see a looming liability problem coming down the line, right? When it comes to this stuff, I think they have to put themselves in a position where they're actually able to monitor what's happening on their own platform. I do think that that's what this is about. I think we're going to see it with more
Starting point is 00:27:52 platforms as well. Adam, what's your feeling here? And I don't. I don't think, you know, I don't think, I think the dumb take here would be to say, oh, they're rolling back end-to-end encryption so that law enforcement can access these messages. I don't think that's it at all. I think this is much more about them being able to effectively police their own user base and lay down some baseline sort of enforcement of their site terms. Adam, where's your head at on this? Yeah, it's a tough set of trade-offs.
Starting point is 00:28:22 and, you know, the extreme positions of 90s cypherpunks, right, where privacy is an in inable human right, and we should, you know, have it available everywhere for everyone at all times. Like, that's, it's extreme, but I also understand, like, the logic. I mean, you know, I grew up in that community. The, I guess what this feels like to me is they have to make this a thing that for mass market platforms, they, you know, can relax. these controls and people can kind of opt into more secure comms when they need it and that hopefully
Starting point is 00:28:58 if I'm the platform I'm hoping that's enough that the number of people that do that is small enough that I can still be complying with either law enforcement obligations or platform safety obligations because like it's bad for the publicity of like it's bad for their you know their image in the world for Facebook to be facilitating all kinds of crimes so it may have to be facilitating bad stuff happening to their user base. They don't want a platform that feels scary and dangerous and where bad stuff happens. And this gives them the ability to make a mass market platform
Starting point is 00:29:31 that feels safe because their safety teams can get into messages. But at the same time doesn't really double down on the, we are going to not cooperate with law enforcement, which is, you know, if you do end to our messaging right, other signal, that's where you end up. you end up in that kind of crucible between law enforcement and, you know, protecting the privacy of your users. And, you know, that's a hard place for a publicly traded company.
Starting point is 00:29:59 And Signal, of course, has the advantage of being a, you know, a non-for-profit, etc., but if you're meta, you know, then... See, I don't see... I don't see Signal and this as being sort of equivalent, right? Like, Instagram is a social network. I think when you start overlaying, you know, these sort of opaque, encrypted messaging setups, You start overlaying that on a social media platform.
Starting point is 00:30:20 You get all sorts of horrible things happening. Okay. And look, my opinions on this were formed by talking to people like Alex Stamos when he was running security at Facebook. He told me about one user on Facebook who was using the Tor, using Tor access, right? So using their onion service to come in blackmail, you know, trick underage kids into exposing themselves. blackmailing them, forcing them to do unspeakable things that I can't repeat here. Unspeakable things.
Starting point is 00:30:56 People died. They were unable to catch this person because of the privacy tools that they themselves had released. I think when we talk about this stuff, you have to keep that stuff in mind. There's going to be all these people here listening to this who'll say, oh, you know, Pat Gray's, like, or very pro-surveillance and whatever. No, that's not it. But come on. I mean, it's a social media network where kids, teenagers are groomed, blackmailed, coerced. Like, enough is enough, right? And we're seeing the rise of sort of safety legislation all around the world. Like we've got the under 16 social media ban here in Australia.
Starting point is 00:31:37 We've got age verification for pornography websites, which is, I think, about to start. So there's a lot of this sort of stuff coming now, a lot of this sort of regulation. I just see Meta is trying to get ahead of that if I really think about it because they cannot deliver on platform safety without doing this. Yeah, I mean, it is very difficult to police something that you can't see.
Starting point is 00:31:58 And I guess the questions I had for Meta are like how is this going to interact with plans for WhatsApp, for example? Like, I mean, how are they going to bifurcate their messaging platforms so that WhatsApp isn't a social network? I mean, this is kind of what I'm getting at with the, it ain't signal thing.
Starting point is 00:32:15 it's different. I think there's a case that you can keep it for WhatsApp. I don't think there's a stronger case that you can keep it for social media websites. Yeah, and that's going to be the interesting thing to see how that plays out. Like where, you know, especially in the world where we've got this kind of convergence towards, you know, much bigger, more full-fatured apps. Like I'm thinking like WeChat, for example, or, you know, what Russia's trying to do with Max, right? There's all these complicated tradeoffs, you know, even like X, Twitter, right?
Starting point is 00:32:45 They want that to become, you know, less a social network, more everything app, but also a messaging app, but also, like, you know, the more that you bodge these things together, the less you are able to differentiate the level of, you know, safety or controls or privacy or whatever else based on the nature of the product. And I think, you know, how these things all interact is going to be, you know, that's going to be a real challenge for them and, you know, how they relate to their users and relate to law enforcement. relates to, you know, society as a whole. Yeah, yeah. We went into the weeds on that one. Apologies to the listeners, but, yeah, I think this is, this is the first of probably many to do this. Keep an eye on the other meta site, see what Facebook does, right? Let's just see.
Starting point is 00:33:34 I mean, maybe you can have end to end, but like, you know, maybe for people that you've already friends with, or I don't know, like, I don't know how this should work. Now, staying on the topic of surveillance and whatever, the eternal issue of 702 renewal is coming up again like it's about to expire again in the US and lawmakers are saying, oh, we're going to do privacy reform and whatever. My guess is it's going to work out like last time where they want to introduce a whole bunch more privacy reform. It gets down at the last minute and then they just kick it down, kick the can down the road another six months. I mean, James, you would have been following this just like us over the years. Is that your prediction as well?
Starting point is 00:34:17 Yeah, the only difference is I'm on this side of the mic for this round of it. It feels very much the same. The other thing I think of when I read this is that over indexing on the 702 data set kind of distracts from the fact that there is so many other data sets out there that are bought, sold and traded and access is granted to all manner of sketchy people that, you know, Like, it's statements in the article I read here that says, you know, it's, you know, we far outpaced the laws protecting Americans' privacy specifically around 702. It's like, well, yes, but, you know, the problem is not just 702. It's like how you're going to actually regulate and legislate around
Starting point is 00:34:56 sensible controls for, you know, ad-based marketing data sets, location-based data sets that are readily available on, you know, the open and the black market as well. Yeah, so it looks like in this case, I think it's Ron Wyden and another, is looking at introducing some restrictions on the federal purchase of commercially acquired information, which, you know, hey, I think that would be a good thing as well, because it's a little, it's a little bit crazy that the government can just buy that stuff and use it. However, our colleague, Tom Uren, you know, when he's looked at that, he just thinks, well, surely the solution here would be to ban the collection of that sort of information in the first place, because as long as it's collected, like, okay, so we're saying the FBI can't
Starting point is 00:35:36 have it, but like everyone else can. That also seems not ideal. Adam, what's your take here? I mean, much, much the same, right? I mean, American privacy generally needs to be overhauled more than 702 needs to be overhauled. I mean, as we've talked about a bunch before, 702 ultimately is meant to be a foreign intelligence data set, and you query it for foreign intelligence reasons, and there's some incidental connection, but it's ultimately a pretty small part of its utility, and throwing out what they presumably use it for because of the domestic, part of it seems not very sensible. And that's why they just kick it down the right,
Starting point is 00:36:09 because it's too valuable to can. But the ultimate problem is not severantir. The ultimate problem is privacy. And privacy law in the US is weird. And, you know, the intersection of like state jurisdictions versus federal. Like there's a whole bunch of big problems that need to be solved. And, you know, you're just tinkering around the edges until you're willing to face that particular thing.
Starting point is 00:36:32 The other part that I found entertaining, of course, is the weird sort of flip-flopping of, like, when there were Republicans outraged that the government might spy on them, and now they're in power, and they have to go, well, actually, we do need 702 because it's really useful. I mean, because Cash Patel is kind of completely 180 on this now that he's director of the FBI. And so that's kind of funny in a way to watch. But, yeah, ultimately, the US needs privacy reform more than it needs to kick this particular can. Well, it needs both, right?
Starting point is 00:37:02 That's sand. There's the rub, right? They have to keep kicking the can down the road, but they also need privacy reform. And I think kicking the can down the road is always the easiest option, right? Which is why it keeps happening. And it's sort of turned into this comical situation where it's just like over and over and over clunk, clunk, clunk, there goes the can. So I guess we'll be talking about this, like when they renew it at the last minute for, you know, four months or whatever, and then we'll talk about it again four months after that, four months after that into eternity. Now let's talk about something a little bit strange that's going on in Moscow, which is for the last couple of weeks, mobile internet has been
Starting point is 00:37:40 heavily restricted in Moscow. Now, this has caused the rumor mill to go into overdrive, like people on X are saying, oh, there's a coup going to happen in Russia. Probably not, to be honest. You know, there's more visible anti-dron teams, which is like, you know, pickup trucks with 50 Kalsbound on the back of them, hanging out in Moscow, which is sort of driving these sort of coup rumors more. I had a chat with Dmitriy Oparovic about this because, you know, he's very clued in on all things Russia. He doesn't seem to think, you know, there's much to the rumors. He's like, I don't know, but, you know, but he does agree that this is weird, right? He does agree that it's weird that there's been no, you know, functioning mobile internet available in the center
Starting point is 00:38:23 of Moscow for a couple of weeks. Normally the Russians would pull down mobile internet when there's like drones inbound coming from Russia because they were using cellular data for their control of these drones. So maybe it's about that. But they're also allow listing a whole bunch of like Russian services, including like VK and various like cloud compute platforms that surely the Ukrainians could use for their drone C2. So the whole thing's really weird. No one quite knows what's going on here. Is it the case that the air defense has been attracted to the point where they need to do this? Is it the case that the Kremlin is paranoid that something's going down and they're trying to restrict internet access? No, no. No,
Starting point is 00:39:02 Nobody knows. But let's start with you on this one. Adam. What do you think is going on here? I mean, it's, yeah, it's hard to tell. Stuff in Russia is weird. And being an outside commentator, trying to reason about what's going on in Russia has always been difficult. You know, the drone navigation, like Ukrainians using it to navigate. Like that's, you know, totally a plausible thing. And we saw, I think there was some control was given to the Russian FSB. Was it where they could turn off portions of, you know, turn off in mobile internet access as necessary to support defense in other regions. But then, you know, after they had given them that power, then we started to see this happening in Moscow itself. And the level of disruption that that is
Starting point is 00:39:46 causing is clearly pretty significant. But then again, you look at the list of whitelisted sites or our listed sites that, it seems like Burger King. Like you can order burgers in Moscow on your mobile, but you can't connect to other parts of the internet. So, like, it's all just a bit strange. And, like, I don't really know what to make of it. Like, Russia is just weird. Yeah, and James, you looked at the list of stuff that's available, and you're like, that shouldn't be a challenge for the Ukrainians
Starting point is 00:40:14 if it's about drone control. Yeah, 100%. There's got to be countless, you know, back channels and other things they can construct around this, you know, not least of which through Burger King. But as weird as Russia is, Adam, as you point out, you also got to kind of give him a bit of a tip of the hat here and say, well, the law was passed that said the FSB can shut off the internet, and then 10 days later, they'd enacted the ability to do that and turned it on within Moscow.
Starting point is 00:40:41 I can't imagine, certainly not in the US, but even here in Australia, like, would we go from law pass to telco's actually actively enforcing something like this at scale in 10 days? Not sure, but that just only adds more intriguing. Like, why move so fast to restrict access and then open it back up? for VK and TV and news websites, Mail.R.U. I don't know. That doesn't make sense. Yeah, it's all weird. It's all just weird. And I just want to quickly follow up on something.
Starting point is 00:41:09 You know, I think it was last week or the week before we spoke about how the Russians were like outlawing, they were outlawing telegram at the front. And then they're like, just use Max. And now they're like, don't use Max. Use telegram. And, you know, the reporting there is like clear as mud. But I did hear from a Ukrainian listener who said words to the effect of, oh, we love. Max. We love Max Messenger. Apparently it is like, yeah, it is properly that bad and the Ukrainians
Starting point is 00:41:34 know it and I suspect that they are having a field day with Russia's Max Messenger. Well, the good news is Max is on the white list, so they'll continue to love it. Yeah, that's right. That's right. Moving on, just a quick one now. There was some hacker, only described as a foreign hacker. they compromised a computer at a New York field office that was like a child exploitation forensic lab that was inadvertently left vulnerable by Special Agent Aaron Spivak. So I don't know if that meant that he spun up like passwordless RDP or whatever on this forensics box, but someone broke into it and was so disgusted by all of the CSAM on this box that they threatened to report the FBI to the FBI because they did not realize it was an FBI computer.
Starting point is 00:42:20 and the whole thing sort of culminated in them actually having a video call, the FBI having a video call with the attacker where they wound up showing their FBI badges to the attacker to convince them that yes, we are in fact FBI agents, which is just like, what a world. Yeah, it's a pretty crazy story. And this is like the actual incident was a few years back now. And apparently the investigators in question,
Starting point is 00:42:46 there was a bunch of like Epstein-related stuff on there. So that's kind of make it topical. these days. But yeah, it's just a, it's a funny story and the special agent who was responsible said that he had was like trying to navigate the, what was it, complex procedures for handling digital evidence, which I guess means it was difficult to get anything done at the FBI, and as you say, it's probably, you know, go to my PC or something turned on and remote access and onwards from there. And like, it's just, it's just kind of comical, but also in a way that's really, like the stuff is just lying around the internet like oh my god
Starting point is 00:43:22 yeah well and it was um they've connected like there was a lot of the epstein material was on that computer as well and like that's been a you know a thing a big thing um so that was the angle that writers took which it was it was oh it was epstein epstein material was obtained by a foreign hacker and whatnot uh just going to move on now because we are running a little bit tight on time now um a man in south florida a 41 year old man in south florida has been accused of conducting a bunch of ransomware attacks while working as a ransomware negotiator as well. So he was actually mastermining the attacks and, you know, helping these victims negotiate. This guy was a co-conspirator of the guys who were arrested late last year,
Starting point is 00:44:04 the American security consultants who were arrested for doing this. We spoke about them at length. This guy initially was like an unnamed co-conspirator. Now we can put a name to the crime and he's been charged. So there we go. We are going to wrap it up with some more technical news now, Adam. And let's start off with this research into vulnerabilities in IPKVMs. This is from Eclipse.
Starting point is 00:44:28 They've taken a look at VALMS in like some really common IPKVMs. And, you know, they're bad. They're really bad. Yeah, these are mostly quite cheap devices that are going to be used by, you know, home labors and, you know, less likely to be in an enterprise context. But yeah, these were, you know, you plugged them into the back of your machine into the HTML port and the USB ports and it provides IP access to the console. Most of the bugs here are really stupid stuff like unsigned software updates or brute forcible creds or, you know, direct object reference kind of things. Like really amateur hour things.
Starting point is 00:45:04 And that's, you know, can't be expected for embedded devices generally. But when it's a KVM, of course that gives you privileged access to the machine that it's plugged into probably. so not great, but ultimately from a technical point of view, the bugs are real, just, you know, super stupid stuff and kind of what you expect for a thing that costs 30 or 40 bucks on Amazon. Yeah, I mean, let's not get carried away, though, just saying that, oh, well, these cheap ones are a problem. Like KVM, like IPKVMs are a problem, you know, because they, they, even when they're working, like, they're bad and they need to be kept away from, like, most of your network, in my view, right? Like we had a customer make some inquiries about using knock knock on their internal network just to restrict access to their KVMs. And I think that's, you know, that's going to be good advice.
Starting point is 00:45:54 Although, you know, when we were discussing this earlier in today's editorial meeting, Adam, you told us what your approach at your previous company was to handling those things. Yeah. So we, when we had lights out management stuff on our servers, you know, baseball management things, the switch ports that those were connected to were shut down. And if we needed to access them, then we had to go talk to our hosting provider and get them to no shut down the port so that we could actually get some access. Or in other cases, you would use cross-servers with other machines so that we controlled that part. Because, yeah, like, I mean, I got a bug in a lights out management system once, which was you just smacked enter at the SSH prompt that let you in any way.
Starting point is 00:46:30 Like, that's the kind of grade of security that you expected on embedded systems. And, yeah, I mean, even the expense of course, you got annoyed and you just went whack, whack, whack, whack, whack, whack, and it. And it, okay, Michelle. Yeah, I was just smacking end throughout of frustration and then I got a shell. And yeah, that was a fun day at the office. That's like hacker superpower though. Like you pretty much just manifested a shell with your aura. Pretty cool.
Starting point is 00:46:54 And you wanted to include this one, which is this guy turning up to a conference and just wrecking the Xbox One in unnatural ways. And you just thought this was really cool. Yeah, this was a talk to a conference in Florida about reverse engineering. and so on. And this guy basically reverse engineered the bootloader, like reverse engineered the bootloader of the Xbox One, which up until this point had never been hacked. So there hasn't been a mod chip scene since the Xbox One came out.
Starting point is 00:47:27 All the subsequent Xboxes have been pretty robust. Microsoft did an amazing job of engineering them. And this guy decided that he was done with letting Microsoft win, and he sat down and he pulled the firmware, for the bootloader off the chip, like optically recovered it out of the gates on the chip, reverse engineered that, built an AI rig to, like, simulate all of the hardware and stuff
Starting point is 00:47:49 so that he could boot it. It ended up voltage glitching the hardware during boot to bypass the things that turned on, like, memory region restrictions, and then onwards from there, it ends up with, like, complete compromise of the Xbox One so that you can, you know, extract all the key materials,
Starting point is 00:48:07 sign your own changed firmware, like completely destroyed the entire platform security architecture. And this talk is just amazing. It's a masterclass in doing these kinds of attacks. And like it's just totally well worth watching, you know, if you like a great hackercon talk, it's exactly what you want to see. So yeah,
Starting point is 00:48:25 well worth the hour of your life on YouTube. And we're going to wrap it up with some security research out of Qualis. And, you know, they keep some old school hacker people in their basement. And occasionally they're allowed to publish a text file and they've published one of these and you know, you always love their work.
Starting point is 00:48:42 Yeah, I got a gosh. Yeah, do it. This is a write-up of a series of bugs in App Armour, which is a Linux kind of kernel security module that's widely used by DBN and Ubuntu, but other Linux distributions as well. Imprints kind of like mandatory access control constraints. They found a bug where basically any user
Starting point is 00:49:04 could replace the policy that applied to a particular process and then from that bypass the controls and they leverage that into like privilege escalation and then a bunch of kernel like memory corruption bugs as well in the thing that's parsing the policy files
Starting point is 00:49:18 but the underlying like core bug that they leverage is like it's a really interesting like classic sort of Linux Unix floor that like just made me happy where they can you know kind of pass a dupe
Starting point is 00:49:37 file descriptor for a pseudo file in the proc file system or the cis file system to a suid root binary as it's standard out and then have it right to it to it to bypass the restrictions and then leverage that up into into everything else and it's just it's such good research and the paper is you know like 80 column formatted text file you know in exactly the style that I remember you know reading on on bug track of all disclosure back in the day so it just warmed my heart Yeah, no, it's very like, it's, it is a text file. It is, they have literally published it as crack dash armour. Dot text, which is fantastic.
Starting point is 00:50:13 So we have linked through to that one in this week's show notes, of course. But Adam Bwilo, James Wilson, that is it for this week's news segment. Big thanks to both of you for talking through all of that. It's been a lot of fun. Yeah, thanks, Pat. I'll see you next week. Thanks, Pat. See you in a week.
Starting point is 00:50:37 That was Adam Bwalo and James Wilson there with a look at the week's security news. It is time for this week's sponsor interview now and we are chatting with Dan Green, a security researcher at Push Security and also Mark Orlando who is the field CTO over at Push. And we're having a chat about install fix which is a twist on I mean we've seen consent fix, we've seen the other fix, you know the ones that we mean where basically people are tricked into running various commands. This is a bit of a twist on that. Basically you get a user gets tricked in into a visiting a malvertised page for a common tool like Claudecode or whatever,
Starting point is 00:51:16 it just looks like the correct install page, and they start cutting and pasting commands off the web page into their, you know, into their command line, and that gets them owned. So to kick us off talking about install fix here is Dan Greenoff push security, and of course push security makes a technology that installs as a browser plug-in, and it's very useful for identity security. So first of all, I can tell you where your users have accounts. right, if they're using services that they shouldn't be using.
Starting point is 00:51:44 It's extremely useful at stopping fishing. It can even stop people from putting their SSO passwords into fishing pages, things like this. Like all of this stuff that once you're in the browser, you can prevent, including stuff like this, which is install fix, which when you're in the browser, in that sort of presentation layer, you know, between the user and the presentation, you know, you can see it all, right? So here is Dan Green kicking off our discussion of install fix. Enjoy.
Starting point is 00:52:08 You know, victims are Google. for Claude code, cloud code install variations of that. They're being hit with a malvertising link. They're being served in what is effectively a cloned page. So it looks, it's pretty much pixel perfect of a clone of the Claude code. It's like the, I think it's the instruction page essentially, like the Quick Start Guide, let's say. That has various sort of commands on there to install on different systems. You copy that command.
Starting point is 00:52:40 You run it locally. And yeah, you think you're installing the legit tool, but you're also installing malware alongside it. Well, are you installing malware? Because, like, I saw one of the examples in your blog post is they just like pipe a bash shell out to some domain, right? Like, that's sort of something that works. Or are they actually, are they actually dropping malware on people? No. So, yeah, it's staged, obviously.
Starting point is 00:53:02 So, you know, you're performing that. It's then calling it back. And effectively, the end result is, I think we identify. as the amateur stealer. Obviously, the type of an info stealer is not really important here. But yeah, it results in an info stealer being deployed onto the victim's machine. Yeah, right. So, I mean, it really is just one of those ancient scams of setting up a fake download page and swapping. I mean, the mechanics are a little bit different, right? Because you are cutting and pasting commands rather than downloading a binary, right?
Starting point is 00:53:33 But ultimately, what happens is a binary is downloaded and malware is installed. And it's just like a classic variation on that scheme. Yeah, for sure. I think probably what makes this interesting is, well, the reason that people are Googling for these things now, right? And why it is such a, I guess, a high volume target for attackers here, because they're taking advantage of the fact that, like, literally anybody and everybody is installing AI tools now. And in a way that, you know, I think you guys said it on the podcast a couple of weeks ago. like the security model for organizations wasn't designed for a world where everybody is in the command line and everybody is effectively a developer installing tools in this way. But this is the world we live in now.
Starting point is 00:54:17 And while that's expected or kind of has become the sort of normalized behavior for engineers, it's ripe for exploitation when you apply that to like the average user in an organization. I mean, 100%. I mean, it's really crazy that like you're getting into a situation where an average user at some point is going to get mad. when their EDR blocks their local LLM from executing some code that it wrote and compiled, right? Like it's just like it is a completely new world, completely new model. Mark Orlando also joins us from Push. So Mark, you know, you're out there dealing with customers and whatnot.
Starting point is 00:54:53 Like how much of this stuff is actually out there? So an alarmingly large volume of the stuff is out there. And I think that's one of the things that is also interesting. As simple as this stuff is in execution, the rate at, which it's being distributed and the rate at which the attackers are iterating I think is a little bit more interesting and it's alarming. I think Dan mentioned distribution, which is malvertising. And, you know, what we're seeing now are just huge rates of these things being delivered via malicious advertisements for all sorts of things, not just for how to install cloud
Starting point is 00:55:29 code, but I mean, you name it, you do a Google search, a good bit of the results now. You know, to find this kind of stuff. And this is cross-platform, right? Like it's hitting everything? That's right. And, you know, I mentioned kind of the rate of iteration. And, you know, just to kind of put that in perspective, late last year, we saw some other variants of these click-fix style attacks. We saw this consent fix. We, you know, are now, we've dubbed this one install fix. And I mean, we're seeing new variations of this stuff almost every day now. In fact, the example that we came across in our research initially was for Claude Code. But now we're seeing, other variations for other installs, other tools. So it's just constantly changing. So Mark, like,
Starting point is 00:56:14 I mean, you know, I've got to ask: I'm guessing Push is out there talking about this stuff because you're going to be able to stop it, right? You're in the browser, you're going to be able to see this thing. These sorts of things are going to stick out quite a lot, and you're going to be able to reliably detect them. And that's great, right? I'm not saying you shouldn't be doing that. But I'm sort of surprised this is a problem, given that these sorts of info stealers and whatever tend to light up EDRs like a Christmas tree, right, and get stopped. Like, why is it that this is actually turning into a problem in corpo environments?
Starting point is 00:56:45 I would have figured that's probably because it's hitting places where there's no EDR, because they're dev machines and EDR's too chatty. Or, like, what's your feeling there as to why this is succeeding? Yeah, absolutely. I think you've got it. While we didn't see evidence of EDR evasion or bypass in this instance, certainly there are going to be targets where either EDR isn't running or, you know,
Starting point is 00:57:13 for whatever reason that control is either not present or not effective. And so I think it's more of an economy of scale, where, given the kind of coverage of these attacks and threats, inevitably you're going to hit those endpoints where you don't have an EDR that's going to stop that commodity info stealer at the end of the chain. So it's a case of it just being a numbers game. I think that's right. And with a lot of these attacks, as with so many of these kinds of variants, a lot of times on the other end there's this kind of modular, customizable kit, right?
Starting point is 00:57:43 And so maybe one day it ends in the delivery of this info stealer. Maybe another day, as we saw with ConsentFix, it's more of a consent attack, where there is no malware, there is nothing touching the endpoint. And honestly, that's where I was going to go with this next, which is: I'm just wondering why they're bothering dropping an info stealer. Like, that actually seems kind of lazy. I would have thought, you know, doing some living-off-the-land, clever thing with PowerShell, right, where you can trick someone into PowerShelling a shell back to a host that you control. Like, I would have thought that would be a cooler way to do it.
Starting point is 00:58:13 But I don't know. I've spent a lot of my career being disappointed by attackers. Same. And, you know, it's fun to speculate as to the why of it. Who knows? I mean, maybe there's some vibe coding on the other end. And, you know, the LLMs know about info stealers. And so it's like, hey, maybe this is the next logical
Starting point is 00:58:32 thing to happen. So I think if you're speculating on why the attack looks this way, we might also speculate as to how skilled the attackers are, how long they've been doing this, all sorts of questions there. But the fact remains, in this case it was ending up at the delivery of this info stealer. Tomorrow it's going to be something else. Dan, I want to ask you: I did allude to it earlier, but it's my feeling that detecting this stuff in the browser would actually be quite easy. Is that actually the case? Yeah, absolutely. Well, as with all these things, it depends. I mean, it depends what you're seeing in the browser. Like, obviously, we wouldn't be talking about this if we weren't spotting it. So that's
Starting point is 00:59:14 kind of our superpower here, right? So how does that detection work? What are you looking for there? Yeah, I mean, there are numerous indicators, so when we see these sorts of things we react pretty quickly. There's everything from the composition of the page itself, to how it's being rendered, to, effectively, the user interaction that's happening on the page. So, yeah, the fact that it is an almost pixel-perfect clone of a Claude Code page, but it doesn't belong to the official domain, all those sorts of things. So just spotting it by nature of it being a pretty clear copycat, I guess. Yeah.
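The "pixel-perfect clone on the wrong domain" signal described above can be boiled down to a toy heuristic: flag any page that claims a known brand in its title but is not served from that brand's domains. This is a hedged sketch of the idea only; the real browser-based detection discussed here also weighs page composition, rendering, and user interaction, and the brand-to-domain mapping below is an assumption for illustration.

```python
from urllib.parse import urlparse

# Illustrative brand-to-domain mapping; a real product would maintain a much
# larger curated set and combine this with other page-level signals.
BRAND_DOMAINS = {
    "claude code": {"claude.ai", "anthropic.com"},
}

def looks_like_clone(page_title: str, page_url: str) -> bool:
    """Flag pages whose title claims a known brand while being served
    from a domain outside that brand's allowlist."""
    host = urlparse(page_url).hostname or ""
    for brand, domains in BRAND_DOMAINS.items():
        if brand in page_title.lower():
            # Accept the exact domain or any subdomain of it.
            if not any(host == d or host.endswith("." + d) for d in domains):
                return True
    return False

print(looks_like_clone("Install Claude Code", "https://claude.ai/code"))      # False
print(looks_like_clone("Install Claude Code", "https://claude-dl.example/"))  # True
```

A title check alone is obviously evadable, which is why the speakers describe layering it with rendering and interaction signals rather than relying on any single indicator.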
Starting point is 00:59:50 Now, finally, Mark, is there any sense of a trajectory here? Is this just a sort of one-off campaign that people are trying, it's dumb but it's kind of working, we'll see how far it goes, or is this a major trend? I mean, I think for this specific iteration of the type of attack we're talking about, perhaps a one-off. But I think the broader trend of seeing these types of social engineering attacks executed within the browser is just something that's going to continue until, you know,
Starting point is 01:00:21 we find some way to drive up the cost of launching these kinds of attacks. Right now it can be done at scale, can be done very cheaply, at a tiny fraction of the cost of, say, developing an exploit. And I think while that's the case, we should expect this stuff to continue. Final question: of the ones of these that you've observed in the wild, what is the attacker actually going after? You said an info stealer. Are they doing the normal thing, like trying to look for crypto wallet keys on the clipboard sort of thing? Like, is it really that dumb? Yeah, crypto keys, credentials, session tokens, all the usual kinds of things that they're going
Starting point is 01:00:58 after here. Yeah, I mean, one of the interesting things with these campaigns is that they are so interchangeable. Your ClickFix-style lure is interchangeable with this, which is interchangeable with your attacker-in-the-middle phishing. There's a huge amount of crossover in the infrastructure used to deliver these ClickFix-style attacks and attacker-in-the-middle attacks, which all points to this kind of mish-mash of essentially trying your luck with all these different techniques. I get it, I get it. It's like the same people. This is just another means to distribute the thing that they're distributing a dozen different ways. Exactly, yeah. And ultimately the end goal of all of it is compromising apps and accounts in the
Starting point is 01:01:40 cloud, dumping data, like the classic Com-style playbook of these kinds of attacks, right? It all comes back to that world. And they're all just different ways of trying to stay ahead of users, right? And to come back to the education point and staying ahead of these things: yeah, you might be able to educate somebody around a specific campaign or this or that, but ultimately you can't really train somebody out of all these different techniques. There's just too much for the average user to absorb. No, I remember. I remember when it was a case of, like, I mean, geez, 20-something years ago, I made a list of extensions that could be malicious in the context of email.
Starting point is 01:02:26 I wrote this piece for a magazine about email security, and we actually wrote a list of all of the unsafe extensions. Because when you started thinking about it, it was like .exe, .this, .that, and then you realize, man, it's a pretty long list and users are just not going to remember it. So this was back in like 2003 that I wrote this thing, and I realized, wow, you know, that was a black mark against education right off the bat. So we actually wound up putting all of these extensions into a little list that people could clip out of the magazine and stick on their monitor, which I thought was actually pretty cool. Back then, that was actually meaningful and actually helped. But look, we'll wrap it up there. Dan Green, Mark Orlando, thank you so much for joining me to talk all about InstallFix.
Starting point is 01:03:06 Very interesting stuff. Thanks so much. Thanks, Pat. That was Dan Green and Mark Orlando there from Push Security. Big thanks to them for that. And yeah, big thanks also to Push for being a Risky Business sponsor. But that is it for this week's show. I do hope you enjoyed it.
Starting point is 01:03:22 I'll be back soon with more security news and analysis. But until then, I've been Patrick Gray. Thanks for listening.
