Risky Business - Risky Business #815 -- Anthropic's AI APT report is a big deal

Episode Date: November 19, 2025

In this week’s show Patrick Gray and Adam Boileau discuss the week’s cybersecurity news, including:

Anthropic says a Chinese APT orchestrated attacks using its AI
It’s a day ending in -y, so of course there are shamefully bad Fortinet exploits in the wild
Turns out slashing CISA was a bad idea, now it’s time for a hiring spree
Researchers brute force entire phone number space against WhatsApp contact discovery API
DOJ figures out how to make SpaceX turn off scam compounds’ Starlink service

This week’s episode is sponsored by Mastercard. Senior Vice President of Mastercard Cybersecurity Urooj Burney joins to talk about how the roles of fraud and cyber teams in the financial sector are starting to converge. Mastercard also recently acquired Recorded Future, and Urooj talks about how they aim to integrate cyber threat intelligence into the financial world.

This episode is also available on Youtube.

Show notes

Full report: Disrupting the first reported AI-orchestrated cyber espionage campaign
Researchers question Anthropic claim that AI-assisted attack was 90% autonomous - Ars Technica
China’s ‘autonomous’ AI-powered hacking campaign still required a ton of human work | CyberScoop
Amazon discovers APT exploiting Cisco and Citrix zero-days | AWS Security Blog
CISA gives federal agencies one week to patch exploited Fortinet bug | The Record from Recorded Future News
PSIRT | FortiGuard Labs
CISA, eyeing China, plans hiring spree to rebuild its depleted ranks | Cybersecurity Dive
This Is the Platform Google Claims Is Behind a 'Staggering’ Scam Text Operation | WIRED
A Simple WhatsApp Security Flaw Exposed 3.5 Billion Phone Numbers | WIRED
DOJ Issued Seizure Warrant to Starlink Over Satellite Internet Systems Used at Scam Compound | WIRED
Multiple US citizens plead guilty to helping North Korean IT workers earn $2 million | The Record from Recorded Future News
Cyberattack leaves Jaguar Land Rover short of £680 million | The Record from Recorded Future News
FBI: Akira gang has received nearly $250 million in ransoms | The Record from Recorded Future News
Operation Endgame: Police reveal takedowns of three key cybercrime tools | The Record from Recorded Future News
Inside a Wild Bitcoin Heist: Five-Star Hotels, Cash-Stuffed Envelopes, and Vanishing Funds | WIRED

Transcript
Starting point is 00:00:00 Hey everyone and welcome to Risky Business. My name's Patrick Gray. We'll be chatting about the week's security news with Adam Boileau in just a moment and then we'll be hearing from this week's sponsor. And this week we're speaking with Urooj Burney, who is the global head of risk and resilience at MasterCard. And we're talking to him about, I guess, why fraud and cyber departments at financial institutions were traditionally separate and why they're unifying. That is an interesting conversation and it is coming up later. But first up, yeah, Adam, we got some great stuff to talk about this week. And one interesting conversation to kick us off, which is this report from Anthropic
Starting point is 00:00:43 about apparently a Chinese APT group using Anthropic to heavily automate operations. Now, what makes this an interesting conversation, I guess, is, I mean, there's the report itself, but then there's been the reaction to the report with a lot of people poo-pooing it. And we're going to sort of wade through all of that. of all, why don't you just walk us through what the Anthropic report actually says? So they spotted a campaign that was using their Claude LLM tooling to carry out some kind of attacks. And they went and kind of pulled that thread. And it turns out that a Chinese group was, you know, had built some kind of framework to orchestrate attacks using Claude as one of the
Starting point is 00:01:25 components. And they were attacking, I think in the end, something like 30 companies with this set of tooling and in some cases we're successful, able to break in, and judging by like the speed of the operation and like the kind of amount of human interaction, Anthropic, have written it up as kind of like a reasonably automated set of attacks that was able to do initial reconnaissance, submit those results back to a human for approval, carry out, you know, attacks. We saw some conversation about, you know, using credentials that have discovered, some using like SQL injection, some using service lab request. four three flaws. So using technical flaws that the LLMs had kind of pulled together to break in,
Starting point is 00:02:05 escalate some access, extra the data, triage that data for further credentials and access, and then use that eventually to extract information from the target organization. So a pretty, you know, reasonable end-to-end, you know, hacking campaign orchestrated by an LLM on behalf of maybe the Chinese MSS. So it's pretty cool. I mean, it's extremely cool, right? So what's kind of surprised me about this is the number of people taking a cricket bat to the findings, right, taking a cricket bat to the report. And there's this strange line of attack that a few people have participated in, right? So one of them is Kevin Beaumont, who is quoted in, I have to say, an excellent write-up on all of this by Derek B. Johnson at Cibuscoop. If you're going to
Starting point is 00:02:44 read one story about this, it should be that one. But, you know, it quotes Kevin Beaumont in that piece, you know, saying that Anthropics Report lacks transparency. Okay, fair. We'll get into why that might be a little bit later on. But he also describes, he said it describes actions that are already achievable with existing tools. And I've seen multiple people, you know, criticize the report saying, oh, it's not a big deal because they used existing tools. I mean, that's not the novel part here.
Starting point is 00:03:10 I mean, that's entirely what you would expect from this sort of campaign. So I don't understand that criticism. Yeah, yeah. I mean, I think, you know, we want to see amazing, you know, superhuman stunt hacks that no human could ever pull off. Like that's what we want from AI hacking. And we're not there yet. We may well get there because, I mean, you know, AI is moving in a very quick pace.
Starting point is 00:03:30 But like the thing here is that doing it at scale. And Anthropic had some, you know, data points in the report about kind of like how many people they think were involved, like kind of what scale of the operation this is. And being able to do things, you know, successfully break into, even if it is only a handful of the 30 targets, you know, with a relatively small team and to do that, you know, kind of at scale. That's kind of interesting, even if it isn't the superhuman stunt hacking that we want to see, you know, and maybe we will. Yeah, I mean, look, our mantra on this show
Starting point is 00:04:05 is talking about how operational sophistication beats technical sophistication every time. I mean, that is something that we have been saying for 10, 15 years about APT operations, right? So to see this criticized on the basis that it's not doing anything technically novel, that's not the interesting part. The interesting part here, as you point out, is the scale.
Starting point is 00:04:23 So I think people are attacking this as in, they're sort of straw mounting a bit and saying that the report is saying something, it's not. I mean, this report is not saying that a Chinese APT crew just entered, go hack these targets into a prompt and off it went. That's just not what is alleged to have happened here. But imagine this. Imagine you take, you look at the pyramid of skills in an organization like MSS or NSA or whatever, right? you don't have to go too far from the top of that skills pyramid before the skills tend to thin out pretty rapidly right so you've got you really really a tier sort of operators right who can you know write exploits direct attacks and whatnot and you know a big part of their job is getting the
Starting point is 00:05:07 lower tier people to be effective and that's what this solves so imagine if you take your hundred best people from an intelligence org for example and just task them with using a AI tools to do these sort of campaigns, you're going to get just insane bang for buck. And I think that's really what the message is here. And it is significant. This is huge. Yeah. I mean, I guess there's a number of questions like why anthropic, like why are they exposing,
Starting point is 00:05:34 you know, an American firm to the inner part of their hacking. And that's, you know, kind of, it feels proof of concepty rather than a thing they are, you know, using in like real anger. Because as you say, like the, if you were to do this with the hundred best people on an intelligence, or like the results you would get would be pretty impressive. And in this case, like, they are still learning, right? I mean, one of the things Anthropics says in the reports is that there are a number of cases where Claude has, you know, hallucinated credentials or hallucinated access that it's got
Starting point is 00:06:02 or overstated its findings. And actually, when I was talking about this with one of my ex-colleges, you know, we said, like, this is exactly like a junior pentester. There are plenty of people that I worked with, you know, that got overly excited about something they saw, they misinterpreted something, or they didn't think about the overall security model or, you know, and, you know, training those people into being amazing hackers is what, you know, kind of what we did back at Enzomia. And the idea that you can do the same with LLMs with the right guidance and with the right framework and plumbing, like, and scale it,
Starting point is 00:06:35 you know, it's pretty wild. And I think, you know, you are, you are right in the sense that, you know, as people go all in on this type of thing, like, it's going to get pretty wild. And we are only seeing the beginnings of it. And of course we want to see the end state, right? I mean, I want to see all of the juicy details. And, you know, we want to see, you know, what does this look like right now and where is it going? And we want to be excited about that. But it is very easy to, you know, also fall into the trap of, well, I mean, of course, it's just automating, you know, some SQL app where anyone can do that.
Starting point is 00:07:07 But that's kind of, as you say, not really the point, right? It's not the point. Like, I actually was having a conversation earlier this week with H.D. Moore for an upcoming sponsor interview for the, for the weekend. show. And, you know, there was this interesting part. I don't even know if it made the final cut, but we were just talking about how, like, something like Run Zero, you know, it's not really at risk from agentic platforms, but it is something that Agentic platforms will use. Like, so for all of the products that I can think of that where it makes sense for it to actually have an MCP server, it's like Run Zero, right? Because you want agentic platforms to be able to use these things.
Starting point is 00:07:39 So, I mean, I don't see anyone criticizing an elite pen tester because they use existing tools. Like, are they only supposed to use novel tools. Like it's just such a weird way to argue about this. I think it's also interesting, yeah, that they used Anthropic, given that they, there were guard rails that they had to bypass, for example. And you're thinking, well, why would you do that when you could probably spin up like a deep seek rig, you know, in China and do it? So Cyberscoop has sort of wondered whether this is signaling. I don't think so. I think it's just like, why not proof of concept? Like, let's give this a go and see what happens, see if we get rumbled. I don't think anything too sensitive on their end was exposed, but, you know, the whole
Starting point is 00:08:20 thing is just fascinating. Now, one criticism that you do have of the report is that it lacks detail. I think it's about as detailed as it can be without having the senior executives come and absolutely kick the crap out of you for sort of giving people a template for how these operations are conducted. I mean, I think they've been reasonably detailed in what the shape of this operation was. I was sort of surprised to see you criticize it for lack of detail. I mean, you wanted to see the command lines. You wanted to see the P-Caps. Yeah, exactly. I always want to see, I always want to see the juicy details. But, you know, because there, so much of interacting with LLMs is in the specifics, right, of how you craft the prompts, how you convince it to do
Starting point is 00:09:03 the things and how you're kind of berate it into doing what it needs to do. And actually, I think was a guy from Anthropics Threat Intel team was talking to Cybersgroup. Yeah, no, he gave great calls. comments too. That's why it's one of those. It's a terrific story. People should read it. Yeah, yeah, he did. And he's, he said that like the framework that they built to do this was probably the bit that involved the most engineering, right? Because like harnessing the LLMs and all the MCPs and doing all the integrations and making all of that kind of plumbing work was the real challenge here as opposed to the LLDM. As you say, like they could have easily subbed in, you know,
Starting point is 00:09:37 a deep seeker or something else. And, you know, I guess like just using Claude for a proof of concept or something, try out and give it a go. Like, you know, as you said, why not? What are they really going to lose here? Everybody's trying this kind of thing. But of course, I want to see the problems. Like, I want to see the input data. I want to see the output.
Starting point is 00:09:52 I want to, you know, review its output because, you know, I did so much of this stuff for a living. So I'm professionally curious. Like, how good a job did the LLM do of what used to be my job, right? So that's why I want to see all the details. One of the things the attackers did here is that, yeah, they broke everything up into tiny steps to kind of avoid those guard rails, right? and I sort of think, well, you know, if they start publishing exactly how they did that,
Starting point is 00:10:17 you know, I can't imagine they've fixed this yet. I can't imagine they've got a uniform ability, like a reliable way to detect this sort of behaviour. So I just can't see them doing that. Do you know what I mean? I imagine there would have been terrible management pressure as well. Like just imagine, imagine being the security people who wrote this report. Reading the anthropic report, like you can definitely feel a little bit of that frustration,
Starting point is 00:10:37 you know, between the lines when they're talking about how they were dressed it and added more, you know, kind of layers of controls and so on. So, you know, it is a very hard problem to stop large language models doing, you know, what they have been asked to do, right? And all of the guard rails you have to add and all of the kind of things you can do. Like the fact that you end up having to role play, but we're defending, we're totally legitimate security researchers or whatever with the LLMs to make them do stuff, is on its own just already hilarious.
Starting point is 00:11:03 One of the other criticisms has been around, you know, whether they provided enough details to let defenders, you know, kind of. see these attacks in the wild, so things like IOCs they've been sharing privately. But, you know, of course, I always want to see all of the good stuff, you know, for everywhere, and you're right. Like, I imagine the pressure on them around this and the amount of review that's probably had to go through,
Starting point is 00:11:24 you know, does mean it gets cut down a little bit for the general, you know, the general release. Yeah, I mean, I think we've got to remember, like what you just talked about in terms of, like, how they had to build a big rig to make all of this happen. And the AI was one small component of that. It's just such a critical component. Like, you can kind of think of this
Starting point is 00:11:40 is like a way to write, you know, to have a self-writing bash script or something like that, which is just so powerful, right? Simple steps, but you need some smarts in there. That's why when you look at how times have thought about AI, where they can just have like a single automation block that you can put into an automation. I mean, it's just, there's so much you can do there. So I do really see this as a big deal. We're going to move on to talking about some other stuff,
Starting point is 00:12:06 but one thing that I found fascinating is, yeah, these attackers were dealing with these things as you mentioned earlier, like hallucinated and fabricated credentials, exaggerated findings, you know, things like that. When you talk to people who work in AI-based startups, which I do quite a lot, these are the same challenges that they face. Like it's, you know, 99% of the work is in dealing with the shortcomings and unpredictability and weird results that you get out of these models. So, yeah, I just, I think this thing is really interesting and I think it is just so naive to
Starting point is 00:12:36 think that this isn't kind of game-changing, right? You look at what people are doing on the white hat side with scaled out pen tests. And sure, they're not going to be the most elite level pen tests and whatever, but scale. Scale is important. Scale is a very big deal. Scale is game changing. And I think you ignore this or shoot it down at your peril because this is the future of operations. And it's really just going to mean that those top people, those most talented people, get scale.
Starting point is 00:13:07 So, yeah. Anyway, everyone should check out that. It's really, really good. Now, talking about something a little more old school is Amazon has discovered an APT group exploiting some Cisco and Citrix O'Days. Hooray. Yes, yeah, this is a campaign that's targeting the Cisco Identity Services Engine, which is kind of their like radius and authentication plumbing system.
Starting point is 00:13:28 Underneath, it's just a Java web app. And the reason, obviously we see bugs attack this kind of software stack pretty often. The reason I want to pull this one out is Amazon looked into the, like the post-compromise parts of this, the implant that gets dropped and so on. And it's actually really good. Like it's proper in-memory only, you know, Java, you know, post-exploitation payload with proper crypto, with good orth, uses, you know, thread injection reflection stuff to propagate and stick itself in the right place in the Java process.
Starting point is 00:13:59 It hooks all of the incoming requests going through the Tomcat servlet engine for its own purposes to be able to, you know, get data in and out. Like it's just, it's very well done. It's nice and competent and, you know, you know that I love, you know, hurting Java systems. So when somebody else does it, you know, I have respect for that. Do we know who the threat actor is here? I don't know that we do. Amazon didn't really say, I mean, honestly, it feels Chinese, but that's just vibes.
Starting point is 00:14:26 Like, I've got no actual data there, I don't think. But, I mean, it's competent, you know, which I like to see. And another week, another absolute clangor in a Fortnite product. This one's, ah, dear. There was a bug in FortaWeb, which is their web application firewall product, and the bug is path traversal to code exec. So like dot, dot slash, dot to making accounts. You can like call an API to create user accounts and get code exec and whatever else. But it's just like it's in the web application firewall and it's path traversal.
Starting point is 00:14:59 Like what are you doing? And this is, they got stuck in the Sissor Kev list. It is, I believe, the 21st Fortinette entry into the Sissor Kev list. So like that's pretty special. I guess congratulations. They do security. Fortinet does security. They do it, I guess.
Starting point is 00:15:17 That's a verb you could use. Sissa has told, you know, U.S. federal agencies to patch everything super urgently. This thing got spotted in the wild as a zero day. I think maybe as far basic as August. We saw some people, you know, spotting us hitting honeypots and stuff. So just the classic Fortnite story.
Starting point is 00:15:36 Except that there is also another Fortnite bug which is being exploited in the wild as of a couple of days ago which is another what was that one? It was a command injection. So yeah, that's just, you know, the fact that there are so many bugs
Starting point is 00:15:50 and so many fortnight products being exploited by so many people and this is meant to be a security thing. Like we've said all of this so many times before but it's just, you know, it's embarrassing for us as an industry to have a fortinet in it. Yeah, it is.
Starting point is 00:16:03 I mean, I had a great conversation with Andrew Morris actually. I'm going to be publishing that one later this week. that's a soapbox edition and um you know i think there's a bit of a misunderstanding out there among organizations about what their risk exposure is when it comes to border devices and you know andrews sitting there at grey noise just seeing all of this stuff because they've got this massive honeypot network it's interesting too the amazon stuff the um the bugs that the citrics and Cisco stuff that amazon found that was also honeypots right so there's there's there's really cool scaled
Starting point is 00:16:32 honeypot networks now uh you know grey noise probably being the the gold standard there and um yeah he's just like there's so much happening he's like I can't he's like he can barely believe it he's like if people could see the scale of it like it would get more attention and it's just this black hole like you talk to orgs and they're like oh no we're totally comfortable with what's on our border and you're like why but they just don't know right I mean that having a fortinet on the edge of network is the strongest correlation to getting on that you can have right what are you doing what are you doing yeah exactly so as you said sis uh he's giving federal agencies one week to patch that which is great uh just in time to call mandy and um
Starting point is 00:17:08 But, look, speaking of Sissar, I mean, this is a real slap your forehead kind of thing. Eric Geller over at Cybersecurity Dive has this write-up, which is Sysa, is now planning a hiring spree. Adam. Now, this comes after they're just fired so many people, and now they're trying to rebuild. I mean, I just despair. Like, what was the point of any of it? What was the point of any of it? I mean, I get the sense that Americans are.
Starting point is 00:17:38 getting quite frustrated with this administration. And when you consider that he was only sworn in in January, like that's amazing how quickly he's become so unpopular. And it's just chaos like this, where you just think, my God, you know, you go in there, take a razor to an organization like this, and then you turn around within months, and you're planning to rehire,
Starting point is 00:17:59 according to a memo from like senior executives there. Yeah. And at what cost? How much did it cost to get all those people out of there? how much money did they say? Probably not very much. And now they're going to have to bring, you know, a whole fresh transfer people in, train them up again, rebuild all the relationships again, all of the skepticism that partners and, you know, people that they had relationships with.
Starting point is 00:18:20 Like, it's just so stupid. And at a point in time where, you know, like, who could have thought that cyber would be a thing they'd still have to care about in a few months, you know, that China wasn't just going to stop? And so it's just so frustrating and so predictable. And like I just, you know, I feel for people like Chris Krebs, right, who obviously were super involved in sister's existence. It must just be like, how does that guy get up in the morning and not want to just crack open a bottle of scotch at dawn, you know, because it's just so frustrating.
Starting point is 00:18:51 Yeah, it really is. So this comes from a memo from the acting sister director, which is Madhu Gotu Mukala. This was a November 5 memo to staff that Eric got his hands on and, you know, the agency remains hampered by an approximately 40% vacancy rate across key mission areas. So you just think, well, what was the point? Yeah, really. Yeah, exactly. It's so dumb. So dumb. Now, Google is suing 25 people. It alleges is behind this scam text operation that uses a fishing as a service platform called Lighthouse. This is a story from Matt Burgess over at Wired. And you think, okay, why, this isn't a very interesting story. And then you read it and you get a sense of the scale of the operation they're talking about. You're like, okay, wow.
Starting point is 00:19:36 Yeah, yeah, this is one of the kind of tooling and frameworks that's behind a lot of the scams you get for like, you know, you've got a package or you've got an unpaid road toll or whatever else. And there are some ties from this particular set of tooling back into some of the really large scale financial fraud operations that are trying to see, the ones that are using, you know, like phones preloaded with credit cards and stuff that they've gone through Google Pay or Apple Pay to get enrolled using fishing techniques. So there's a bunch of tie-in between this group and that kind of mechanism of cashing out. But yeah, the scale, like something like a billion dollars is what Google says, has been involved in this particular, the lighthouse group or the lighthouse software service. That's being used here. It's not just a monetary scale. It's like you look at the tens or hundreds of thousands of like scam websites that are linked to this.
Starting point is 00:20:27 And you're just like, my God, that's a lot of work. Yeah. Yeah. I mean, the scale is really quite something. and, you know, the quality of the integration with, you know, Google RCS and Apple I message and all of the, like, components you need to do this at scale. And, yeah, I think one of the numbers was something like between 100,000 and 200,000 scam, sites, not victims, just sites, right?
Starting point is 00:20:48 And, you know, tens, sure, as many as 100 million victims, right? And the scale is just, it's ludicrous. And this particular lawsuit, like Google hasn't, so most of these people are unnamed. but I think this is part of a kind of wider thing to start applying pressure, start shutting it down, start using some of the other tools in law enforcement options around the world
Starting point is 00:21:08 to be able to deal with it. But yeah, the scale is wild. And I guess good work, Google for pulling this together and dealing with going through the courts here. Yeah, I think you go through the courts when law enforcement hasn't done anything, right? Like that's a big part of it.
Starting point is 00:21:24 But, you know, staying with the theme of scale, it seems like scale is really our theme this week because there's been some researchers who, look, I mean, it's nothing much to get excited about, I guess, but it's still kind of interesting. Andy Greenberg has this one over at Wired, where some researchers were able to enumerate like some basic WhatsApp account details just by looking them up in like the WhatsApp contact directory service or whatever. You wonder if they hammered them from one endpoint? Probably not, because they enumerated 3.5 billion phone numbers and worked out that they were, you know,
Starting point is 00:21:58 WhatsApp phone numbers and a substantial portion of them, you know, you pulled down the like display name and the picture, which is interesting. I mean, we've talked about scraping over the years and how these data sets, even though they're very limited, can actually come in quite handy, especially when you start cross-referencing them against other data sets. But geez, 3.5 billion accounts. It gives you an idea of just how big WhatsApp is. And you do really wonder what they can do when it is essentially like this gigantic multi-billion endpoint, you know, network. What can they do to stop that sort of enumeration? I think it's kind of hard. Well, I mean, I think that this report proves exactly that because this is not the first time we've seen bulk enumeration of WhatsApp accounts. And
Starting point is 00:22:43 the last time around this happened, WhatsApp and Meta, you know, said they were going to introduce some rate limiting and some controls and so on. And it does not seem that that has been super effective. This particular set of researchers, I think they're Austrian. They started out enumerating some in the US, and they said they went through, you know, like 30 million American phone numbers in the space of half an hour, realized that they weren't getting throttled or stopped the blocked effectively, and didn't just kind of kept going to see how far they would get.
Starting point is 00:23:13 And, you know, when WhatsApp, you know, in some countries WhatsApp is super big, like in Brazil, for example, and they were able to get, you know, like a couple of hundred million people's worth of stuff out of WhatsApp in Brazil, which, you know, as you know, when you can correlate with other data sets or in particular cases where, you know, the market penetration is very high. In India, they also found, like, a heap ton, you know, because WhatsApp works really well in, you know, poor network connectivity situations, like it's one of the more robust messengers. So, like, there are things you can do with this.
Starting point is 00:23:44 And, you know, much like Google dorking for hacking targets or whatever else, like being able to, you know, do something comprehensively. at scale like this does give you things even when it is just public data. And I think, you know, the response here is what, you know, matter has come back and say they're going to work on, you know, rate limit and controls and so on. But ultimately, you know, using something like a phone number as a user identifier has this inherent problem and moving to, you know, usernames or something. Because remember we had the conversation about when Signal brought in their username feature and started like discouraging the use of phone numbers as identifiers on the signal network.
Starting point is 00:24:21 Like that's kind of the direction it has to go, which has its own frustrations, of course. Yeah. Inherently, it is very difficult to make phone numbers suitable for this purpose, and this is why. Yeah, yeah. No, I mean, it's like, it's funny what you said about the poor connectivity thing, because like cell service in Brazil is actually pretty good. My wife's Brazilian, I spent a lot of time in Brazil. The, you know, the connectivity thing isn't so much of the issue,
Starting point is 00:24:46 but the plumbing of the networks meant that SMSes were just really unreliable, which is why WhatsApp just took. off over there and you know it's a story I've told on the show before but we had a little car prang over in Brazil once and the entire insurance claims process was driven over WhatsApp like it is so ingrained uh into absolutely everything I've got a fun story that I'm going to tell actually which is when my wife moved to Australia she kept her old Brazilian phone number and kept her WhatsApp number you know so that was she just kept it on her Brazilian number and that was all well and good until eventually that number expired and then some kid wound up having the phone number and
Starting point is 00:25:21 her WhatsApp was just locked out. So we wind up messaging with this kid and he was like 12, 13 years old or something in Brazil saying, you've actually got our wife's, you know, we explained the situation to him and we actually had to get him to briefly relinquish the account so that she could recover all of her like WhatsApp history and everything, which was just this really funny situation where we're sitting here in Australia trying to convince this like 12 year old Brazilian to please like abandon the account. And he was a good kid. He did it. He actually. He actually. did it and we were able to recover the account and like churn it over to an Australian number and whatever which meant that I think she lost access to like groups or something like there was a reason
Starting point is 00:25:54 we hadn't do it hadn't done it but yeah it's just funny when you've got just so much yeah pin to a phone number and that phone number can be returned to a pool and um away it goes uh now let's move on and talk about a mystery being solved right because we saw that Starlink was finally cutting off some of these um some of these dishes that were being used by scam compounds in Myanmar and we thought wow Maybe that was in response to the announcement of a congressional inquiry or a committee was going to look at why this was happening. But I think we have a better answer now in that Elon Musk and his friends are not particularly well known for being, you know, scared of Congress. The DOJ has actually spun up a new task force that is looking to tackle a lot of these scam compounds. And one of the things they've been doing, which is very smart, is issuing like seizure demands for the dishes that are being used by these scam compounds.
Starting point is 00:26:47 which means that Starlink has to turn them off because they are sort of seized, they are property of the US government, turn them off, and I think that is a great way to skin the cat. Yeah, yeah. So, I mean, they can't physically seize, or at least there's no, you know,
Starting point is 00:27:01 obviously there's no easy way for them to do that, but the process of marking them as seized, making authorising them for seizure, now SpaceX has to help out and actually, you know, block these devices. And yeah, like, when I saw this come through when we were preparing the run sheet of the show, I'm like, was this an option?
Starting point is 00:27:17 all along do they only just think of this? Is this a novel approach? Because yeah, like it's working, which is great. And any tool that kind of helps hamper these scam commands is great. It's just funny that it kind of took this long. I would have thought that if this was an avenue, they would have thought of doing this before.
Starting point is 00:27:32 But I guess maybe there's some reason that didn't happen yet. Well, they would have been busy, right? Like that's always the answer. It's always the boring answer of like they were busy doing other stuff. And you would have to think like, okay, so you're a scam compound operator in Myanmar. You've lost your starling. you can't get fiber.
Starting point is 00:27:48 I mean, the next option, you're probably going to be going for like microwave links at that point, right? Like just beaming them across until you can get to some point where you can plug into some fiber. You would think. But then that comes with its own risks as well.
Starting point is 00:28:01 Hopefully we can turn this into a losing battle for them. Yeah, like having to build out your own infrastructure and maintain it and so on, like that's complicated. It also exposes you because of the, you know, at some point you have to interact with the rest of the network. And we saw that with, you know, like how mobile networks being built by, you know, the South American crime organizations, for example.
Starting point is 00:28:20 Like, you have to trust the people that are building it. You have to trust the equipment. You have to trust the interconnect points. Anything that increases the amount of interaction with third parties makes it more risky. And, yeah, Starlink, I guess, was just super easy. And now we've got a tool for taking that away from them. So, yeah, yay.
Starting point is 00:28:36 Yeah, now, staying with U.S. law enforcement actions and a bunch of U.S. citizens have pleaded guilty for, to helping these North Korean IT workers scams. What was interesting here, though, is a bunch of the people who are charged, I think there's one, two, three, four of them, they pleaded guilty to wire fraud conspiracy because they provided their identities to North Korean workers. So this was how they were making money. The North Koreans approached them and say, hey, we, you know, we want to use your identity so that we can have this job and you funnel the
Starting point is 00:29:04 payroll or whatever. But they even went so far as to, like, one of them went so far as to, like, take an employee like drug test for the remote North Korean worker. I think one of them was paid about 50 grand for his role. The other one's like three and a half. four and a half grand, like not much. And now they're in really serious trouble. But it's just interesting, isn't it, that the number of little moving pieces that are required to get a scam like this
Starting point is 00:29:28 actually running. And the amount of money they made in salaries was like $1.28 million, so not really that much. I mean, I don't understand quite why they bother with this as a money spinner. I mean, maybe as a way to get access, but it doesn't look like it's actually that profitable compared to the crypto theft, for example.
Starting point is 00:29:45 Yeah, it does. it does seem a little bit. I was surprised at the size of the amount of money involved here. Like I was expecting it to be bigger than it was. And certainly, you know, for the people who are helping them out here, I was expecting that were making a little bit more money than this. One of these, one of the guys that played guilty was an active US army soldier. Yeah. And like this is his side hustle is making, what, 50 grand out of North Korean scammers. Like, you would think that surely at some point in the indoctrination process, you know, you get some training about, you know, operational security and blah, blah, blah, blah.
Starting point is 00:30:15 maybe not, you know, peeing in a jar for a North Korean, you know, employee scam or, you know, having laptops and laptop farms in your house for this stuff. Like, surely this is a thing that would have crossed your mind, but, you know, and also for 50 grand, for 50 grand. Well, it's a lot of money for a grunt, right? I suppose, but, you know, still, oh dear, some people, you know, like, surely he must have thought through this process just a little bit. Anyway, I guess he's finding out now. We've got to the find out face. but yeah it's it's just funny you know seeing because you're selling yourself at the scapegoat as a service that's what you're providing here and I don't know it's just what were you thinking what were you thinking what I agree I agree but there's a lot of what were you thinking in this show you know there's an awful lot of it like Peter Williams comes to mind but oh and we should also point out to do
Starting point is 00:31:09 not be alarmed by the sirens in the background Adam is not at home he is he has traveled to Auckland because he's going to the Metallica show tonight, right? Yes, metal. Yeah, so that's going to be fun. So don't worry, they're not coming to arrest him. He's just in a noisy Airbnb. Not today. What's interesting about this one, too, is the DOJ's like, yeah, and we also seized like
Starting point is 00:31:29 $15 million and you're like, wow, that's great. And then you read that the $15 million is the proceeds of a bunch of different incidents. One theft of $37 million, one of $100 million, one of $138 million and one of $107 million. But hey, you seized $15 million. million. That'll show them. That's some hella ROI right there. Yeah. We got more absolutely staggering numbers from the Land Rover,
Starting point is 00:31:55 ransomware attack. Alexander Martin has this one up for the record. Headline is cyber attack leaves Jaguar Land Rover short of 680 million pounds. So it's about 900 million US dollars. And I think what was it? It lost $640 million. I think it was over the court. over the period, it just says.
Starting point is 00:32:15 But that was driven by the production halt, which is down from a $400 million profit in the same period last year, which is just, you know, wow. This is just, you know, I think if this doesn't motivate senior policy makers to take this as seriously as cancer, nothing will. Yeah. And they also face, what, $250 million worth of costs involved? Direct costs, yeah.
Starting point is 00:32:42 direct costs from recovering from the incident like it's a real big set of numbers there and you know given that they were probably British kids doing it like man oh man are they going to be in trouble and you're right like the the policy makers in the UK are just you know what else like how much worse would it have to be I guess for them to to take it real seriously and yeah these law enforcement wheels are going to turn you know they may take a while but they're going to turn they'll get there in the end and boy oh boy they're in for a rough time yeah and meanwhile the FBI's been out talking about the the Akira ransomware gang and they say that they've made about $250 million in ransoms
Starting point is 00:33:17 since like 2023 or whatnot, which is, you know, quite a lot of money. These guys are kind of like particularly scummy because they attack sort of small to medium enterprises, which you kind of think, and you know, K-12 districts and stuff like that, you kind of think if you're a ransomware actor, that is kind of smart. I'd avoid the school districts, though, just for political reasons. But if you want to stay under the radar, the SME is where it's at. Yeah, like you don't want to be doing a half billion dollar Jaguarial Land Rover. Why do you want to be doing a whole bunch of smaller things that you have a plausible chance of getting away with
Starting point is 00:33:51 and not being too big to get the kind of law enforcement intention. We've seen some of the other crews that got too big, you know. Yeah, I kind of feel, though, that when the FBI is going out and talking about how much money you made, like you are on a whiteboard somewhere. Maybe you reach that threshold, yeah. Yeah, that's not a happy place to be. Speaking of, another one from Alexander Martin, Operation Endgame, this is this rolling, sort of Europol coordinated operation taking down various components of the ransomware ecosystem.
Starting point is 00:34:18 They've done a bunch of takedowns as well against what was it like an infostealer and like what, like a Trojan network. What did they take down here? Yeah, so the ratamathus infestiler, the venom rat, Modaxus Trojan and the Elysium Botnet. So those are things that are used by crime organizations to build their just-in-time crime pipeline. and you're a poll just been grinding through, you know, taking care of business and it's good work. It needs to do it. We love to see it. Now, this last piece, which is by Joel Kalizzi over at Wired. He's on the business desk and he's done a terrific job with this yarn. This is like a reading list item of the week.
Starting point is 00:35:00 And it's about a guy getting belked out of 200 grand in Bitcoin, which isn't all that much money. But it's the sophistication of how they did it. I mean, this is really, really like a confidence scam. You know, this is an old school confidence scam like you see in the movies. It involves real-life meetups, you know, people in nice clothes, wearing Rolexes, you know, you can trust us, bro, kind of thing. But it was just, I got sucked into this piece big time. It was a great read.
Starting point is 00:35:31 Why don't you walk us through the shape of it? Yeah, yeah, it is. It's a great read. So this particular, you know, a couple of guys scam to do it. He worked for a company that made like Bitcoin mining hardware. And they originally showed up offering to buy some, you know, hardware from him. And then they invited him to a meeting in like Amsterdam or something and take him out for fancy dinner and so on and so forth. Eventually they, the crux of this game in the end was they wanted to do like a test transaction with some Bitcoin.
Starting point is 00:36:03 And so they did a little transaction and all worked and everything was fine. And then a couple of weeks later they followed up with, oh, alongside our group. crypto mining hardware you're going to sell us. Can we also have a couple hundred thousand dollars worth of Bitcoin? Or $400,000 with Bitcoin, whoever it was. And then pressure the guy into, you know, kind of like testing it's going to work. And they make him install a wallet on his phone while they're there, you know, in the club after a few drinks and a caviar dinner and so on.
Starting point is 00:36:29 And they must have had some kind of camera or third party watching this guy whilst he's installing this wallet app. And they read the, you know, the seed phrase off the screen of his phone. and then at some point $200,000 with Bitcoin ends up in that wallet and they nick it and stop talking to him. So pretty straightforward kind of scam, but just like doing it, as you said, like movie style, you know, with caviar and smart suits and Rolexes. And, you know, to the guy's credit, I think the company that he worked for ended up surviving, you know, losing $200,000 in the process. But just, you know, the perils of a financial ecosystem where you can, you know, give away $200,000. thousand dollars sitting in a hotel bar uh you know like that's just it's not way to run the financial
Starting point is 00:37:14 system with no recourse right exactly because you know code is law or whatever you know whatever the crypto bros want to tell you so anyway the point is this is a good lunchtime read and i think you know any friends and family you have that are you know kind of not really in the cyber world but but still appreciate a good kind of like heist story it's worth a read yeah it definitely is um all right mate's that is actually it for the week's news thanks for joining me for all of that and we'll do it all again next week. Yeah, thanks, man. Pat. We certainly will. And have fun at Metallica tonight. Hey, Metall.
Starting point is 00:37:46 Hello, I'm Tommy Wren, the Policy and Intelligence Editor at Risky Business Media. You can join the Gruck and I every Tuesday for the Between Two Nerds podcast, which is all about cyber intelligence and cyber war. Deny, degrade, discombobulate. You can find the Between Two Nerds podcast
Starting point is 00:38:04 and more in the Risky Bulletin podcast feed. Subscribe today by searching for risky bulletin in your podcatcher. That was Adam Vuelo there with a look at the week's security news. It is time for this week's sponsor interview now, and this week's show is brought to you by MasterCard, which, you know, thanks to its acquisition of recorded future and, you know, due to some historical reasons,
Starting point is 00:38:28 you know, MasterCard, I guess you can think of them kind of like they have a side gig doing cyber threat intelligence and cybersecurity services. And today we are speaking with Urugge Bernie, who is the global head of risk and resilience at MasterCard, and really talking about a big trend in financial service these days where the fraud teams and the cyber teams are starting to work closer and closer together, which makes a lot of sense to me. But I started off by asking Arooge why these roles were originally kind of separate
Starting point is 00:38:56 when, you know, I think intuitively it kind of makes sense for there to be a lot of overlap between them. And here's what he had to say. When we look at it why it was structured this way, I think there were maybe three things that we have to think about there. The first one is around priorities and success. metrics were different for the organization. Cyber security teams were focusing more on the technology aspects of protecting the enterprise, or more focused on systems, data, unauthorized attacks that would compromise their infrastructure and the enterprise. Whereas if you think about what the
Starting point is 00:39:29 fraud team's focus was on, they were focused on protecting customers. They were focused on transactions and making sure that fraudulent activity as it relates to transactions and payments was was being managed and customers could transact more freely and with less friction. So I think it was driven by, you know, maybe that was one of the factors there. And we're now seeing that ultimately when you think about it from a risk perspective rather than a cybersecurity perspective or a fraud perspective, but risk to the business viewpoint, this is where we're starting to see that combination coming through. And the organizations are saying, we have to look at risk more holistically across the organization.
Starting point is 00:40:15 That payment risk that was previously a little bit more manual, a bit more analog, is a lot more digital now. The impacts to payment systems are more cyber-enabled, more cyber-driven. And so there is a need to start bringing these two teams together, much more cohesively being able to have, you know, a relatively similar taxonomy and being structured organization in a way that they're able to communicate better, report more holistically, I think, in terms of risk and overall impact to the organization. Well, let's get into that for a moment, because what you just described was two organizations that have very, very different priorities, right?
Starting point is 00:41:05 Like, as you say, one is worried about, you know, a bit of malware hitting a corporate desktop and then you know some threat actor tearing through the network and you know accessing swift terminals or whatever it is uh whereas the other one is much more concerned with with protecting customers now you say that there's a need to sort of unify these teams bring them together like you alluded to some of that just then by talking about like cyber enabled attacks against like payment infrastructure and stuff so that's one area where i can i can sort of understand what you mean but like what what is fundamentally driving the need to unify those two teams because they still, to me, sound like fairly different functions within a, you know, say a bank.
Starting point is 00:41:42 Yeah, there are different functions, but I think there are what we have to understand that there are dependencies or there are, not dependencies necessarily, that may not be the best word, but there are connections between the two. So when you think about how do fraud teams identify why fraud is happening, it's because something has happened previously that is driving that to occur. That could be, you know, an increase in. in digital skimming infections on different types of sites where merchants are, you know, losing information or exfiltrating information related to payments. And that fraud team is seeing that.
Starting point is 00:42:22 The intelligence that comes in on the cyber site of the house understands that there has been something bad that has happened, but that is not communicated to the fraud team. So the fraud team action is reactive after the fraud starts to happen. and that is then understood to be the result of a cyber event that's taken place with the ecosystem. This is interesting because what you're basically saying is it's the people on the sort of cybersecurity side, right, who are responsible for defending the organizations, who are the consumers of the threat intelligence, which is useful to the fraud teams. So the question is, why don't you then just sell the threat intelligence to the fraud teams?
Starting point is 00:43:00 You know, why do you need to unify these teams when really what we're talking about is it's that awareness piece among the cyber teams that that is useful to the fraud teams? Absolutely, and you can sell that threat intelligence to the fraud teams. Unfortunately, they don't have that skill set that comes when being in the C-SO organization. So one of the things that we're actually looking to do with our solution, LASCAR threat intelligence, is democratize that information, make it so that it's applicable for the audience that it's looking to serve, which is the folks in payment fraud. And as a result of that, they are then able to communicate and understand their landscape, what threats they're facing, what's coming at them, what they need to be worried about.
Starting point is 00:43:44 And they go from being reactive to being a little bit more predictive. I'm not going to use the term proactive necessarily because proactive means you see things before they happen. They're still reacting, but they're being reactive in a more predictive manner, if you will. So that certainly is there. More of a process than everybody panic and run around and not constantly. know what you're doing. Exactly. And so it's more, and then once you have that, you're able to
Starting point is 00:44:11 share that information back and forth between these organizations. So you can structure them organizationally to be one team or you can have them sitting in perhaps what's, you know, conceptually thought of as a fusion center where teams work together. It doesn't have to be a physical location. It's just a means and mechanism through which they can exchange information. But ultimately, that's really what it's about. It's about the democratization. of what has typically been very technical information to be able to be used by teams that are typically not technical or don't have that same level of technical background.
Starting point is 00:44:48 And then the ability of those teams to then understand how things are shaping up and communicate how to implement more controls or better controls on the enterprise side of the house that the CISO and other teams can do. Now, you just mentioned like two approaches, right, since we've been having this conversation. One approach is that you unify those teams, you turn them into one thing, right, with a clear reporting line to the same person.
Starting point is 00:45:12 The other thing you mentioned is this sort of fusion center approach, which is actually like more what I've seen here, just over the years with the people I talk to, which is, you know, the fraud people and the security people sit next to each other and collaborate and are sort of told to get along, which seems to be the winning approach at the moment as this all changes. Is it the case that these reported, you know, that things are being unified into a single department, or is it more the case that we've got just better cooperation? I think we're seeing a little bit of both. It's very difficult to make this kind of a change overnight. So we're seeing organizations that feel it's better to have more control. And under one organizational structure, we see organizations that are global in nature where you can't have a single organizational structure. where you need to have more collaboration between teams that are sitting in different parts of the world. So we're seeing both of these structures come into play.
Starting point is 00:46:09 I think the biggest piece that perhaps is missing today is the governance model and the operating structure that these teams now need to follow. It's one thing to say that we're going to exchange information. But again, you have to have that taxonomy, the common language to be able to say this is what we're going to exchange and how does that actually make sense for both sides? the organization, not just one. Now, you mentioned MasterCard Threat Intelligence. Obviously, the reason that MasterCard is sponsoring the Risky Business podcast is because you have launched MasterCard Threat Intelligence, which I believe, I mean, I'm guessing, this is just an assumption.
Starting point is 00:46:43 Nobody's told me, but I believe that would be begat from the Recorded Future Acquisition. Is that about right? That is correct. So the Recorded Future acquisition was obviously very strategic, but it was done because we were seeing changes in how fraud was being perpetrated. So it was going from what I said earlier, analog to being more digital. And as a result of that, we were seeing similar things that were happening from an enterprise or corporate security perspective happening in that fraud space.
Starting point is 00:47:16 And the more cyber-enabled these attacks have become, we obviously needed to understand how to move, you know, perhaps a little bit left of boom. to get more visibility, to be able to apply the same types of principles that threat intelligence offered at a corporate or enterprise security level into that payment space. And so that's how we came about. We believe that having that level of visibility across not just MasterCard threat intelligence, but also being able to embed that in some of our other solutions gives us the ability to be a lot more predictive and proactive in some instances around helping our customers stop fraud from occurring, but also then taking appropriate action based on the information that they see
Starting point is 00:48:10 that they're able to understand about their organization, the threat landscape that they face, and just be able to better structure how they respond to things. So we know that, you know, Reported Future obviously has a very large customer base. They talk to the CSOs. They work with the CSOs and they provide information to the CSOs. Again, as I was pointing out earlier, even though threat intelligence is consumed by an organization, doesn't mean that it is actually shared across the organization. So we do want to broaden that base because ultimately security of the organization,
Starting point is 00:48:44 security of the ecosystem, the payments, the customers, it is ultimately the responsibility of, you know, the organization that has that customer base. So we want to make sure that the ecosystem is better secured. We want to make sure that there's more trust or a higher level of trust within that ecosystem so that when folks are looking to make transactions, they are not faced with potential loss of their credit card data or their personal information through a digital skimming infection on some popular merchant site, that they're not faced with, you know, potential loss of skimming infection on some popular merchant site, that they're, you know, cars that have been stolen, if they're being tested, that we can actually identify those and decline the transactions.
Starting point is 00:49:29 This actually helps stop fraud before it occurs. And so the intent is to, again, as I said, be a bit more predictive. The intent is to be ahead or at least abreast of where the attacks and threats are coming from so that we can change the way things happen today, right? The numbers around the losses are huge trillions, billions of dollars. And if we can make even a little bit of a dent, then I think, you know, we've done good by the consumers. Yeah, I mean, one of the reasons I was happy to do this sort of sponsor arrangement is because when I saw the news that MasterCard was buying recorded future. I think, you know, a lot of us had the reaction of what? That seems, that seems strange. But it's been great having it explained to me by various MasterCard people.
Starting point is 00:50:23 So, Arooge Bernie, thank you so much for joining me to talk through all of that. Very interesting stuff. My pleasure. Thank you so much for having me. That was Arooge Bernie there from MasterCard. A big thanks to him for that. And that is it for this week's show. I do hope you enjoyed it. I'll be back in a couple of days with a soapbox edition with Mr Andrew Morris from Grey Noise. But until then, I've been Patrick Gray. Thanks for listening. Hello, I'm Claire aired,
Starting point is 00:50:51 and three times a week, I deliver the biggest and best cyber security news from around the world in one snappy bulletin. The Risky Bulletin podcast runs every Monday, Wednesday, and Friday in the Risky Bulletin podcast feed. You can subscribe by searching for Risky Bulletin in your podcatcher and stay one step ahead. Catch you there.
Starting point is 00:51:12 Thank you.
