Risky Business #726 -- Okta owned while Cisco takes a massive L

Episode Date: October 24, 2023

On this week’s show Patrick Gray talks through the news with Dmitri Alperovitch, NSA Cybersecurity director Rob Joyce and NSA CCC director Morgan Adamski. They discuss:

- The Okta breach
- 40-50k feral Ciscos
- Why the HTTP/2 protocol flaw is a real headache
- The Ragnar Locker takedown
- What the NSA CCC has been thinking about

This week’s show is brought to you by Socket. Socket’s founder Feross Aboukhadijeh joins us this week to talk about their actually-not-crazy use of large language models in their product.

Show notes

- Hackers Stole Access Tokens from Okta’s Support Unit – Krebs on Security
- Almost 42K Cisco IOS XE devices exploited, no patch available | Cybersecurity Dive
- Critical Atlassian Confluence CVE under exploit by prolific state-linked actor | Cybersecurity Dive
- JetBrains vulnerability being exploited by North Korean gov’t hackers, Microsoft says
- Citrix Netscaler patch for critical CVE bypassed by malicious hackers | Cybersecurity Dive
- HTTP/2 Rapid Reset: A New Protocol Vulnerability Will Haunt the Web for Years | WIRED
- How North Korean Workers Tricked U.S. Companies into Hiring Them and Secretly Funneled Their Earnings into Weapons Programs
- Ragnar Locker takedown
- Europol: ‘Key target’ in Ragnar Locker ransomware operation arrested in Paris
- Hacker accused of breaching Finnish psychotherapy center facing 30,000 counts
- The US Congress Was Targeted With Predator Spyware
- Lloyd’s of London finds hypothetical cyberattack could cost world economy $3.5 trillion

Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray and it's a little bit of a different setup today. I'm not at home. I am in another place. I'm going to give you a clue. It smells like freedom and tastes like high fructose corn syrup. That's right, I am in. And this edition of the show is being recorded live at the NSA's Cyber Collaboration Centre, or CCC, and we've got three special guest hosts joining us this week. We've got Morgan Adamski. Now, did I get the pronunciation correct there? You did, absolutely. Fantastic. She runs the NSA CCC,
and we've got a good friend and international man of mystery, Dmitri Alperovitch. Hello. And former head of NSA TAO, former Orange Man cyber guy, and current NSA cybersecurity director, Rob Joyce. Hey, thanks for coming in, Patrick. This is going to be great. This week's show is brought to you by Socket, and Socket's founder, Feross Aboukhadijeh, will
Starting point is 00:01:02 be along in this week's sponsor interview to talk about their actually not crazy use of large language models in their product. But yeah, let's get into the news now. And I want to start off this week by talking about this Okta thing. So it looks like Okta support managed to get owned somehow. And then someone managed to steal some authentication tokens belonging to Okta customers. And we're still sort of seeing various people emerge
Starting point is 00:01:27 and claim that they had their tokens stolen. So it looks like what happened is some of Okta's customers recorded these HA files, they're like session files or something, and submitted them to support. Someone pinched them, pulled out the tokens, and then onwards from there. But oddly enough, I mean, I think this is almost like a weird sign of progress
Starting point is 00:01:45 because the attackers weren't able to do much, at least in the case of like Cloudflare, for example, because, you know, someone was able to authenticate with these tokens and then they couldn't really do much because risky account actions required MFA step up, right? Let's hear from you on this, Rob. I mean, this is an interesting thing, right? Because I think SSO providers like Okta for the ecosystem, net massive win. So I can't
Starting point is 00:02:11 decide if this is a good news story, because it was kind of limited what they could do, if it's a bad news story, because we've got, you know, an example of a critical provider having an issue. Like, what's your, what's your, what's a broad outline of your take on this? Yeah, I think it's a little of both, Patrick. So the idea that we're using really well managed identity to authenticate into these networks, that's, that's the the entry point to get into business these days, because if you have weak authentication, you're going to get owned. And, and the opposite side of that is is we continue to see compromises in, you know, a couple of active events now, um, you know, Cloudflare even threw some, some shade at them when, when they titled the blog.
Starting point is 00:02:55 Yeah. And they had a, they had a recommendations for Okta section, which is just like, Ooh. Yeah. But this goes way back. You look all the way at the RSA token compromise in 2011. If you're managing identity for hard targets, you are going to get subject to very elaborate high-end exploitation. So the companies that are providing it, they've got to have their A game each and every day. Yeah, yeah. But I mean, they didn't appear to get very far, though. I mean, obviously, it's early days. We don't really know. But I mean, they didn't appear to get very far though. I mean, obviously it's early days. We don't really know. But I think that's the thing, isn't it?
Starting point is 00:03:28 Like that's why the scattered spider stuff was such a big deal is because they actually removed the MFA requirement for super admin accounts. And I almost feel a bit for Okta here because first of all, you mentioned the previous intrusion. It wasn't much of an intrusion.
Starting point is 00:03:42 Like when the attackers only get a screenshot, I find it hard to say that they were breached in that instance. And then more recently, they put out that blog post advice warning people about what later turned out to be the scattered spider approach of tricking people into resetting MFA. At the same time, and we know this, at the same time, the same thing was happening to Microsoft customers. And Microsoft didn't put out a blog post.
Starting point is 00:04:04 So I sort of feel bad that Okta's trying to do the right thing. And now everyone's just just screaming at him. Morgan, can you add anything here? Yeah, I'm gonna be a ray of sunshine, Patrick. So I think this is a good news story. And the fact that this is really the stress is the importance of isolating trust and minimizing those privileges amongst users, right? So when you're in the cybersecurity community, all you deal with is breaches and sad news and stormy days. And so that is the good part of the story that I think we need to emphasize. Yeah, I mean, that's exactly what I'm getting at, where the good news is that this type of event isn't, you know, the death knell. Like, you can have this happen, and the fallout's actually not too bad, which, you know, is amazing. I also appreciate the part that we're constantly talking about breaches
Starting point is 00:04:46 and what actually happens and people are coming forward and saying this is how I solved it or this is what happened. Because if we hide that, that's how we don't benefit. We really don't benefit from those type of conversations. I mean, when you look at it from an adversary perspective, here you have access to the support portal from an IT user's perspective at Okta. You have access to all these session keys that are being submitted
Starting point is 00:05:04 and you get caught so quickly, right? Yeah. They must be so frustrated by this, right? You had a, you know, equivalent of a nuclear weapon here and it was neutralized. We've got to be careful though because like someone might have been done here
Starting point is 00:05:17 that we don't know about, you know? But like so far, all we've got, like one password came out, I think just like yesterday, saying that they had seen some activity from this as well. So we had one password, beyond trust, and Cloudflare. You know, I am a little bit concerned that maybe there were some companies that don't have quite the same capability who were affected here.
Starting point is 00:05:35 Presumably Okta was able to go through all the support cases and look at all the files that have been submitted. But they had two weeks. So let's see. But, yeah, I think we all agree that almost like this is a breach that's a sign of progress. It is. Yeah. Yeah. All right. Well, let's move on. No, you had something there, Dimitri. Go on. Don't hold that thought. The fact that it was able to be caught in those two weeks is pretty remarkable. It didn't go on for many, many months. So from a sign of progress, we're
Starting point is 00:06:01 going to go completely the other way right now. And we're going to talk about this Cisco iOS XE thing. Because I didn't do a weekly show last week because I was getting ready for this trip. And it's the risky biz holiday curse in full swing, which is somehow someone cobbles together some ODA for one of these Cisco operating systems. And they weaponize it. They go nuts with it. And and i mean the numbers we're seeing are crazy right so we're seeing the headline that i got in front of me says 42 000 devices have been owned by people unknown oh and i will just mention too quickly with the octa thing one thing that i've appreciated about this is that we're not actually talking about who the threat actor is which is a
Starting point is 00:06:40 really interesting it's an interesting thing because normally something like this would happen even a couple of years ago and so much of the conversation would be about the who. So I think that's an interesting progress marker as well. Although I am in a room full of people who care deeply about attribution. But anyway, that's for you to worry about, not for everybody else, right?
Starting point is 00:06:58 So another situation here where ThreatActorUnknown has gone out and just rinsed all of these Cisco IOS XE boxes. I mean, do they even have a patch out yet? They do. They do. Yeah. So, I mean, this is nuts. This is the sort of thing that you expected to happen in 2001, not 2023. Rob, let's start with you. I mean, what do you even say about something like this? This is a mess. Yeah. I almost go back to the Hafnium event, right, where it was a flaw in something that's widely pervasive across the internet. And somebody decided to seize a lot of internet high ground for future operations. And the question is, you know, why did they want it? And why did they work so hard to hold on to it so if you followed the weekend news right it
Starting point is 00:07:46 it came out and then it looked like somebody tore down most of the excesses it went from tens of thousands to a few hundred then it looks like maybe that was the threat actor doing it themselves or just better concealing their access into those boxes yeah So they actually didn't go away. It was, it was the, the actors updating their implant so that it wouldn't respond to these queries and be easily detected. Yeah. So I think it was Fox IT came out with a method to, to query these across the internet and they found something like 46,000 still alive. Yeah. So somebody's trying to hold those accesses. To what end? I got to ask, like, what do you do with 42,000 iOS access?
Starting point is 00:08:31 I mean, it's really good for DDoS, but you've got to attract so much heat. I just sort of, I can't figure out, like, who this serves, really. I'm not asking you to speculate on attribution, but I'm like, why do this? It's too noisy. High-end devices connected to the internet are useful for a lot of things.
Starting point is 00:08:49 We see SOHO routers compromised all the time as pivot points into other infrastructure. And you can do that, and no one's going to care. But this is different. That's what I'm getting at. It's almost like someone is the dog who caught the car here. Well, again, I go back to the Hafnium event where we found the example of, you know,
Starting point is 00:09:07 China hacking those boxes. And instead of slinking away like a good APT, they doubled down and they ran a script across the entire internet and grabbed everything they could. Yeah. We saw them with the Barracuda stuff as well. You know, that just wasn't cricket, as we would say in Australia.
Starting point is 00:09:23 I think somebody did this because they can, right? And they had ideas of how they would use it. And so I probably feel like this is criminal in that they wanted grand scale and had an idea and they didn't care a lot about the consequences of being caught. Now, Morgan, I want to ask you about this, right? And this is less of a current events question, right? Because I know you're constrained to what you can say about current events. But this is more a question about, you know, in your job, running this centre, you know, I would imagine you would have to go out and speak to people about how to stop being
Starting point is 00:09:57 affected by things like this, right? And the advice is just, you know, don't expose these sort of management interfaces to the internet and whatnot. I mean, that's a big part of the battle that you're fighting, right? Yeah, one of the main remits of the Cybersecurity Collaboration Center is really helping the defense industrial base, which is 300,000 companies. That's huge. So what I would offer is that we go out and we talk to the defense industrial base, we say, hey, here's your exposed devices, here's the mitigation that you want to put in place. But one thing that I think we really point to is homage to Halloween, right? The callbacks coming back from inside your network. That's initial exploitation, right?
Starting point is 00:10:30 You really got to focus on, okay, when they got in your network, what did they potentially move to? Do they move laterally? What else are they after? I think one of the things we really focus on telling our partners is, yeah, that's initial access and exploitation,
Starting point is 00:10:40 but you really got to think about if they got in, what else would they go after? Yeah, I guess I was just curious of how much of your effort is, because look, for a long time, we thought, you know, there was perimeter stuff, perimeter hacking, and then everything moved client side. And now it's all going back to the perimeter, right? And you actually do, you know, if you've got a constituency like that, you got to tell them, hey, you know, maybe you shouldn't have your F5 exposed to the internet. Yeah, quite often we get asked, hey, what is your mitigation guidance?
Starting point is 00:11:05 How do we specifically protect against this? And we go back to the stuff we were telling people 10 years ago, though. Yeah. Right, here's the top 10 mitigations you should have against piracy actors. Put it behind a 1990s checkpoint. That'll do it.
Starting point is 00:11:15 Right, enable MFA, right? That is like one thing we say quite often. And so it is a little discouraging at times, but if people really focus on the basic stuff, it usually helps. Yeah, people are like, give me a different answer, right? They want something much more bespoke. We don't have it, right? Focus on the basic stuff, it usually helps. People are like, give me a different answer, right? They want something much more bespoke.
Starting point is 00:11:28 We don't have it, right? Focus on the normal stuff. What do you make of all this, Dimitri? It's a wild time. It's a wild time. Yeah, this is, as you said, is just going back in time. This is a long line of historical cases where you've seen network devices like Fortinet, Pulse Secure,
Starting point is 00:11:44 getting compromised. And as Rob says, there's so much you can do with this. You can use it for DDoS attacks. You can use it as proxies to obfuscate your traffic. You can obviously use it to get inside the network if it's a particularly interesting target. So this is going to be very valuable to those adversaries, either for resale if it's criminal activity or if it's nation state for direct use. Yeah, yeah. Now, look, someone could use this for DDoS, right? And it would be terrific for DDoS because you're talking about devices that typically have extremely powerful high bandwidth
Starting point is 00:12:12 connections. There is this HTTP2 rapid reset thing. Funnily enough, we had an outage because of this. So I finished recording, the last show I did was two weeks ago, finished recording it. We talked about the flaw, which was, I think Microsoft saw massive exploitation, Amazon as well. They did a blog post, they fixed it for themselves and then posted about it. And of course my dinky little CDN that I use didn't get the memo. So their CDN just fell over for like 24 hours. And I thought maybe they're patching it. And I realized pretty quick, I'm pretty sure that they were getting hit with this the reason I want to talk about this one is there's not really it's a protocol level problem like this is just how HTTP 2 works and it's this one's going to get messy but I know you actually had time because I planted the seed with Dimitri see I told him about this
Starting point is 00:13:00 yesterday and I knew he was just going to go and like read the spec and he did. So why don't you walk us through actually what this problem is here? Sure. So HTTP2 is the obvious improvement to HTTP1. It's a protocol that was developed by Google to optimize handling of HTTP requests, where you can basically stream multiple requests through the single connection. So in previous protocols, you would initiate a different TCP connection for each request, very inefficient, particularly from a server-side perspective. But one of the things that this protocol allows you to do is send the request over that connection and immediately cancel it. And what these attackers are taking advantage of is that you can only have 100 requests outstanding in the protocol. So they're sending 100 requests, they immediately cancel on them, and then they're sending 100
Starting point is 00:13:43 requests again, immediately canceling them. And, of course, on the service side, you have a denial of resources situation taking place because you're allocating resources to handle those connections. You get the cancellation request, but only after you've already started doing the work, and then you have the new requests coming in. So there is no really protocol-level solution here. What you need to do is basically build in heuristics into your servers to look for, you know, on a typical connection,
Starting point is 00:14:08 how many cancellation requests are you getting? Is this unusual? We're going to stop handling them. So it's a server by server situation that you have to basically apply these heuristics to. That's why there's not a patch and there won't be a patch. Yeah. Everyone's going to handle it on their own.
Starting point is 00:14:23 They're handling it, yeah. So there's like, and that's what makes this messy is everyone's going to handle it on their own. They're handling it, yeah. And that's what makes this messy is everyone's going to have to decide how they want to address it. Yeah, but basically this created the biggest denial of service attack that Google and Amazon and others have ever seen because it basically 100x's the potential
Starting point is 00:14:38 for these layer seven attacks that you've seen before by being able to shove so many requests in into that one connection. And mitigating it isn't just a matter of soaking traffic you know yeah you have to change the code yeah exactly yeah what do you think of all this rob uh you know the the ddos is a nuisance effect until you're the victim of the ddos and your site is down and your business can't function um you know it's it's a money making activity. It's something people do for the lulls. But it is increasingly a weapon. So you know, the Israel conflict right
Starting point is 00:15:14 now, they're undergoing massive DDoS because the hacktivists can. So it's, you know, it's a weapon for the people. But Rob, we've also seen sophisticated nation state intrusions where they're using DDoS as a way to distract resources and obfuscate what they're doing too, right? So it's not just a nuisance. It can be used to hide in the noise and get your security teams to focus on something else. Yeah, like anything, the really sophisticated actors are going to combine multiple techniques and multiple things um to get at their goals and so ddos may be one of those things that they bury it in the noise yeah but i mean ddos has always been like a bad business and they don't make money they don't like you compare it to like any other type of cyber crime it is like total loser crime you know so it's great for like booting people off like someone keeps hitting you on a game server or whatever you go give ten dollars to a stressor service who's
Starting point is 00:16:10 committed so many crimes to build that buddha service right and he's committing so many crimes by taking your ten dollars fifty in crypto to knock off some you know gamer it's it's a pretty good business for the ddos prevention companies well right? It's not really about making money. It's about imposing cost, right? If you've got to reallocate resources to deal with it, to Dimitri's point, if you've got to – and we saw that back in 2012, right, when the Iranians were DDoSing the financial institutions. That was about imposing costs on those companies
Starting point is 00:16:38 and just hurting their business. But did it, though? Did it? Because it was mitigated pretty easily. Like, you know. It was still in the early days when we were figuring out how to really mitigate those type of attacks so yeah yeah i don't know i just i just think ddos yeah as i say but i think activists are the ones who are keeping this alive these days they're the ones who are executing it with the most impact yeah yeah yeah uh now i
Starting point is 00:17:01 did just change the order as i said because we actually had a bunch of other stories in here that i didn't really want to talk about in detail it's just it is one of those time warp weeks you know is it a time warp or is it a groundhog day it's something it just feels eerie when we're still talking about atlassian confluence cves being used i'm so sorry as an australian for for atlassian uh i'm so sorry i i do want to apologize to you because i know you look at the people in this room look at the people in this room look after the defense industrial base and I'm sure they're what we would call heavy users. And yeah, yes, it's quite bad. And then there's this JetBrains thing that North Korea's using and we've got Citrix putting out patches for NetScaler that are being bypassed as well.
Starting point is 00:17:41 Like it's been a rough couple of weeks, you know? It certainly feels that way. You know, for us though, the focus is often on, you know, who's doing this and why. And I'll tell you, you know, certainly Atlassian, that's in the top 10 list of exploited vulnerabilities by the PRC. I'm so sorry. I'm so sorry. You see it all the time. And then as we talk about the Citrix Netscaler, again, we saw the Chinese build a custom backdoor for that. So they've spent a lot of time and energy in a capability they lost, but they built expertise on Netscaler architecture and infrastructure. So they're going to keep coming back at this stuff. But that's expertise worth having. You know what I mean? Because there's so much of that stuff out
Starting point is 00:18:27 there. And there be dragons, I guess is what I'm getting at, right? The other thing this brings out to me is, you know, why do you keep seeing similar devices or similar products keep getting exploited? Because they're not very good. Well, no, Google Project Zero does some really great stats where they talk about when you find a flaw, when you get a zero day and generate a CVE, chances are good around that there is more flaw. There are other things that...
Starting point is 00:18:57 This is a target-rich environment. Yeah, the coding practices are bad, the choice of non-memory-safe languages, etc., etc. It means there's other things there, and we keep seeing these devices exploited practices are bad, the choice of non-memory safe languages, et cetera, et cetera, it means there's other things there. And we keep seeing these devices exploited because there's adjacent flaws. Yeah. Yeah. I mean, how many more Fortinet bugs are we going to see, right? For example. Absolutely. Yeah. All right. So let's now talk about North Korea's moneymaker. This has been in the news a lot over the last few
Starting point is 00:19:25 years but we've got a great write-up from Kim Zeta you know dishing more details on how these you know these poor North Koreans are being sort of forced by the government to go and get you know work from home jobs with Western companies and you know. It probably beats other jobs that we'd get in North Korea though. Well it would that's true but it would be even better if they got to keep the money instead of just sending it all to the government, right, which is how this works. But, you know, there's often a reaction on this,
Starting point is 00:19:52 which is, oh, they're just trying to get access to these organizations so that they can do, you know, steal money and do fraud and steal crypto and whatever. But it really does look like this is a money-making enterprise for the North Korean government, which is, hey, we've got an educated workforce. Let's pretend they're from somewhere else and get them doing some of these jobs. What a world. You know, as I was reading that write-up from Kim and the level of effort they go through to get those jobs, you know,
Starting point is 00:20:21 stealing fake IDs, hiring people to do the interviews, proxying their connections through other people, and then actually having to do the job just to get the money. I just kept thinking, isn't it just easier to steal crypto? This is a lot of work to just get a salary. It seems like there's a lot of easier ways to do crime and fund the regime. Obviously, they're doing a lot of that as well. But it is interesting that they're diversifying into doing actual work. And apparently, some of that work is actually But it is interesting that they're diversifying into doing actual work. And apparently some of that work
Starting point is 00:20:45 is actually quite good. Yeah. They deliver. Well, the crazy thing about North Korea is it is just such an organized crime organization. Like these days,
Starting point is 00:20:52 that's what it seems like. And this is what an organized crime org, this is how they do it, right? You know, if it makes money, do it. My lesson learned is sanctions actually work. Because look how creative the North Koreans have to be to come up with these ideas,
Starting point is 00:21:08 whether it's the IT workers or it's the crypto scams or the energy to fish zero days out of the security community. They're working hard, man. Yeah. It's the ultimate outsourcing business. Yeah. I mean, maybe that's what they need to do you know like beef become fluffy bunnies and then they can have great it services to the rest of the world
Starting point is 00:21:31 wouldn't that be great wouldn't that be nice wouldn't that be nice uh now we got some new good news uh which is the there's been a ragnar locker they're a ransomware crew that you know often you hear about the takedowns of these ransomware crews and it turns out to be affiliates or it turns out to be some ransomware developer where the ransomware was never used. This is Ragnarok. This is a big one. The leak site has gone down. There's been a couple of arrests. Looks like some of them were in Europe, maybe one in Ukraine. I think there'd been some previous arrests from this group, but it looks like, yeah, they're definitely having a very bad time. I think if you're not Russian, it's really dumb to do ransomware, and this is why, right?
Starting point is 00:22:07 Well, the guy that they arrested is actually one of the lead developers from Czechia, so taking down developers is important because there's only so few people that are working on developing ransomware codes, so I think it's going to be impactful. Yeah. One of the big conversations we've been having on the show
Starting point is 00:22:24 is, like, looking at the long, long period takedowns that the FBI is doing, like Hive is a good example of that and these investigations and whatever. And they love just going and collecting evidence, which isn't quite the cyber knife fight that I've been advocating for. But it does look like gradually we're getting to a point
Starting point is 00:22:40 where it's starting to feel like authorities are getting serious about tackling this problem way too late, but it's starting to feel like authorities are getting serious about tackling this problem way too late, but it's finally happening. I mean, what's your sense of how all this is working out? I'll tell you, FBI is doing an incredible job. And it's not just them, it's all of the relationships they have and the partnerships. So international partnerships with industry, with us, with CISA, you know, I think the tools they're applying, they're eroding the trust inside that ecosystem, right? You can't trust that the person you're talking to is part of the criminal gang you used to work with. You can't trust that this site hasn't been penetrated and the FBI is not behind the secure messaging app. And that activity has just really had a huge impact on the ecosystem.
Starting point is 00:23:32 I mean, has it though? Because we haven't felt it yet. And I think that's the criticism that I get from other people is they're like, well, you know, it's not like ransomware is slowing down. So, you know, I mean, are we going to start feeling this eventually? I think without the governor of what they're doing today, it would be a 10x issue. 10x, that would be quite a thing, wouldn't it? Now, we've also got this, I don't know, study out of Lloyds of London. Did you have a chance to read this one, Dimitri? I did, unfortunately.
Starting point is 00:24:03 So they're saying there's like a 3% chance. So they're saying it's a one in 30 year risk. You know, the same way that we do that with flood events and things like that. It's a one in 100 year flood or a one in 30 year flood. They're saying there's a one in 30 year risk of a hypothetical cyber attack
Starting point is 00:24:21 targeting the finance sector and like transaction processing and stuff that would cost the world economy $3.5 trillion. On one hand, I mean, okay, sure. But on the other hand, I think how would you stage an attack that could do that much damage? I'm sorry. This is bogus because they're talking about the entire sector being targeted, an update being propagated to all these devices, and assuming that recovery cannot be done quickly, like this seems like so ridiculously unlikely to me. I don't buy the 3% likelihood
Starting point is 00:24:53 at all. I'm with Dimitri. You know, if there's one sector that has its act together, it's the finance sector, because they can quantify the risk, they can assess how much it would cost. Aren't you responsible for helping the defense industrial base there, Mr. Rowe? Yes, and if I said there is one sector, the defense industrial sector has come up. I want to point out that Morgan is shaking her head right now. They've come an enormously long way. And especially if you look at the big companies, they are exceptionally well buttoned up
Starting point is 00:25:26 as we've developed over time. But what they have is the whole ecosystem then cascades down into the small subcontractors. And the adversaries go after them. And even they've learned that we all have lawyers. And so the law firms that do your M&A, they do your intellectual property and your patents. Supply chains don't just exist with software, right? They get hit. And when was the last time you met the CISO of a law firm? Yeah, yeah. But, you know, the bigger problem here that I think a lot of these studies underestimate,
Starting point is 00:26:00 and having been part of the response to a lot of these wiper attacks from the Ramco cases, Sony wipers, NotPetya, you name it, people don't appreciate how resilient companies actually are. In fact, most companies don't appreciate how resilient they are until something like this happens. And they figure out how to make their business continue to run. Like Merck suffered a devastating attack during NotPetya. The shipments of their goods still went on. They went back to pen and paper. They were able to trace things. It was ugly. It was really, really difficult. It was absolutely a nightmare for them, but they figured it out. So I think in a lot of these situations, a lot of these companies will figure out how to get back up
Starting point is 00:26:41 really, really quickly. I think this is one of the problems I have with, you know, some of the more dire predictions of cyber war is there's people who might do research into, you know, various critical systems and stuff and say, see, you know, we take down these systems and everything stops. And it's like, I think my joke to you was when we were talking specifically about some of the software involved in the F-35 program.
Starting point is 00:27:02 You know, there's been research in that, which is bad. There's stuff they've got to address there. But also, like, people will load those planes with a hammer if they have to, you know what I mean? And I understand that maybe not everything's going to be working that great, but the idea that you could just completely stop everything with just computer... I mean, I do...
Starting point is 00:27:20 Look, I hope I'm... I know Morgan is going to be talking about VSAT later, but that's a perfect example, right, where you had a tactical success by the Russians, but then you had Starlink come in, other methods of communications, and the Ukrainians were able to basically get their comms up and running relatively quickly. Look, let's get into that part of the discussion now.
Starting point is 00:27:38 So Morgan's here, who's been hanging back in the news segment and is ready to talk all things CCC. So, look, why don't we just start off for the listeners. We have previously talked about the CCC on the show with Rob, but it would be good to just recap exactly what it is that you do here. What would you say you actually do? So we defend the dib, first and foremost. But the Cybersecurity Collaboration Center is really about taking what we know
Starting point is 00:28:01 from an NSA perspective, our insights, our technical expertise, and ensuring it gets into the hands of the people who need it most and can action it. It is operationalizing intelligence. That is a fundamental shift for NSA and is something we've been doing over the last three years or so. And so one of the main things that we've done is ensured that we can share information at the unclassified level. When a lot of people talk about information sharing, when you get to like the second and third layer of that conversation, people are not exactly sure what it should look like and why they're doing it. And really what we're trying to do is when people ask for context, intent to help with prioritization and how they put resources
Starting point is 00:28:35 against the problem, that's where we want to play into that conversation. You could tell people why they need to be doing something instead of just telling them to do it. Well, it comes down to the fact that we just don't want to take technical indicators and throw them over the fence and be like, good luck, I hope you figure it out. And oh, by the way, we're not really sure if this is the data you need, but we hope it's helpful. I mean, that's not going to work. And, you know, we've seen over years that the constant targeting
Starting point is 00:29:00 of the defense industrial base, right, they're targeting everything. I mean, let's be frank. There's some successful operations against the U.S. defense industrial base as well? They're targeted. I mean, let's be frank, there's some successful operations against the US defense industrial base as well. Like this was a problem. Yeah. And to Rob's point earlier, when a foreign adversary has a list of requirements of things that they need to find out about and learn about, they're going to continually hit that target every single day until they get it. And so we have to help them in this space. We have to say, hey, here's who's coming for you. Here's what their capabilities are. And this is how you protect against it.
Starting point is 00:29:27 And oh, by the way, when you kick them out, you're going to come back tomorrow. And so that's why it has to be a constant conversation that we're having. I mean, it's a bit of a surprise, I think, to people outside of this space that the defense industrial base was kind of lagging. Well, it's huge. I mean, there's so many companies
Starting point is 00:29:42 that are part of the defense industrial base and defense contractors. And it's not necessarily the big primes. The primes have robust, significant threat intelligence teams, right? Even though they're getting hit every day, they have a lot of talent. Some of it's come from NSA. And so we know that they can do it. But when you talk about the small to medium sized companies, the people that are producing those critical components for the F-35, like we have have to be talking to them and giving them that level of support, because they can't ingest all this information, they can't protect themselves from nation state threats. So we have to be there with them. It's interesting. I remember seeing recently, I think it was the North Koreans hacked into one of the Russian rocket manufacturers. And I'm like, wow, they must be doing all right at NSACC. They're having to go off and get the second best. Yeah, we want to make it harder for them.
Starting point is 00:30:26 We want them to go against the other targets, particularly our adversaries would be phenomenal. Yeah, what a result. That's fantastic. They just want to make sure they're getting the real stuff for the shells that they're providing to the Russians. That's it. A bit of an audit, right?
Starting point is 00:30:40 Yeah. So look, one of the things you were able to do recently was go and do a talk at Black Hat about the Viasat hack. And I think you did that in conjunction with them, right? Yeah. So look, one of the things you were able to do recently was go and do a talk at Black Hat about the Viasat hack. And I think you did that in conjunction with them, right? Yeah, absolutely. And I think that was a huge win for us to be able to publicly come out. If you would have asked me three years ago, if we would be standing on stage next to a big company talking about an operational success for NSA, that was unheard of. And so we went with them and we talked about what happened on the 24th of February. The fact that we had built this longstanding relationship with FIOSAT and they gave us that call and they said, hey, we've got an issue.
Starting point is 00:31:11 We don't think this has to do necessarily with some type of misconfiguration. We have an attack. And within five days, they shared technical artifacts with us. And we were able to take that information and really understand how the attack happened, what we knew about potentially who was behind it. Attribution does matter to us here at NSA. But most importantly, build tailored mitigation guidance that we were able to give to all the other SATCOM providers that were providing communication support to Ukraine. Did those mitigations get tested? Yes, absolutely.
Starting point is 00:31:40 Like, how do you protect against this? No, no, what I mean by tested, I mean, did Russia go after those ones as well? Yeah, we see constant targeting of all the SATCOM providers, right? That is that is a constant thing. And it's not just from the Russians, right? It's from everybody. They recognize the significance that if you're able to disrupt communications, especially on a military front, it has huge impacts. And so they're going to constant that's a constant requirement that we always see. I mean, it feels like Ukraine, the thing that we've learned the two technologies that have come out as like kind of wild cards i mean for us outside maybe not so much for the people in this room but it has been satcom
Starting point is 00:32:12 and fpv drones specifically like we saw isis 10 years ago putting grenades on drones and whatever but now we're seeing these fp fpv carcasses with like rpg seven rounds zip tied to them taking out tanks and it's like that those seem to be the two things that are really crazy you know i mean and FPV carcasses with like RPG seven rounds zip tied to them, taking out tanks. And it's like, those seem to be the two things that are really crazy. You know? I mean, and we learned a ton and we're still continuing to learn a ton from that
Starting point is 00:32:31 conflict, both in the cyber domain, right? How do you protect against all the things that the Russians were throwing at the Ukrainians? Industry was huge in that space. They really were seeing it on the front lines. And I would offer that we weren't the only ones learning from that crisis,
Starting point is 00:32:44 right? Our adversaries were also watching it. And so it's something that we're going to only ones learning from that crisis right our adversaries were also watching it and so it's something that we're going to have to evolve from pretty quickly yeah yeah now look another couple things because we've got a couple more topics we want to get through with you today another thing that you highlighted is something you'd like to talk about is actually the the the PRC's switch to more living off land techniques and of course you know we've seen public reporting on this. Telco's in Guam getting hit
Starting point is 00:33:07 and we've seen some interesting reports come out about, you know, their various living off the land techniques. Now, living off the land is not new, certainly, but we are seeing a big pivot into it from Chinese threat actors, Chinese government threat actors. They are doing some cool stuff. Like, you know, they're improving the in the world, in the wild practice of using lol bins and just living off the land generally.
Starting point is 00:33:34 But why is it you wanted to talk about it? Because I'm curious. Yeah, it's something we're really concerned about in terms of like scope, scale and sophistication. To your point, they're doing some really unique things, things that we're concerned about. And the fact of the matter is, is right, it was talked about in the annual intelligence assessment by DNI Haynes. They have the ability to degrade U.S. critical infrastructure in areas that aren't just espionage. And he talks about pipelines and oil. It's crossing a line for us, and it's something we really want to focus on.
Starting point is 00:34:02 And so we're most concerned about it. And the fact of the matter is that PRc cyber actors have evolved significantly over the last couple years they've learned they've they've honed in on their trade craft they're just going to get continually better you know rob talked about it earlier and the fact that in the hafnium they doubled down on exploitation after being and again with the barracuda stuff and the really like because i've had some interesting discussions around that with people. And the interesting part about that was that I think the problem that we had was that they burrowed so deep into those devices that they were going to have to be woodchipped.
Starting point is 00:34:34 And they would have done that knowing that they'd already been rumbled. And so they're just imposing extra costs. And just being, you know, as I say, it just wasn't cricket. Yeah, and in May when we released the Cybersecurity Advisory on living off the land activity, right, you'll see at the end of the acknowledgement section there's like 11 industry partners that help build that hunt guide. I will tell you in weeks or days, weeks, hours,
Starting point is 00:34:57 it all kind of blends together at this point, of briefing that to various sectors and industry partners. There are times when we had multiple people come back to us and they're like, hey, can you just send us over the IOC list? And we're like, no, no, no, that's not how living off the land works. You're going to have to put some significant resources behind it to be able to find them. There are no file hashes here.
Starting point is 00:35:13 I know. And that's hard to explain to people. And it takes a lot of investment. That's why I was asking, why is it that you want to talk about? What's the message here? And I think it really comes to the question, why is PRC doing this? And I guess it's because defenders aren't used to thinking about it as the you know the be all and
Starting point is 00:35:30 end all of an adversary's tradecraft right so they're more thinking like file hashes and IPs and whatever we've got to up our game you've got to die identity access management you've got to know what your sysadmins are doing are they in every single day are they actually supposed to be doing the activity that you see them doing or Or is that suspicious, right? And that takes a lot of time and effort to build those life cycles and that behavioral analysis. So it's really, this is a counter detection thing. Yeah, it's going to take a lot of work to find them. And oh, by the way, when we do, they're going to come back. And so it is going to take a concerted effort across everyone in the industry, as well as the net defenders, to really put a lot of time and effort behind this.
Starting point is 00:36:07 Although I have to say that if your IT administrator, your average IT administrator, knows these really esoteric Windows commands, you should give them a raise because chances are they don't. I know. If they're like side-loading DLLs to do some administrative function, you're like, hey. I mean, we're all upskilling to new tradecraft, right? Like we've got to learn how to detect this
Starting point is 00:36:27 and really defend against it. And that's going to take a lot of work for us, especially on living off the land techniques. I mean, but you were familiar with living off the land, obviously, when you were still at CrowdStrike. And you know- Except me all the time, yeah. Yeah, exactly.
Starting point is 00:36:39 And you and I have been talking about that a lot. I mean, one thing that I actually like about EDR platforms is it's capable. They're capable of viewing all sorts of execution events and whatever but I don't know you can still the problem is the number of new like living off the land paths that people are still enumerating I'll tell you Patrick one of the telltale signs was seeing these commands that you don't see before either the utilities themselves or the different command switches that they were using you're like there is absolutely no way that any administrator
Starting point is 00:37:08 in my company actually knows this really undocumented command or competent yeah or doing something really really strange with it with a standard utility uh that that's often what what is going to be tell but some of it some of it can be subtle you know when they find their way into like obscure scripting environments and things like that. Like, I just, yeah, it gives me the, as well. PowerShell is probably the biggest problem. PowerShell, at least you can do stuff like run it all
Starting point is 00:37:35 in constrained language mode. And get rid of PowerShell, too. Like, there's stuff you can do for PowerShell. You can, except that it's going to break everything in your environment, so you really can't. I think the other part that I would just emphasize, and it's one of the things that we found foundational to the CCC, is that it's not just about EDR providers.
Starting point is 00:37:57 It's about ISPs and tracking all the covert networks that the PRC is using to get into these U.S. critical infrastructure and these companies. It's about cloud providers understanding the personas behind the tradecraft and who these entities possibly are. And then it's EDR and endpoint protection to really figure out how do you find them once they're in the network. And it can't be in those silos. Like we have to share amongst all of us. And that's really hard to do right now because you got to get past a lot of lawyers. And then you've got to be able to get down to the technical details and everyone that has the expertise to be able to have that conversation. And that's a heavy lift. And it's something that we're trying to perfect, and it is a constant challenge. Now, the last thing we're going to talk about today
Starting point is 00:38:28 is actually the AI Security Center. And I did ask you if you heard our mean joke on the show about the AI Security Center, which to me sounds like something you would do when you want to get a funding bump. You know, hey, look, we're doing stuff with AI. We're going to protect America against AI. But why don't you actually walk us through what this AI Security Center is?
Starting point is 00:38:48 Because this is a CCC thing. Yeah, so absolutely. General Nakasone, end of September, announced that the AI Security Center will be part of the Cybersecurity Collaboration Center. And really what it is focused on is, again, what we think are our superpowers and what we do best. The fact of the matter is that we've been looking at AI technology for years here at the National Security Agency.
Starting point is 00:39:08 We recently did a study across the agency to think, okay, where do we really need to focus our efforts and have a more concerted lean forward on how we're looking at things? And AI security was one of those key aspects of it. When we think about AI security, it's really about protecting AI companies, their networks, and their intellectual property. We want to maintain that, to your point, America, freedom, US competitive advantage. Great. I've got an America button. What can I say? It's great. I love having an America button. It's not going to get out of my head every time I say AI security center, you for that uh but we're we want to ensure that we're helping those companies protect themselves from adversaries that want to steal that technology and then separately we want to
Starting point is 00:39:53 really look at the ai life cycle everywhere from data collection to deployment to operations and say how do you secure every single component and every single step to ensure that it has that integrity behind it when we're looking at what's the output. So you're trying to secure the AI ecosystem and the AI business. Yeah, it is AI security, not AI safety. Yes. Right. And so that's a little bit different. That's someone else. That's good. Let Google worry about that. Yeah. Let the hippies do that. You know? Yeah. AI safety applies to us when we look at how do we take AI technology and use it in national security systems and the defense industrial base, right? Because we have those national manager roles. And so
Starting point is 00:40:28 we play both sides, but mostly that AI security aspect is why it's named the AI security side. I mean, it's just wild when you start thinking about the implications of the AI boom to even global security, right? When you've got, you know, TSMC with just such a stranglehold on the ability to make this hardware. And anyway, that's a whole other topic that I'm seeing people already shift uncomfortably in their chairs. So we'll just... No, but this makes total sense for you to do
Starting point is 00:40:52 because it is one of the administration's key priorities to deny the PRC the ability to develop advanced AI models, right? That's what all the chips export controls are all about, preventing their own AI companies, which just ended up on the entities list last week, of being able to manufacture those chips, including on TSMC fabs. So one way they could get around it is by stealing the models directly from REA companies. So you got to protect that too. But I mean, that the hardware part of this is so big as well. I mean, that's kind of what I was getting at with the TSMC thing is because
Starting point is 00:41:23 it's great to have the world's best models, but if you don't have the gear to run them on at scale, right at scale, because it's just so computationally expensive, the hardware is expensive, everyone wants it. This stuff is essentially going to be rationed for the next 1020 years, AI is going to be expensive. Well, I think the bigger problem and I talked about this on my own podcast with one of yours, Gil Herrera, who runs research here at NSA, is the real problem with the eye right now is the error rate. When you're upwards of 10% in terms of error rate, it's fine for applications where you're going to have a human review it,
Starting point is 00:41:56 but when you don't... It goes back to what we kind of talked about. NSA, we've got insights into how adversaries want to exploit specific companies. That's our superpower. That's why we're going to share it through the CCC, because we've got insights into how adversaries want to exploit specific companies. That's our superpower. That's why we're going to share it through the CCC because we've built those relationships. And secondly, we're taking the hacker mindset and we're applying it to defense, right? How do you protect the models? And so that's really our strength and why it's here.
Starting point is 00:42:16 But isn't this outside of your typical constituency, you know, the people developing these models? No. So AI technology, key component for the defense industrial base, right? Of course. It's operationally relevant. So those companies that we want them to be our partners. But if you get wind of something going after like a civilian, you know, non-classified, non-DIB company, I mean,
Starting point is 00:42:31 is that something you're even able to move on? Yeah, because especially if it's our insights, but we're going to partner with FBI. We're going to partner with CISA, right? That's the whole aspect of ensuring who's the best position to have that conversation. It also comes down to like, who's just got the better relationship with the company, right? And that's really where we find comfort in trying to share as much information as possible. Is there a company out there that's
Starting point is 00:42:52 not part of the dip? Isn't everyone self-sufficient? There are a few, Dimitri. I don't know it off the top of my head, but I'll get back to you. All right. Well, I think we're going to wrap it up there. Morgan Adamski, Rob Joyce, Dimitri Alperovic, thank you so much for joining me to do this podcast. Hey, thanks for schlepping microphones all the way from Australia. It's really appreciated.
Starting point is 00:43:11 Thanks for having me, Patrick. Absolutely. Thanks for coming. Hey! Hey! And I'd like to say a big old thanks to everyone at NSA who made that happen. That was a lot of fun. It was great to come out and meet everyone. It is time for this week's sponsor interview now with Feroz Aboukadeje,
Starting point is 00:43:40 the founder of Socket. Socket is a software supply chain security company that can stop malicious packages getting into your software. Khadije, the founder of Socket. Socket is a software supply chain security company that can stop malicious packages getting into your software. And Feroz and his colleagues have been playing around with LLMs lately, and the use case here actually sounds pretty reasonable. So here's Feroz explaining how Socket is using LLMs to explain things to users, but also to automate some analysis of software packages and changes made to them and whatnot. It's interesting stuff. Here's for us. Yeah. So Socket, we scan and analyze all
Starting point is 00:44:13 open source packages that are published to NPM, PyPy, and the Go ecosystem. So we're looking at all this open source code. It's too much for a human to analyze, right? Way too many packages. We look for signals in those packages that indicate the package could be malicious. And so just to be clear, malicious is different than vulnerable, right? So we're looking, I mean obviously a lot of the times when people think of open source security, dependency security, they're thinking of vulnerable packages. We obviously do that, but also where the LLMs are most useful actually is in looking for malicious intent. So when we find signals that indicate the package is
Starting point is 00:44:53 malicious, we use LLMs in two ways. The first, and the one you alluded to, is where you get it to explain in plain English to the developer what this code is doing and why they should care. Right. Well, I mean, you say plain English,
Starting point is 00:45:09 but it could just as well be plain German or plain Japanese, right? Which is another nice thing about LLMs, but go on. Yeah, fair point. We don't have
Starting point is 00:45:19 that piece working quite yet, but one way to think of it is that you take this machine output, right? Think about the output from a lot of the tooling that we use every day. It's not necessarily
Starting point is 00:45:32 something that a developer who doesn't care that much about security is going to dig into; they're just trying to get their job done, just trying to ship a feature. The LLM is something you can use to explain the alert and why they should care. So that's how we've used it, right? When we find a package that's been compromised, we can explain to them, literally in English, what the problem is. So we can say it reads environment variables from your environment and sends them to a random IP address. Okay, pretty clear that that's not a good thing. Yeah, I mean, you just did a screen share before we
Starting point is 00:46:05 started recording and showed me there was one instance where some package got published that takes a bunch of environment variables and throws them into a Telegram channel, right? And I'm guessing it's not LLM-based analysis that's triggering the alert, but being able to explain that in plain language to a developer is probably more useful than just triggering an alert with a bunch of suspect strings in it for them to look at, right? Yeah, yeah. It's also pretty good at indicating, like, if we're not sure that something's malicious, but we still want to flag it because it doesn't have that many downloads and it looks sketchy. I'll give you an example, right?
Starting point is 00:46:47 Obfuscated code is added to a package, right? Having the LLM explain why that's bad is really useful. And you'd be surprised, actually: it can oftentimes de-obfuscate the code. I'm definitely not trying to say this is a panacea that can deal with everything, but we find cases all the time where we've caught malware that's still live on NPM or PyPI. It's out there and developers could install it if they were so unlucky as to install that
Starting point is 00:47:15 package at that moment. And we're catching it, and the alert will come out saying something like, "this script is obfuscated and it dynamically creates functions through string concatenation to collect the user's environment variables and send them to a remote server." That's an actual alert; I just read that off my screen. I guess I'm always surprised at the number of ways you can use these things. They're not going to solve all our problems, but they're definitely good at explaining things. And they're definitely yet another signal to use when trying to figure out if something is malicious.
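For a feel of what that kind of explanation step can look like, here is a minimal sketch, not Socket's actual code: it assumes you already have a list of static-analysis signals for a package and some LLM completion client, and it just assembles the prompt that asks for a plain-English alert. The Signal shape and the complete callback are illustrative assumptions rather than any specific SDK.

```typescript
// Minimal sketch (assumed interfaces, not Socket's real pipeline):
// turn detected behaviours into a plain-English alert via an LLM.

interface Signal {
  kind: string;     // e.g. "reads-env-vars", "network-call", "obfuscated-code"
  evidence: string; // the snippet or value that triggered the signal
}

// Stand-in for whatever chat/completions client you use; its shape is an assumption.
type CompleteFn = (prompt: string) => Promise<string>;

async function explainAlert(
  pkgName: string,
  signals: Signal[],
  complete: CompleteFn
): Promise<string> {
  // Summarise the evidence so the model only reasons over what was actually detected.
  const evidence = signals.map((s) => `- ${s.kind}: ${s.evidence}`).join("\n");

  const prompt = [
    `You are reviewing the npm package "${pkgName}" for a developer who is not a security specialist.`,
    "The following behaviours were detected by static analysis:",
    evidence,
    "In two or three plain-English sentences, explain what this code does and why it may be dangerous.",
    "Do not speculate beyond the evidence provided.",
  ].join("\n\n");

  return complete(prompt);
}
```

The key design point Feross describes is that the model is explaining signals that cheaper analysis already produced, rather than being asked to find the problem cold.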
Starting point is 00:47:51 Yeah. So, I mean, one thing that I've been saying and banging on about quite a lot over the last few months is that I see this stuff as being primarily useful as an interface, as a way of connecting people to computers through language, right? And I think there's all sorts of productivity benefits that come from that. So certainly the main use case that you've mentioned fits into that, right? And just like with Corelight explaining alerts, now you're doing the same thing. You're basically explaining the alerts, you know, have a look at this alert and tell the user why it's occurring. But it also seems like there is that actual analysis component as well. And like with what GreyNoise is doing with SIFT,
Starting point is 00:48:30 it seems like there is actually some analysis going on with the LLMs. So I guess what I'm getting at is perhaps I was wrong in just saying that it was purely an interface thing, but it seems like they're distinct functions, aren't they? There's the interface function and the analysis function. Where does it fall over though? Because you would have played around with this quite a lot. What hasn't it been able to do?
Starting point is 00:48:55 Yeah, so it definitely can't de-obfuscate arbitrarily obfuscated packages. And sometimes it thinks things are malicious that are merely bad code, right? To put it mildly, probably most open source packages you shouldn't use, right? There's a lot of stuff out there, I'd say maybe the bottom half.
Starting point is 00:49:24 So it can't distinguish bad malicious from bad incompetent. Yeah, and there's a lot of stuff out there that I would say maybe the bottom half. So it can't distinguish bad malicious from bad incompetent. Yeah, and there's a lot of bad incompetent. I mean, there's a lot of packages out there where we'll find that there's a network call being made and the package has no business talking to the network, right? And so that's where the LLM can actually be helpful, right? If you prompt it correctly, you can get it to say, okay, given the purpose of this package, what could this network call be doing? What data is this file looking at? Where is the data going?
Starting point is 00:49:55 And you can get it to sort of do a little bit of thinking there for you. So I'm guessing you've rolled this out into the product. You've actually got people using it. What's the feedback been like? Because this is all new stuff. Are people telling you it's magic, or are they just like, eh, it just becomes normal? I mean, it's been rolled out for a while now, and it's how we catch, you know, several hundred malicious packages per week that are being published to all these different package managers. But I mean, you're catching them.
Starting point is 00:50:32 You're not catching them necessarily with the LLMs, are you? I mean, you're catching them with different signals and then just getting the LLM to explain it. I mean, how many? Well, that's the question. How many are you actually catching with LLM-based analysis? And are the false positives that you alluded to making that tricky? So since we rolled this out in, I think it was March, we've used LLMs to detect about 8,700 malicious packages. Now, when we find something that's malicious,
Starting point is 00:51:02 we report it and try to get it taken down from these registries to protect the community, and then block it for anyone who's a Socket customer, so that if their developers are unlucky enough to update, or typo an install, or however they end up pulling these packages in, they'll be protected. People are benefiting from it already. We've seen cases where Dependabot will bump a package to fix a CVE, but then that new version actually has some protestware in it. We talked about protestware last time,
Starting point is 00:51:36 where maintainers maliciously modify their packages. And so then the Socket bot will come in on that pull request and say, hey, actually, don't update that. And so you'll have a war of the bots fighting each other on the pull requests. So yeah, it's definitely a core part of the product. It explains all the alerts that we send.
Starting point is 00:51:57 The developer sees the explanation directly in the product. People like it. And then we do have a human in the loop to go and edit those if they're not quite exactly what we want to say and what we want to put forth. So there is that human element as well. Well, again, I mean, that's the other thing that I've said a lot about LLMs, which is they're a productivity tool. And, you know, this is a great example of that where you still need someone to
Starting point is 00:52:16 oversee it, but it can do stuff at scale, stuff like this. So you've obviously got signals that you will pull out, like shunting data off to Telegram bots; you don't need an LLM to tell you that that's shady, right? So what percentage, I mean, are you LLM-ing everything that you get a hit on and doing that LLM analysis, or is some of it just bypassing that step? Or have you built this into your process where you've got some signals that surface something and say this is dodgy, then it gets kicked into the LLM to do a little bit of analysis, then checked by a human, and then onwards? Yeah, it's the second thing you said.
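To make the shape of that triage flow concrete, here is a rough sketch under the assumptions Feross describes: cheap deterministic checks run on every published version, the model only sees the small fraction that trips a signal, and a human reviews the output before anything ships as an alert. All of the names and heuristics here are illustrative, not Socket's real code.

```typescript
// Rough sketch (assumed names, not Socket's real pipeline) of a tiered triage flow:
// cheap signals on everything, LLM only on hits, human review before alerting.

interface PackageVersion {
  name: string;
  version: string;
  files: Map<string, string>; // path -> source text
}

interface TriageResult {
  pkg: PackageVersion;
  signals: string[];
  llmVerdict: string;
}

// Stage 1: cheap, deterministic checks that run on every published version.
function cheapSignals(pkg: PackageVersion): string[] {
  const signals: string[] = [];
  for (const [path, src] of pkg.files) {
    if (src.includes("process.env") && /https?:\/\//.test(src)) {
      signals.push(`${path}: reads env vars and contains a remote URL`);
    }
    if (/eval\(|new Function\(/.test(src)) {
      signals.push(`${path}: dynamic code execution`);
    }
  }
  return signals;
}

// Stage 2 and 3: only flagged packages go to the (expensive) model,
// and its verdict lands in a human review queue rather than straight into an alert.
async function triage(
  pkg: PackageVersion,
  analyseWithLlm: (pkg: PackageVersion, signals: string[]) => Promise<string>,
  reviewQueue: TriageResult[]
): Promise<void> {
  const signals = cheapSignals(pkg);
  if (signals.length === 0) return; // the vast majority of packages stop here

  const llmVerdict = await analyseWithLlm(pkg, signals);
  reviewQueue.push({ pkg, signals, llmVerdict });
}
```

The point of the early filter is the cost argument that comes up next: sending every package, or every email in the Proofpoint analogy, to a model would be ruinously expensive.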
Starting point is 00:52:54 Okay, now I'm much clearer. Yeah, yeah, if you put everything into the LLM, depending on which providers you're using, but if you do it in the naive way, that's a great way to bankrupt yourself and use all your Series A funding, send it all to OpenAI. Yeah. Yeah. I mean, we heard similar things from Proofpoint, right? When they're looking to understand the intent of an email and whether or not it's BEC, you obviously cannot throw every single email that Proofpoint gets into an LLM; I think that would consume Earth's entire energy supply, basically. So you need to do some filtering. And you mentioned before that you could get false positives. How do you tune them out of this process, right?
Starting point is 00:53:37 Because at that point you're kind of beholden to the model to get stuff right. If it starts making mistakes, can you actually teach it and say, hey, you keep flagging these things and they're not malicious, can you please stop doing that? And does it? I mean, how does that work? Yeah. So we do tweak the prompts over time. And that is actually a big open area, not just for us but for the whole industry: how do you modify your prompts and improve their performance in a measurable, systematic way, so you're not just tweaking things and improving some areas while making other areas worse? So you need to have very robust test suites, benchmark suites, and a very rigorous process to do A/B testing across any potential change you want to make to the prompting and the training.
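In practice, the systematic evaluation Feross is describing usually boils down to something like the toy sketch below: hold a labelled benchmark set fixed, score every candidate prompt against it, and only promote a change if it wins across the whole suite. The types and the classify callback are assumptions for illustration, not Socket's internals.

```typescript
// Toy sketch (assumed types) of scoring a prompt variant against a fixed,
// human-labelled benchmark, so prompt tweaks can be compared measurably.

interface BenchmarkCase {
  packageSource: string;
  isMalicious: boolean; // ground-truth label assigned by a human analyst
}

// Stand-in for "run this prompt over this package and return the model's verdict".
type ClassifyFn = (promptTemplate: string, source: string) => Promise<boolean>;

async function scorePrompt(
  promptTemplate: string,
  benchmark: BenchmarkCase[],
  classify: ClassifyFn
): Promise<{ truePositives: number; falsePositives: number; accuracy: number }> {
  let truePositives = 0;
  let falsePositives = 0;
  let correct = 0;

  for (const testCase of benchmark) {
    const predicted = await classify(promptTemplate, testCase.packageSource);
    if (predicted && testCase.isMalicious) truePositives++;
    if (predicted && !testCase.isMalicious) falsePositives++;
    if (predicted === testCase.isMalicious) correct++;
  }

  return { truePositives, falsePositives, accuracy: correct / benchmark.length };
}

// A candidate prompt would only replace the current one if it scores better
// across the whole suite, so a tweak that helps one case can't silently hurt others.
```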
Starting point is 00:54:18 Now, when we talk about generating code, as opposed to analysing it, and look, this might be a really dumb question, but if I want to commit a malicious package into some sort of public repo, can't I throw my code into an LLM and ask it to make it look not malicious to an LLM? And would that work? I don't know about that particular case. Maybe we should give it a try. I mean, that's why you don't want to go with just an LLM as your only layer. Socket works with a whole bunch of different analysis. I mean, that's the whole point, isn't it? We'd have to bypass all of that initial filtering
Starting point is 00:55:02 as well. Exactly. Yeah. So, I mean, we look at stuff like who's the maintainer and do they have a history, right? What is the behavior of the package? So even before you get to the LLM stuff, just looking at what the core product does, you would catch a lot of that kind of stuff. A really concrete example: there was a package with hundreds of thousands of downloads, used by a ton of people, and it had code added to it that would open up a kind of protest pop-up in production on your front end, your website.
Starting point is 00:55:50 We see that it's calling window.open to do that pop-up, which wasn't there before, and that's sketchy behavior. So we call that out, and that's detecting the capabilities of the package. We consider capabilities to be everything from does it open a pop-up window to does it access the network? Does it read your files? Does it read your environment variables? Does it create child processes? You can't hide that, because it's literally
Starting point is 00:56:04 what it does. So you can't ask an LLM to hide that. It's not going to be possible to hide it and still be able to do it. Yeah. Yeah. I mean, you can definitely hide it in certain ways, but at the end of the day, the code has to run. It has to do the thing. All right. Feross Aboukhadijeh, thank you so much for joining me for that conversation, all about LLMs and how you're using them. Good stuff. Thanks so much, Pat.
Starting point is 00:56:28 That was Feross Aboukhadijeh there from this week's sponsor, Socket. Big thanks to them for that. And you can find Socket at socket.dev. And that is it for this week's show. I do hope you've enjoyed it. I'll be back soon with another edition of Risky Business. But until then, I've been Patrick Gray. Thanks for listening.
