Darknet Diaries - 127: Maddie

Episode Date: November 1, 2022

Maddie Stone is a security researcher for Google’s Project Zero. In this episode we hear what it’s like battling zero-day vulnerabilities.

Sponsors
Support for this show comes from Zscaler. The Zscaler Zero Trust Exchange will scrutinize the traffic and permit or deny traffic based on a set of rules. This is so much more secure than letting data flow freely internally. And it really does mitigate ransomware outbreaks. The Zscaler Zero Trust Exchange gives YOU confidence in your security to feel empowered to focus on other parts of your business, like digital transformation, growth, and innovation. Check out the product at zscaler.com.

Support for this show comes from Thinkst Canary. Their canaries attract malicious actors in your network and then send you an alert if someone tries to access them. Great early warning system for knowing when someone is snooping around where they shouldn’t be. Check them out at https://canary.tools.

Sources
https://www.sophos.com/en-us/medialibrary/pdfs/technical%20papers/yu-vb2013.pdf
https://www.youtube.com/watch?v=s0Tqi7fuOSU
https://www.vice.com/en/article/4x3n9b/sometimes-a-typo-means-you-need-to-blow-up-your-spacecraft

Transcript
Starting point is 00:00:00 I have a degree in software engineering. But can you remember a time in your life when there wasn't such a thing as software engineers? I can't. All my life, it's been a thing. But I bet my great-grandparents went their whole life without ever hearing about software engineering. So let's take a quick look backwards to find when software engineering popped into existence. In the 1950s, NASA was doing some pretty amazing things, flying spaceships to the moon and beyond. These spaceships were loaded with lots of technology,
Starting point is 00:00:33 antennas, radios, computers, cameras, software, and hardware. And that's just on board the spaceship. You've seen these giant command centers they have where mission control is. There are computers on everyone's desk and giant screens in front of the room. And there are dozens of scientists and engineers in the room. Yet not a single one of them was a software engineer because the term had not been used at any point in the 1950s. In the 1960s, NASA developed the Mariner space program. The goal here was to send unmanned spaceships to Mercury, Mars, and Venus to take photos of them. In 1962, the first Mariner spaceship was launched, and it was headed for Venus.
Starting point is 00:01:16 It didn't have anyone on board. It was controlled remotely, and on board were just electronics, antennas, computers, jet fuel, and cameras. But only a few minutes after launching, things started to go wrong. The computer onboard that was in charge of controlling the ship was acting erratic, giving all kinds of wild commands for the ship to do. The folks at mission control tried to correct the computer gone wild, but they couldn't do anything about it. Then they started to realize this rocket's not going to make it to Venus.
Starting point is 00:01:47 It's not even going to make it out of the atmosphere, and it might even crash into Earth and hurt someone. So the people at Mission Control decided there was no choice but to push the self-destruct button and blow up Mariner 1 over the Atlantic Ocean. That was the end of the Mariner 1 spacecraft, an $18.5 million ship blown up. So what happened? Well, scientists and engineers spent days replaying the events and logs that they captured after launch. A piece of hardware failed, which caused an onboard computer to kick in and try to control the craft.
Starting point is 00:02:26 But the way it was trying to control the craft wasn't right. Something was wrong with that computer. So they examined the code that was put on that computer. And that's when they saw the problem. A missing dash in the algorithm. A single missing dash. It's not like the dash you're thinking. It's more like a bar that was supposed to be above the letter R, which stands for radius. And that meant it should have been a smoothed value for radius. Without this bar, it was taking the current value for R. And since this rocket was trying to recover from some bad hardware, the values for R were bouncing all over. So the output of the program was bouncing all over. It should have been taking an average reading for R, not the wildly fluctuating values. So the computer was telling
Starting point is 00:03:09 the rocket to fly all crazy and out of control. The logic and algorithm that the scientists gave the programmer was correct. But whoever programmed that algorithm into the computer missed this little dash above the R. And because of that tiny little bug in the code, it resulted in the whole rocket being destroyed. When NASA makes a mistake like this, they try to find ways to prevent anything like this happening in the future. And they realized they were implementing software on a lot of systems, but had no way to test the reliability of that software. This is when it became clear that software engineering should be a discipline. And shortly after that, it started getting developed and became a thing. This software bug didn't just crash a spaceship,
Starting point is 00:03:59 but it launched a whole new field of study and new principles for designing, developing, and testing computer software. These are true stories from the dark side of the internet. I'm Jack Recider. This is Darknet Diaries. This episode is sponsored by Delete Me. I know a bit too much about how scam callers work. They'll use anything they can find about you online to try to get at your money. And our personal information is all over the place online. Phone numbers, addresses, family members, where you work, what kind of car you drive.
Starting point is 00:04:56 It's endless. And it's not a fair fight. But I realize I don't need to be fighting this alone anymore. Now I use the help of Delete Me. Delete Me is a subscription service that finds and removes personal information from hundreds of data brokers' websites and continuously works to keep it off. Data brokers hate them because Delete.me makes sure your personal profile is no longer theirs to sell. I tried it and they immediately got busy scouring the internet for my name and gave me reports on what they found. And then they
Starting point is 00:05:22 got busy deleting things. It was great to have someone on my team when it comes to my privacy. Take control of your data and keep your private life private by signing up for Delete Me. Now at a special discount for Darknet Diaries listeners. Today, get 20% off your Delete Me plan when you go to joindeleteme.com slash darknetdiaries and use promo code darknet at checkout. The only way to get 20% off is to go to joindeleteme.com slash darknetdiaries and enter code darknet at checkout. That's joindeleteme.com slash darknetdiaries and use code darknet. Support for this show comes from Black Hills Information Security. This is a company that does penetration testing,
Starting point is 00:06:07 incident response, and active monitoring to help keep businesses secure. I know a few people who work over there, and I can vouch they do very good work. If you want to improve the security of your organization, give them a call. I'm sure they can help. But the founder of the company, John Strand, is a teacher, and he's made it a mission to make Black Hills Information Security world-class in security training. You can learn things like penetration testing, securing the cloud, breaching the cloud, digital forensics, and so much more. But get this, the whole thing is pay what you can.
Starting point is 00:06:38 Black Hills believes that great intro security classes do not need to be expensive, and they are trying to break down barriers to get more people into the security field. And if you decide to pay over $195, you get six months access to the MetaCTF Cyber Range, which is great for practicing your skills and showing them off to potential employers. Head on over to BlackHillsInfosec.com to learn more about what services they offer and find links to their webcasts to get some world-class training. That's BlackHillsInfosec.com. BlackHillsInfosec.com.
Starting point is 00:07:16 Are you ready? Yep, sounds good to me. So what got you started? Hold on, let's start with your name and what do you do? My name is Maddie Stone and I am a security researcher focused on studying zero days that are actively exploited in the wild at Google Project Zero. We're going to get into what she does at Google, but I find that the path to get there is interesting. So when she was a teenager, she developed an interest in computers and after high school, went to college at Johns Hopkins University in Maryland. Yeah, so I actually double majored in computer science and Russian language and literature because I wasn't fully committed to this whole engineering thing. I didn't know if I would be bored during doing that. So I was like, let's learn a new language and ended up really enjoying the Russian and sort of just a very different way of
Starting point is 00:08:06 using your brain and classes and everything like that. And it allowed me to study abroad too, which I've always loved to travel. Whoa, this is crazy. So you know Russian? Well, I used to. I used to be good. But then you moved, like you studied abroad to where? So I did two months in St. Petersburg and four months in Moscow. And after graduating, got a job at the Applied Physics Lab at Johns Hopkins. Which is a government research laboratory. And that's where I ended up for the first four and a half years, studying or working on reverse engineering of like firmware and hardware.
Starting point is 00:08:44 It looks like a really cool place, actually. half years studying or working on reverse engineering of like firmware and hardware. It looks like a really cool place, actually. There are about 8,000 employees at this applied physics lab, and they take on research projects for the Department of Defense and NASA. So they get hands-on experience while doing advanced research. So I was also working with literal rocket scientists, if that doesn't, you know, keep your ego in check. And while working there, she simultaneously was able to get a master's degree in computer science, too. I was super fascinated by, like, the hacking portion. And, you know, when you see all these things but have never actually done it, it sounds, like, really sexy and everything like that.
Starting point is 00:09:18 And I had really, really loved assembly. I had actually listed that that was my favorite language when like, you know, they did around and did profiles of folks and interviews with different companies. They ask you and they're like, you love assembly? I was like, yeah. I became the teaching assistant for that course. And then as an independent study, created all new projects. That's very interesting to me. I too have an IT degree and I learned Java and C and C studied and created all new projects. That's very interesting to me. I, too, have an IT degree, and I learned Java and C and C++ and Visual Basic and all these programming languages,
Starting point is 00:09:51 all of which I could understand no problem. But when I took the assembly language class, I was so lost. It was the only IT class that I actually struggled with, and that's because it's so different than everything else. Assembly language is very low-level. A high high level language, you can see things like variables, if statements, for loops, and functions. But with assembly, you have commands like move, push, pop, add, subtract. Real basic and rudimentary stuff. A program that is just a few lines of code in Python can become 10 times longer in assembly.
Starting point is 00:10:26 But assembly has some superpowers. It can interact with memory and the CPU in ways that other languages can't. And it can be incredibly efficient too. You get much better control over the computer's resources. And you know what? You can go even deeper too, to an even lower level, and look to see what's going on in the hardware. You could open up the case of the computer, get out some probes, and jam them into the circuit board, and watch what electrical signals are moving through the circuitry. This is even more hard to read, because all you see at that level is whether the voltage is high or low. But having this kind of read-write access gives you really the ultimate power over your computer. And it was this low-level stuff that fascinated Maddie. It was like doing brain surgery to teach someone something
Starting point is 00:11:14 or to see how they think. A computer can't hide its thoughts when you're this deep into it. Another big reason she liked it was because she could break down any program into assembly. It doesn't matter what language a program is created in, you can run any compiled program through a disassembler and see the whole program in assembly language. A lot of applications and programs are compiled and in a sort of bytecode that's not human readable. And you certainly can't see the original code that was used to create it. So you
Starting point is 00:11:45 can't tell what so many programs actually do. But at the end of the day, the computer has to know what to do. And that bytecode can be converted into assembly so you can kind of read what's happening. So if you get good with assembly, you can get a much deeper understanding of how computers handle memory and processes, and you can decipher any program. It's just really hard to read at that level. It's kind of like reading a book, but you only get to look at one letter at a time, and the book only has 10 usable letters that make up all the words.
Starting point is 00:12:14 Anyway, getting better at assembly and learning more about hardware is what she spent four years doing at the Applied Physics Lab. And then one day Google calls and says, hey, are you interested in interviewing with us? Which I was pretty shocked about because as a student, I tried really hard to get any even interviews or calls
Starting point is 00:12:35 with all of the big tech companies, but I was not someone of interest to them. So I was very surprised to get the call and ended up going through the interview process and getting the offer to join the Android security team as a reverse engineer. A reverse engineer is someone who takes a program and tries to figure out what it does by sometimes converting it to assembly language and trying to make sense of it. And I mean, Google is where Android is made. So why would someone need to reverse engineer Android when they could just
Starting point is 00:13:08 look at the source code written right there in the same building? I was focused on all of the malware, you know, in the Android ecosystem. Oh, duh, that makes sense. The malware that's targeting Android is often compiled where you can't see the code that's used to make it. And Maddie's job was to reverse engineer and decompile some of this code and examine it for malware. And if it was malware, figure out what it's doing and then tell the Android developers how to fix this. And more specifically, I started leading a team that was focused on
Starting point is 00:13:39 finding any sort of malware or bad apps that were, one, potentially pre-installed on different OEM or manufacturer devices, you know, because there's like thousands of different manufacturers of Android devices, as well as looking at, can we find malware for all the apps that are off of Google Play Store? So, you know, in lots of parts of the world, there's apps that are passed around through different stores other than Google Play or they're peer-to-peer passed or things like that. So are there ways that we can still protect Android users
Starting point is 00:14:16 from those apps as well in figuring out what's malware and what's not? Okay, so I just got curious what kind of malware we're talking about here when it comes to Android. And I started looking some things up. One really popular virus going around is Gin Master. Apparently, there are millions of Android devices infected by this. And don't forget, Android is an operating system that's used on both phones and tablets. But this Gin Master malware, once it gets into a device, it will capture private data from the device and send that to an external server. It can also give attackers access to that device. Gin Master is
Starting point is 00:14:50 clearly something you'd never want on your phone or tablet. So why does it exist on millions of devices? Well, the way it often gets onto a device is that it gets tacked onto another app. And it's typically a bad app that a user is tricked into installing. A common strategy is to make a lookalike app of a popular game out there. This is to trick people into thinking that they're getting the app that they want, but it's not the real one. And then when someone downloads it and installs it, not only do they not get the app that they want, but they get infected with this Gin Master malware. So at the end of the day, it's actually a user who downloads and installs the virus. They just don't know it's
Starting point is 00:15:31 a virus. And when a device is infected with it, it can steal user data, take control of the device, or install more malicious stuff. So it's malware like this that gets sent to Maddy for analysis, and she can flag apps like that to warn Android users that this app contains malware. And specifically, the way Android apps are packaged is in something called an APK file, which stands for Android Package. Yes. So we find an APK file, which is basically just a zip file with all of the different components of an Android app. Not all Android apps are written in Java, but I think it is the most common language
Starting point is 00:16:09 they're typically written in. And what's nice for Maddie is Java apps can be decompiled pretty easily. And you can see a pretty close picture to how the original program looked. So she doesn't need to break it down into assemblies. She can read through what it's doing close to its original format, making it a lot easier to understand. But it's not always this easy. Sometimes hidden in the Java is additional compiled programs. Yes, so that's where some of the more sophisticated malware authors
Starting point is 00:16:38 would try to hide some of their behaviors in native libraries within the APK file. So these are compiled C or C++, which once it's compiled, it's in machine code, which we can disassemble to assembly code. And of course, this is where Maddie shines. She can read this assembly language to understand how the malware does what it does, then reports it to the Android team to see if there's anything they can do to protect users from malware like this. Yeah, so the first thing is we had to put flags into sort of the Google Play Protect system
Starting point is 00:17:22 because the number one thing is you want users to be alerted to given the option to remove it or disable the application from their device. And the next step is really writing automated solutions because especially when you're in a malware team and there's always more apps or samples to look at than there are humans to analyze them. So the goal is always that it's only ever reverse engineered once in that terms.
Starting point is 00:17:49 And that then after you've reverse engineered it once, then there's software automated solutions that can find all the other copies that may come out of that. So that's really the process is analyze it, figure it out, flag it so users were protected, and then figure out automated solutions. So tell me a story about maybe some interesting malware that you found or landed on your desk and you're like, all right, I'll take a look. And whoa, this is crazy, this stuff. So one of the biggest malware families that I was not expecting. And it ended up into like a year plus investigation was what we called Shamwa. It took a lot of practice to learn how to pronounce that
Starting point is 00:18:33 correctly, but it was a large botnet. And what was really interesting and how I got into it is that this application, which is usually written in Java, had this native library, so C or C++ compiled code in it. And as I kept trying to dig into this native library, it became obvious that it was heavily, heavily obfuscated, as well as doing an incredible amount of anti-analysis and anti-debugging checks. So it was very sophisticated in sort of trying to monitor, like, am I being monitored and analyzed by a security engineer, or am I running on a real device that I can infect? And I ended up diving into, I think it took like over a month
Starting point is 00:19:19 or a month and a half to really dive into all the aspects of that native library. And then when I started looking for other apps with similar native libraries, it became clear that it was this botnet and this family of malware that was doing some pretty sophisticated stuff. One of the funniest anecdotes to me is that I actually presented on that native library at Black Hat. Yeah, so in 2018, Maddie came on stage at the Black Hat Security Conference and showed everyone in the audience the exact techniques that this malware was using. So what are all these different techniques that we're going to talk about? What makes it so interesting? First, we're going to start about some of the JNI or Java Native Interface manipulations. Then we're going to go into some places where they've used anti-reversing techniques,
Starting point is 00:20:10 in-place decryption, and finally to about 40 different runtime environment checks that they use. And I think it was less than 24 or definitely less than 72 hours later. We saw the malware authors changing different aspects and characteristics of this library that I had just presented on. So they only changed the characteristics and techniques I had discussed in the Black Hat presentation. So that presentation hadn't been streamed
Starting point is 00:20:40 or anything like that. So that was very fascinating to see. Whoa, yeah, that is interesting. This means either the malware authors or someone who knows the malware authors were at her talk, watching her, taking notes on how she's able to detect their malware and then rushing back to their computers
Starting point is 00:21:00 to update their malware to make it harder for the Google team to detect it. And see, this is the thing about Maddie. She seems to be on this mission to update their malware to make it harder for the Google team to detect it. And see, this is the thing about Maddie. She seems to be on this mission to make it harder for malware makers to do what they do. She gets in their heads and learns where and how they're hiding so she can shine a big old spotlight on it and make them scatter. Her goal is to make it easier for people to find malware and at the same time make it easier for people to find malware and at the same time,
Starting point is 00:21:26 make it harder for someone to make malware. So one day I had a new calendar invite in my inbox from Ben Hawks, who was the longtime lead of Project Zero. And we had never met before. And he said, hey, I just wanted to chat about this potential new role and sort of experiment for Project Zero. Oh, wow. Project Zero was trying to steal her. That's pretty cool. This is a very talented team within Google, which focuses on finding zero-day vulnerabilities. Yeah. So Google Project Zero is a team of sort of applied security research with a mission of make zero-day hard. But the key thing here is this team of sort of applied security research with a mission of make zero-day hard. But the key thing here is this team will look for bugs in any software,
Starting point is 00:22:11 not just Google's products. I think the idea here is that Google users don't just exclusively use Google products. Yeah, so if you think about it, to protect, say, Google Chrome users or Gmail users or things like that, a lot of Google users can be attacked through vectors other than just the Google products. So whatever operating system you're running Chrome on, for example, if that has vulnerabilities, then that could be a way to hack those users.
Starting point is 00:22:42 Or back in 2014, you know, Flash was one of the biggest ways to attack people via the web. So doing a lot of research and vulnerability research into Flash would ultimately help protect Chrome users. So the team at Project Zero looks for zero-day vulnerabilities anywhere. Oh, and zero-day vulnerabilities are bugs that the software maker doesn't yet know about,
Starting point is 00:23:05 which also means the defenders don't know about it either, and they can't defend against this kind of bug. Now, if the Project Zero team finds a bug, they tell the vendor to fix it and then start the timer. If 90 days goes by and the vendor doesn't fix it, Google will publish this bug publicly. Anyway, this was the team who approached Maddy. So his hybrid role would be not just for me to not just be a vulnerability research, but sort of combine this threat and tele malware analyst side of it. And I would use the starting
Starting point is 00:23:41 point of zero days that are actively exploited in the wild. So not just hunting zero days that attackers could theoretically be finding, but instead having my starting point be the exploits that are actually used. Yeah, I get it. If the goal of Project Zero is to make zero days hard to make, adding a reverse engineer to the mix really boosts the potential research that can be done. Now, instead of just looking for unknown malware out there, you can feed known malware to Maddie and she can digest that and come up with patterns to look for more malware that's out there. It's sort of approaching finding malware a totally different way. And combining these forces makes them more effective. So she took the job
Starting point is 00:24:25 and joined Google Project Zero. So I really came into this team with not a lot of knowledge and just this basic idea from Ben that he told me, take it and run with it and figure out what makes sense. So I did not really have any Windows, iOS, browser, et cetera, vulnerability research experience. My experience prior to Android had been on hardware and embedded devices, which doesn't tend to be the biggest targets of interest for Project Zero. And so it was a lot of learning. But we started off sort of off big in that I joined the team in July of 2019 and Google received information that the commercial surveillance company NSO had this Android exploit that
Starting point is 00:25:18 they were using to target Android users in their delivery of Pegasus, the piece of sat spyware that has been all over the news lately. And we actually got sort of some like marketing details about this capability. And so my first job was taking all of those details and seeing if I could figure out what the bug was so that we could patch it and, you know, break the capability. And so I was digging through all the different Android source code, Linux kernel source code, trying to figure out what is this bug and somehow managed to figure out exactly which bug it was because the details we were given happened to line up that there was only one vulnerability that potentially matched every single detail we were given.
Starting point is 00:26:07 So that was a pretty wild first bug to report and put into the Project Zero issue tracker. We reported it to Android under a seven-day deadline instead of the 90 due to a high probability that it was being actively exploited in the wild. And then wanting to show that it could be exploited, I partnered up with Jan Horn to write a proof of concept, not just triggering the vulnerability,
Starting point is 00:26:35 but actually showing a way to exploit the vulnerability and how it would be useful to get or to use in, say, the Pegasus chain. So that was quite the wild week. For Maddie to identify how Pegasus software is used in Android and then to come up with a working proof of concept exploit all in a week, that's amazing. That's like finding and squashing a million-dollar bug. Seriously, there are companies out there who are willing to pay a million dollars
Starting point is 00:27:04 for a bug like this because it's so valuable to certain people. a million dollar bug. Seriously, there are companies out there who are willing to pay a million dollars for a bug like this because it's so valuable to certain people. Pegasus is the spyware used by NSO, which is a company based in Israel who sells the spyware to different countries around the world. And it's quite expensive to buy this Pegasus software. And so when Maddy discovers how it's used and makes it no longer usable, it must make NSO angry. Now they have to rip out their existing way of exploiting phones and find a new way to do that, which isn't so easy. But this is Project Zero's goal, to make it harder for exploits to be out there.
Starting point is 00:27:41 And if a company has a whole business model of selling malware and exploits to countries, then yeah, they'll be impacted by this. And it'll mean the price of Pegasus will go up since it's harder to find these vulnerabilities. Generally, it is nation state actors who are using zero day exploits. And they're generally using these zero days against human rights defenders, journalists, minoritized populations, politicians. And so while every human, you know, doesn't necessarily need to be worried about being attacked with zero day exploits, all of us are generally impacted when they're used. When journalists become scared or unable to write the truth that they find and that human rights defenders are being targeted so
Starting point is 00:28:34 fewer people are scared to stand up and speak out or minoritized populations are being targeted or critical infrastructure companies and things like that, that does ultimately impact us all. If you want to know more about this, I did a whole episode on NSO. That's episode 100. You'll hear how they sell software to countries and then those countries turn around and use it to attack civil society. And of course, nation state actors aren't always abusing their power. They do use their abilities to stop terrorist attacks and criminal activity. But at the end of the day, the measure of any technology is how it winds up getting used against vulnerable people, not just how it helps. So if there are zero-day
Starting point is 00:29:16 vulnerabilities out there that are being used to target innocent people, then finding those and fixing them will help civil society be more secure. And it's kind of wild to me to think that Maddie here is trying to disarm nation-state actors by finding what weapons and exploits they have, and then once discovering it, getting it fixed so it can't be used to exploit people anymore. Has there been any threatening reactions to this? Like, I can imagine NSO Group being pretty upset after your first project there and being like, okay, Maddie is now on our list. Like, do you ever get any weird stuff?
Starting point is 00:29:55 Well, it was actually very strange of in January of 2020, I was invited to the conference Blue Hat Israel. And so I went and there were actually two people who came up to me and their badges said they worked for NSO. And they said and they asked me questions about why I chose the techniques I did. And so that was a very strange interaction overall. But one of the more anxiety producing was back in, I believe it was 2021, Google tabbed the threat analysis group discovered that North Korean hackers were targeting security researchers, including, you know, security researchers from Project Zero in the hopes of trying to steal the zero-day exploits from security researchers to use in their campaigns. So being personally, you know,
Starting point is 00:30:55 or personally, I mean, you know, in the population of folks targeted is a rather frightening aspect of, but it also just gave a lot of empathy for people doing the real hard work and are often targets of the nation state attackers using zero days. Yeah, so some other philosophy here is like NSA is in the business of finding zero days and using them as weapons. And sometimes, you know, one of the nation states that you're going up against is your own nation. Do you get like cross-conflicted there or how does that feel to you?
Starting point is 00:31:38 I don't think so because the vast majority of the time we have no idea who is behind a bug. Also, because you're just working so quickly that like people don't usually have attribution, you know, immediately. They just, if attribution even comes out, like the threat intel experts are usually, you know, three to six months behind. So there's never sort of that conflict because all we get is here's an exploit sample or here's a patch dip and the bug was labeled in release notes. So I've never really felt conflicted in that way because there's no way to know. All you know is that people are being
Starting point is 00:32:19 harmed. So that would sit even worse with me to not try and get it fixed. Yeah. We're going to take a quick break here, but stay with us because we're going to hear more from Maddie when we get back. is more important than ever. I recently visited spycloud.com to check my darknet exposure and was surprised by just how much stolen identity data criminals have at their disposal. From credentials to cookies to PII. Knowing what's putting you and your organization at risk and what to remediate is critical for protecting you and your users from account takeover,
Starting point is 00:33:01 session hijacking, and ransomware. SpyCloud exists to disrupt cybercrime with a mission to end criminals' ability to profit from stolen data. With SpyCloud, a leader in identity threat protection, you're never in the dark about your company's exposure from third-party breaches, successful phishes, or info-stealer infections. Get your free Darknet Exposure Report at spycloud.com slash darknetdiaries. The website is spycloud.com slash darknetdiaries. Earlier this year, in 2022, Maddy saw that Apple patched a bug in their
Starting point is 00:33:38 WebKit product. This is the browser engine that Apple's Safari browser uses. And there was a pretty big vulnerability discovered in it, but the patch notes were a little vague. So Maddie started to try to learn more. And when I started digging into it, one of the ways that I also analyze when it's just a patch dip, I don't have any other information, is for open source software such as WebKit,
Starting point is 00:34:04 I will look at sort of the history of that file in the areas that they patched, or it's called the git blame of it sort of tells you when did this line appear or when was this source code line last changed. And what I ended up figuring out was that this was sort of a zombie bug and that it had actually been originally fixed back in 2013. But then the bug was reintroduced because that patch was regressed and undone in 2016. And then here we were in 2022 with the bug exploited in the wild and patched again. Why do you think it regressed? So I did a deep blog post into this,
Starting point is 00:34:50 really trying to understand, and it was actually, it became sort of a team effort because all of us were really interested in trying to understand how did this happen? There was also a very interesting sort of overlap of my teammate Sergei Glazunov was actually the original reporter of the bug back in January 2013. And was actually reported to Chrome because at that time, Chrome was still built on top of WebKit as their browser engine. They didn't split off until 2014, I believe.
Starting point is 00:35:23 And so he was jumping in and looking at it with me. So were some of my other teammates, like Mark Brand. And what it looks like overall is that they were trying to change, sort of do a refactoring to one, make it more performant. And through that, that meant there were some really huge patch changes. And just based on the structure sort of of security teams and reviewing, a lot of times folks aren't really given a huge amount of resources and time to scroll through and look at line by line, like, what are all these changes that are being made and things like that. It's got to be quite the embarrassing feeling to find that your code had been vulnerable for seven years
Starting point is 00:36:09 and you're just now discovering it. It makes you stop and wonder, who all knew about this? Is it possible some advanced hacking group or nation state actor had known about this and was using it to take over people's browsers when they needed to? It's hard to tell, and we'll never know. Back in the fall of 2020, we discovered some exploit servers and just happened to discover
Starting point is 00:36:41 that they were delivering us exploits on different devices and different browsers. And in that case, you know, you're generally first just getting the first stage exploit and then some sort of fingerprinting script maybe or something like that. And so we were like, oh my goodness, like this is giving us exploits and our devices are fully patched. Like what the heck is going on? This must have been a very exciting day to find that there's a server out there in the world that is able to remotely attack a device and exploit it in ways that are just not stoppable? For a security research team like this, it's a big moment. You want to quickly try to capture as many exploits as you can from their server,
Starting point is 00:37:31 and then analyze them and see exactly how they're infecting devices, so you can get them fixed. So in this case, it was a watering hole attack, where a watering hole attack is if you go to a website and it is just going to try to infect anyone who goes to this website. So that was sort of the case here of, oh, this is weird. Suddenly, this is very weird traffic. And, oh, that's an exploit and that's a fingerprinting script. What did we stumble upon here?
Starting point is 00:38:04 And this website had active traffic and users coming to it. So Maddie and the team at Project Zero knew that people were actively being hacked right now when they were visiting the site and wanted to move as quick as possible to stop any more people from being infected. And so that was where we all really came together and we're working through weekends and long hours to first get as many of the exploits as we could and then teaming up, tearing them apart, getting around the obfuscation, trying to figure out what exactly is the bug that is being exploited here and getting those reported and working with the vendors to get those patches out as soon as possible. So they were able to squash any bugs that Google was responsible for
Starting point is 00:38:52 and then get all the other vendors who had bugs to squash them too, which made this website no longer effective at being able to exploit updated devices that had come to visit the site. And this is why I'm always telling you to patch your software. Always update your operating system and any apps you have if there's an update available because it makes it harder for someone to hack into your stuff. So, I mean, did you ever figure out who was doing this?
Starting point is 00:39:19 Like, was it a nation-state actor or who your thoughts were that would want to, you know, run this kind of attack? So we assume that it is a nation state actor just because the sheer volume of zero days and the sophistication behind the zero days, it seems rather unlikely that anyone other than a nation state actor would want to have access and be willing to use that number. I believe that when we looked at it, it was approximately, I believe, 11 zero days that the actor had used over the course of a year. So that definitely would make me think nation state, but no, I do not know who was behind it.
Starting point is 00:40:09 And I also, I am not an expert in attribution, but I have not seen or heard any definitive answers on who the threat researchers and threat intel experts believe was behind it. Whoa, 11 zero days? That's amazing. To make a zero-day vulnerability takes quite a bit of time and skill. This isn't some simple social engineering attack or some off-the-shelf malware. Each of these 11 zero-day vulnerabilities
Starting point is 00:40:35 were something that took a lot of resources to find and to turn into a usable exploit. On top of that, the way these exploits were chained together was incredibly sophisticated. So because it takes so many resources to develop and weaponize that many bugs, then that's why Maddie thinks it was likely some kind of nation-state actor. This is beyond the capabilities of a cybercrime group or hacktivist group. If you can use a less sophisticated form of attack to get access to whatever you need,
Starting point is 00:41:08 then that will always be the choice. If your device, if your targets are insecure and say, you know, they'll fall for phishing, then that's the easiest route. And that's what you'll take. If your targets don't keep their devices up to date, and thus you can use a end day exploit, that's what you'll take. So these zero days are when, one, you really don't want to leave a trace because people don't know what this bug and exploit will look like, and you're targeting entities or individuals who
Starting point is 00:41:38 probably have some pretty good security hygiene and posture. And those are often going to be people who know their targets, such as our human rights defenders and journalists, etc. Hmm. So the way I understand it, nation-state actors typically have a few different objectives. It could be intelligence gathering, like hacking into another nation and stealing information. And it could be disrupting the enemy, like deleting the servers that a terrorist organization uses. But we've also seen nation states participate in cybercrime and hacktivism. North Korea has been hacking into banks and stealing money from them. And China has been hacking into U.S. companies to steal their intellectual property. But we've also seen
Starting point is 00:42:20 China hack into the Gmail accounts of human rights activists to try to stop them or figure out what they're up to. And we've seen the UAE hack into human rights activists' phones to track them and arrest them. And of course, Russia is meddling with elections and even sabotaging the Olympics in some weird ways. So there's a big spectrum of what governments are doing out there in the mean streets of cyberspace. And I don't know about you, but to me, trying to figure out this space, it gets blurry fast. What's good? What's evil? Some things are clear, but others not so much.
Starting point is 00:42:56 Like when a country hacks into and spies on another ally country. Why? Because they don't trust their ally? Because they want more information than what their ally is willing to give them? And what happens when they do find out that their ally has some nefarious plan? Do the ends justify the means? It gets tricky. And I imagine the weight of who you may be helping and who you may be hurting must weigh on Maddie as she does her work. Of course, I don't think anyone who's in this industry or business can't help but think about sort of the philosophy of it. And so for me, it feels pretty easy.
Starting point is 00:43:37 And I hope I'm on the good side of, I want people to have safe and secure access to the internet, whether it's, you know, just their data, their device, and everything like that. So the case and the part of that safe and secure that I am currently able to hopefully make the biggest difference on is in the zero day and zero day exploit space. But, you know, previously, I was trying to accomplish that with making sure, you know, every Android phone didn't have malware on it.
Starting point is 00:44:11 So that's sort of my guiding principle is I think the world would be a pretty amazing place if everyone could access and connect to all this amount of information and education and everything like that. If with safe and security and everything like that, if with safe and security and know that their privacy is protected. So, yeah.
Starting point is 00:44:39 It's nice that Maddie has a good ethical mindset to all this and is helping us all become more secure. But just keep this in mind. There are people just like Maddie who work for the bad guys, doing exactly what she's doing, looking through patch notes and trying to figure out what exploit just got fixed to see if there's anything the vendor missed or some sort of related bug. And then once they find a bug, they'll develop it into an exploit and weaponize it instead of getting it fixed. And that just makes me think, okay, if there are like enemies and allies out there where countries are hacking into each other,
Starting point is 00:45:10 then what does that make Maddie? An enemy or an ally? Or is there some kind of third faction out there? Also, NSA stands for National Security Agency. Their job is to ensure the U.S. is secure and is able to send secure communications without our data getting into the enemy's hands. So you'd think that if the NSA has found a way to bypass the security of something, they'd want to find a way to get that fixed right away to ensure that the software used by hundreds of millions of Americans is secure, right? But despite that the NSA spends millions of dollars on finding and developing vulnerabilities, they don't report that much to vendors. We have seen them report some things, sometimes, but it's often under suspicious reasons, like when the shadow brokers claimed they had NSA exploits, NSA told Microsoft to patch a certain bug right away.
Starting point is 00:46:07 And there were other bugs that the NSA reported, which made me think that they might have intelligence that some other enemy nation might be actively using that exploit to hack into our stuff. And it becomes even more difficult to navigate all this when so many of the tech giants are also U.S.-based? I'm not saying there's any sort of collaboration between the NSA and the U.S. tech giants, but it makes sense to me that there is a closer relationship than other nations might have with U.S. tech companies. I kind of see
Starting point is 00:46:40 it as sort of an arms race. While nation states around the world want more exploits and zero-day vulnerabilities to carry out their objectives, Maddie is over here trying to neutralize those and build up the defenses for everyone to be able to defend against nation states better. I don't really think of it as a race unless we're talking maybe in single vulnerability case like, oh, we know this bug is being exploited. It needs to be fixed as fast as possible. That's really the only area that I sort of view as a race. Maybe also around the, this was just patched and we want to make sure that the patch is sufficient. We complete variant analysis before the attackers are able to. But at a longer haul, I don't think of it as a race as much as making smarter decisions. Because ultimately,
Starting point is 00:47:34 what we want is that it is so difficult, so expensive, requires so much expertise that attackers really hold onto their zero days close to the vest. And they're so valuable to them that they only use them in really, really special cases. I think we're still at the point now that yes, while it tends to be a smaller population of people targeted globally, I think we're still seeing too broad usage of these zero days to believe that attackers find them as valuable as we would hope. And so that looks like making it that much harder for them to find vulnerabilities. So let's say they cannot use variants of a previously public vulnerability. They instead have to come up with their own. They have to come up
Starting point is 00:48:31 with a whole new bug class that we've never seen before, not using these use after freeze and buffer overflows. They're not able to use a public exploit technique that someone has, we've seen before, or they use before and just want to plug and play a new vulnerability in because we as an industry are not only fixing the vulnerability, we're mitigating the exploit technique. They don't need three zero days, they need six now to maintain the same capability they have before. That's really the way I think about it and what makes me hopeful, where I know a lot of people can feel down in that zero days are this sort of impossible
Starting point is 00:49:10 problem to solve, is the exciting part is iterative progress. We will see the return on investment from. So it's not that you have to do steps A through J, and that's the only time you will begin to see this return on investment, every little step we take forward in this to make it just that much harder, they just can't use the variant on this bump. We fix this exploit technique. Every single one of those actions make it harder. So that's sort of the way that I view this whole problem. So with all this effort, is it working? Is Project Zero actually making it harder for people to make zero-day vulnerabilities?
Starting point is 00:49:51 I think on a long scale, like definitely since 2014, zero-day has become harder. But I think what's hard is that to me, at least, it's pretty obvious that it's not hard yet. Like for example, for the first six months of the year through 2022, what was it? There was a huge percentage of the zero day in the wild zero days were variants of previously patched bugs. Okay. 50% of the in the wild zero days from 2022 as of mid June were variants of previously patched bugs. That makes it really hard for me to look at of we had chances to block one in two of these zero days that we as an industry didn't take. And 27% or 22% somewhere in that range of the in the wild zero days from 2020 or even variants of in the wild zero days from 2021. So the attackers could come back,
Starting point is 00:50:50 you know, less than 12 months later and just use a variant of the bug again. So I think there's, I'm more focused on what we can do and the opportunities we have rather than smirking at the news as much. But of course, we've got to take the wins when we can get them. Do you course, we've got to take the wins when
Starting point is 00:51:05 we can get them. Do you use this term before private state of the art versus public state of the art? What does this mean and how does it apply to you? So in vulnerability research, publishing, like what's the new attack surface? What's the new bug class or exploitation technique that we consider state of the art in terms of novel, a great way to bypass new exploit mitigations, et cetera. And so offensive security researchers like my team, we published a lot to show, oh, we found this new way to bypass X to help show this is why it has to get fixed. And this is where its weaknesses lie. And so that would be the public state of the art because it is offensive security researchers talking about it publicly of this is where these techniques stand right now. Private state of the art is,
Starting point is 00:52:00 but what techniques do the attackers actually have? And so part of the reason why I focus on zero days that are actually exploited in the wild is because it can help us close that gap between public state of the art and private state of the art. Because a lot of time we use public state of the art to help inform what is the next area of research that we should focus on. But if that's diverging too far from what the attacker is actually doing, then this research is not as useful to us because we're not having what we call those collisions with attackers and trying to fix bugs and vulnerabilities, or we're not putting our resources in areas that are super useful. So that's what we mean by when we say or I say public city art versus private.
Starting point is 00:52:48 That's a really interesting concept to me. We know what's out there when it becomes seen, but we don't know what hasn't been discovered yet. And what hasn't been discovered yet could be a hugely overlooked use of technology or capability that we just haven't been creative enough to imagine that scenario. So it becomes almost a theoretical question. What theoretically could attackers do today? And how can we look into those areas to try to figure out what they are working on to stop them to make us all more secure? Well, one of the things I think is most promising is that in 2021, there were the most in the wild zero days ever since we've been tracking since mid-2014 detected and disclosed as in the wild.
Starting point is 00:53:39 And that might sound sort of not make sense why I think that's promising to some people. But I think it is because I didn't say we can't track the number of in the wild zero days used. We can only track the number of zero days in the wild that are first detected by someone and then disclosed as, hey, in the wild. If folks are finding them and reporting them to other vendors and never saying, hey, this is in the wild as well, it's not just another volume, then there's no way for us to know about it. So I do think in the last three or so years, there have been huge improvements across the industry of people working on detection and trying to find zero-day exploits, not just brushing it off and saying this is an unsolvable problem. And I'm also really hopeful of the trends and transparency around these. I think there's still plenty of
Starting point is 00:54:38 progress to make in the transparency space around these zero-day vulnerabilities and exploits. But I'm hopeful that we're having more and more vendors transparently disclose when something is being actively exploited, that some vendors are making it easier to figure out which patch in open source software goes with a CVE and giving more robust descriptions of it. And my hope is then we get to areas where they're doing these detailed and publishing root cause analyses and doing more variant analysis on their own rather than sort of third parties like myself and my team and some other security researchers coming in and doing that work.
Starting point is 00:55:29 Yeah, I think I would like to see on my phone whether or not I was exploited. If there's some sort of Play Protect feature that says, oh, we've updated this. Oh, wow. Somebody was actively exploiting you. Big notice there. I want you to know. Yeah. I think that would be super interesting. And that is one area that's been growing of lots of different researchers trying to figure out what type of forensics do we look for? These are sophisticated actors, so they're also pretty good at cleaning up traces. And zero-day explates don't always leave a lot of traces. So how do we figure out if someone had spyware running on their phone, if they had an exploit delivered to their computer or device?
Starting point is 00:56:18 And Citizen Lab and Amnesty International are also doing some really awesome work in this space as they also work closely with the targeted populations. Thank you. Jack Recider. Editing helped this episode by the reverser, Damien. Mixing is done by Proximity Sound. And our theme music is created by the botnet known as Breakmaster Cylinder. I saw a really big cell tower the other day and I just like walked up to it and I looked up all the way at the top and I was like, whoa, that's really high tech. This is Darknet Diaries. Thank you.
