The Changelog: Software Development, Open Source - Big breaches (and how to avoid them) (Interview)

Episode Date: March 24, 2021

This week we're talking about big security breaches with Neil Daswani, renowned security expert, best-selling author, and Co-Director of Stanford University's Advanced CyberSecurity Program. His book, Big Breaches: Cybersecurity Lessons for Everyone, helped to guide this conversation. We cover the six common key causes (aka vectors) that lead to breaches, which of these causes are exploited most often, and recent breaches such as the Equifax breach (2017), the Capital One breach (2019), and the more recent SolarWinds breach (2020).

Transcript
Starting point is 00:00:00 This week on The Changelog, we're talking about big security breaches with Neil Daswani, renowned security expert, best-selling author, and co-director of Stanford University's Advanced Cybersecurity Program. His book, Big Breaches: Cybersecurity Lessons for Everyone, helped to guide this conversation. We cover the six common key causes, aka vectors, that lead to breaches, which of these causes are exploited most often, and recent breaches such as the Equifax breach in 2017, the Capital One breach in 2019, and the more recent SolarWinds breach in 2020. Big thanks to
Starting point is 00:00:31 our partners Linode, Fastly, and LaunchDarkly. We love Linode. They keep it fast and simple. Check them out at linode.com slash changelog. Our bandwidth is provided by Fastly. Learn more at fastly.com and get your feature flags powered by LaunchDarkly. Get a demo at LaunchDarkly.com. Linode is simple, affordable, and accessible cloud computing that developers trust. Linode is our cloud of choice.
Starting point is 00:00:59 We trust them, and we think you should build anything you're working on, a fun side project, or that next big idea at work, with Linode. The best part, you can get started on Linode with $100 in free credit. Get all the details at linode.com slash changelog or text changelog to 474747 and get instant access to that $100 in free credit.
Starting point is 00:01:18 Again, linode.com slash changelog. So we are here and excited to talk about some big security breaches, cyber security breaches with Neil Daswani. Neil, thanks for coming on The Changelog. Thanks for having me. It's a pleasure to be here. Well, I thought I would steal a couple of facts from your book to set the stage here. A couple of things you say right in the beginning. You say, in a series of breaches, key background data of over 20 million U.S. government employees and a large fraction of U.S. consumer financial and social media records have been stolen.
Starting point is 00:02:05 And in the past 15 years, more than 9,000 data breaches have occurred. This is something that's going on all the time, isn't it? Yeah, that's right. If we go back to 2015, for instance, the government's Office of Personnel Management was breached. And that's the breach in which the 20 million government employees' identities were stolen. But that's just one of many, many breaches. And if you could kind of go a little bit earlier in that paragraph, I talk about America has been hacked.
Starting point is 00:02:35 But the hacking of America has not been a singular event. It's through a series of breaches, like the Office of Personnel Management breach that targeted government identities, and like the Equifax breach, in which the consumer financial records of over 140 million Americans were stolen. If we look at some of the abuses and breaches at Facebook, a large volume of social media data about consumers has also been stolen. So you put all these things together, and it really makes up an attempt at hacking the country overall.
Starting point is 00:03:15 So let's rewind back to 2007. You were working at Google, and you co-wrote this book, Foundations of Security, which was focused on web app vulnerabilities. And back then you saw that security on the internet was bad and going to get worse. But then you say you wouldn't have been able to predict how bad it was going to get over the next 13, 14 years. And so you've cited a few things, but maybe just in plain words, just how bad is it? I mean, are we screwed or what?
Starting point is 00:03:53 So back in 2007, back when I was an engineer at Google, the main concern that my co-author at the time, Christoph Kern, and I had was that software vulnerabilities could be used to conduct cross-site scripting attacks, SQL injection attacks, and plague a whole bunch of online properties. At the time, MySpace had gotten taken down for a few hours because someone wrote a worm that spread through the social network so fast and affected so many millions of profiles that they had to take the service down in order to deal with it. Another thing that was happening back at that time is worms, worms like Code Red and Nimda and SQL Slammer, typically written by maybe one author, an amateur, you know, caused a lot of disruption on the internet. And so, you know, when I joined Google, Christoph was one of the folks, my father was one of the folks that influenced me to join the company. And after I joined the company, I had the absolute pleasure of meeting people like
Starting point is 00:04:55 Vint Cerf. Vint was one of the two co-inventors of TCP/IP, the set of protocols on which the internet runs. And, you know, serendipitously, we identified that I was his academic grand-student, because he was on my PhD advisor's reading committee back when my PhD advisor was getting his PhD. And Vint was also concerned about how software vulnerabilities could be used to take down online properties or result in malware propagation. And so he was kind enough to write the foreword for the book. And that was what we were concerned about at the time. And I think what we've seen now, fast forwarding to 2013 and afterwards, given the number of mega breaches that have taken place, it's pretty clear
Starting point is 00:05:48 that software vulnerabilities and malware are only two of the root causes that have led to these breaches. If we look at other major causes of breaches, things like phishing, unencrypted data, inadvertent employee mistakes, and third-party compromise and abuse have grown to be additional root causes that have resulted in even bigger breaches than the kinds of things we were worried about back in 2007 when I was an engineer at Google. So how did we get here? Was it just focusing on too little? Because like you said, there's six different causes or vectors, and maybe the focus of the InfoSec community and those in software was on trying to solve or route around these particular two things, when it was a much bigger surface area that we weren't securing. I just wonder how, from 2007 to today, we got to this point where there's been so many breaches, and not just minor breaches, but these mega ones.
Starting point is 00:06:52 And they all seem to happen for different reasons. How do you think we got here? So the way that we got here was a gradual sort of thing. When we look at things like phishing, for instance, phishing was an issue prior to 2007. The word phishing was first coined on a newsgroup called AO Hell, America Online Hell or whatnot, in, I believe, the late 90s. And phishing was always a concern because of the fact that the initial protocols that the internet was built on, the email protocol, for instance, SMTP, the Simple Mail Transfer Protocol, would allow anybody on the ARPANET, the predecessor to the internet, to send an
Starting point is 00:07:39 email. Anybody could send an email to anybody else claiming to be whoever they wanted to be because all the organizations, the initial universities and military organizations on the ARPANET trusted each other. But as the internet got commercialized, phishing started getting used more and more. It was initially used to try to lure people to fake banking sites, for instance, and try to get people to enter their username and password credentials for banking sites. But what we've seen is that phishing has also evolved. Many attacks that take place these days are spear phishing attacks, where the attacker wants to break into an organization.
Starting point is 00:08:21 They figure out who the administrative assistant to the CEO is. They figure out how the email addresses are crafted, and they send in these phishing emails to them. So phishing was always an issue, but in terms of what you could do with phishing attacks, it grew over time and has led to bigger and bigger breaches. Now, we talked about software vulnerabilities; unencrypted data has also become more of an issue. You know, back in 2003, when California was the first state to pass a data breach notification law, the law was structured such that if somebody's name and some sensitive identifier about themselves was inadvertently exposed or stolen, then that needs to be reported as a breach. And so there have been a whole bunch of breaches due to unencrypted data
Starting point is 00:09:12 that have been getting reported since 2003, but most of them have been smaller in nature. I would say that if we look at another one of the root causes, third-party compromise and abuse, that really started becoming an issue in 2013. So when Target got breached back in 2013 and over 40 million credit card numbers were stolen, the attackers initially broke into a company by the name of Fazio Mechanical Services, which ran the heating and air conditioning for all of the Target retail stores, and a bunch of other retailers as well. The attackers stole network credentials for Fazio Mechanical Services.
Starting point is 00:09:57 And then because the Target and Fazio networks were tied together, it was one flat network, the attackers were able to pivot from Fazio's network into Target's network. If we look at just the following year, in 2014, the JPMorgan Chase breach occurred because they had a third party by the name of Simmco Data Systems that ran a website that was used to manage their charitable marathon races. The attackers leveraged a vulnerability at Simmco Data Systems to then break into JPMorgan Chase. And JPMorgan Chase was spending $250 million annually on security at their bank.
Starting point is 00:10:37 And the attackers were able to get out with 70 million names and email addresses of JPMorgan Chase customers, and then could target those consumers with spear phishing attacks. But it was another example where there was a third party that was leveraged in order to conduct the attack. And so third parties have become more and more of an issue. And then finally, there's all kinds of other inadvertent employee mistakes that can be made. If we look at cloud services these days, if you just have one misconfiguration of your
Starting point is 00:11:10 Amazon S3 buckets and you misconfigure the bucket with important data in it, sensitive data in it, to be public instead of private, well, that counts as a breach. So the root causes that I was concerned about did indeed grow over time, and you can kind of see the progression over the years. But what has been fairly stable over the past seven years is pretty much the overwhelming majority of breaches have been due to these six root causes. And they have gotten far more sophisticated. Just thinking about phishing alone, I remember when you would look at a phishing attempt and somebody would post a screen grab or whatever it was of somebody trying to phish them back in the day, and it was laughable how pathetic the attempt was to fool
Starting point is 00:11:54 somebody. It would only work on, let's just say, the most vulnerable of internet users. And the phishing attempts today are so pointed, so well done. I just got one the other day, which was acting as if Google had contacted me about something with my account. And it was actually, of course, a spoofed Google email address, but it linked back to a Google form. So it was a Google domain, right? A Google doc that has a form. And I got there as I just followed it, you know, with curl and stuff, just to see where it was going to take me, because I could tell, but I still wasn't 100% sure. I was like,
Starting point is 00:12:29 could this possibly be Google? I don't think so, but maybe. I'm going to follow this trail. It took me a while. I had to go three or four curls following redirects to find out, nope, it was definitely just a phishing attempt. The sophistication, I think, of the bad actors, maybe because there's so much more to gain, has really ramped up over the years.
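Tracing where a suspicious link really lands, the way Jerod describes doing with repeated curls, is easy to script. This is a minimal sketch; the fetcher is injected (and simulated here with made-up URLs) so the example never touches the network, but in practice you'd back it with something like urllib.request with automatic redirects disabled:

```python
def trace_redirects(url, fetch, max_hops=10):
    """Follow Location headers one hop at a time, returning the whole
    chain so you can see where a link *really* lands before visiting it.
    `fetch` takes a URL and returns (status_code, location_or_None)."""
    chain = [url]
    for _ in range(max_hops):
        status, location = fetch(url)
        if status not in (301, 302, 303, 307, 308) or location is None:
            break  # not a redirect: we've reached the final destination
        url = location
        chain.append(url)
    return chain

# Simulated hops standing in for a real fetcher; all URLs are hypothetical.
hops = {
    "https://forms.example/phish": (302, "https://redirect.example/a"),
    "https://redirect.example/a": (302, "https://evil.example/login"),
    "https://evil.example/login": (200, None),
}

def fake_fetch(url):
    return hops[url]

print(trace_redirects("https://forms.example/phish", fake_fetch))
# -> ['https://forms.example/phish', 'https://redirect.example/a', 'https://evil.example/login']
```

The `max_hops` cap matters in practice, since phishing kits sometimes chain many redirectors precisely to frustrate this kind of inspection.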
Starting point is 00:12:51 That is exactly right. In fact, when I think about, you know, sophisticated phishing attacks, yeah, in my book on big breaches, I talk about a lot of big breaches, but one that comes to mind is back in 2016. Probably one of the most interesting phishing attacks was conducted against John Podesta, who was Hillary Clinton's campaign chairman. And what happened there at that time is that the Democratic National Committee was under attack by the Russians. And so John Podesta got an email. They were using Google Apps. He gets an email, and it was perfectly crafted; it looked exactly like the Google Apps password reset email. Basically, it told him, hey, look, we think somebody is trying to attack you. You might want to change your password. And so John Podesta or one of his staff members gets the email and does the right thing. He or she doesn't just kind of click on the phishing link in the email at the time, but rather forwards it to the IT department at the Democratic National Committee and asks them, hey, is this legitimate?
Starting point is 00:14:04 Should I reset my password? So the IT department responds and says, yes, we are under attack. Please reset your password. Except what John Podesta or the relevant staff member did was they didn't go to the link that the IT staff member told them to go to, you know, google.com slash security or whatever it was. Instead, he went back to the original email that they got from the attacker and clicked on the link there. And the attackers were then able to log in to John Podesta's Google Apps email account and make off with 60,000 emails that they then released. Pretty interesting phishing attack.
Starting point is 00:14:49 And, you know, today, as you can imagine, the Democratic National Committee uses two-factor authentication so that simply stealing the password is not good enough to steal emails in droves. We actually linked out to something recently that says that's not how 2FA works. And they were reporting essentially that 2FA was a security measure when really it is a security measure, but not in the way they were saying it was. Basically, the 2FA is meant to prevent attackers from masquerading as you, not to prevent fake sites masquerading as real sites.
Starting point is 00:15:22 It's sort of a backwards thing, but 2FA enables you to be you rather than somebody else being you because it requires multiple factors. I'm speaking to a security expert here, of course, but it disables the ability for someone else to be you if they have multiple factors that say, this is you, this is who you are, because these devices have consensus
Starting point is 00:15:43 That's right. And there are actually many different flavors of two-factor authentication, some better than others. So when you log into a website with two-factor authentication, you have to present your username and your password, but you'll also have a, say, two-factor code sent to your mobile phone, and you have to enter the four-digit, six-digit, eight-digit code, whatever it is. But there's still many ways for the attackers to beat that. And so, for instance, if an online site is relying on SMS in order to send two-factor codes, one of the things that the attackers can do is what's called a SIM swapping attack, where what they'll do is they'll call up your cell phone provider,
Starting point is 00:16:29 wireless carrier, and they will pose as you, and they'll use publicly searchable information about you, your name, your address, your phone number, how many pets you have, how many kids you have, whatever, right? Whatever they can gather from Facebook. And if they can figure out what the verbal passcode is that you use for your account with your wireless carrier, they can convince your wireless carrier to switch your phone number to use a SIM card and a phone that the attacker owns instead of your actual phone.
Starting point is 00:17:15 that are sent to you when you try to log into your bank or whatever. So there are many ways of doing two-factor. One way is SMS, but it does have that vulnerability that makes you susceptible to SIM swapping. There are additional ways to do two-factor authentication where you use an app like Google Authenticator or Duo or whatnot, where you get a six-digit code that's generated by an app on your phone. And, you know, even if attackers can steal your SMSs, they won't be able to
Starting point is 00:17:53 get visibility into what the code is there. But it ends up being, you know, more secure against that channel of attack. On the other hand, though, there's this concept of what is a completely non-phishable defense. To an extent, whenever you go to a website to log in, if you have to enter your username and your password and, say, a two-factor code, you can imagine that any attacker can start up an imposter site that will ask you for the same three things: the username, the password, and the two-factor code, irrespective of how that two-factor code got to you or was generated. So one of the challenges with these authenticator apps
Starting point is 00:18:31 is that attackers can still set up imposter phishing sites. But there is a way, there is an even better two-factor, which I could tell you about. Does that make sense? Oh, yeah. Yes. So the apps like Google Authenticator, Authy, etc., they are on a
Starting point is 00:18:45 rotating key, right? Like, that code rotates every N seconds. I've never implemented it as a developer, so I'm not sure how the server side syncs up with that code. And, setting that aside, if somebody were to phish you and get you to enter all three things, they would have N seconds to then go and sign into your actual email or whatever it is with that code before it expires. I think that N is like 60 seconds. I'm not sure what it is, but yeah, absolutely. Please tell us the best way, because I'll just do that.
Starting point is 00:19:18 Yeah, that's right. And by the way, 60 seconds is a lot of time for an attacker site to, you know, receive the two-factor code and then do an automated authentication into your account, right? And then transfer money out or whatever it is. So 60 seconds is a lot of time. And by the way, for those folks that are software developers in the audience here as listeners, there's actually, you know, two different standards, two internet RFCs, that are used to design that kind of authentication. There's what's called HOTP and TOTP.
Starting point is 00:19:50 HOTP uses an HMAC, a hashed message authentication code, over a moving counter to generate the two-factor codes. And then TOTP uses time, so it doesn't just rely on a seed; there is a synchronized clock in between the authenticator and the website. So there are two ways to do that. But at the end of the day, it is possible to get phished by having any form that accepts a username, password, and a two-factor code. So the most secure way to do two-factor authentication is to use what's called
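For the developers Neil addresses here, the two RFCs are RFC 4226 (HOTP) and RFC 6238 (TOTP), and both fit in a few lines of standard-library Python. A minimal sketch (real implementations also handle clock skew by accepting a window of adjacent codes, which this omits):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over an 8-byte big-endian counter,
    dynamically truncated to a short decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks a 4-byte slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238: TOTP is just HOTP where the counter is the number of
    `step`-second intervals since the Unix epoch -- that's why the code
    rotates, and why server and authenticator need roughly synced clocks."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# Both sides share the seed; at login the server recomputes the code for
# the current time window and compares. This seed is the RFC test secret.
secret = b"12345678901234567890"
print(totp(secret, for_time=59))   # -> 287082 (matches the RFC 6238 test vector)
```

The N-second window Jerod mentions is the `step` parameter here, 30 seconds by default, which is exactly why a phished code is still valuable to an attacker who replays it quickly enough.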
Starting point is 00:20:27 a security key. A security key is a piece of tamper-resistant hardware which you have to either plug in to your laptop or mobile phone, or many mobile phones have, you know, secure enclaves and whatnot on them that can be used to generate the appropriate two-factor authentication information. What makes it different is the ability for your security key, whether it be something like a Yubico YubiKey or whether it be your mobile phone, to generate the two-factor code, but not in a form field that you have to manually enter. And that is a non-phishable form of defense against phishing. And if you look at that particular set of security key technology, when Google deployed that,
Starting point is 00:21:34 I believe in 2017 or 2018, I can't remember exactly which it was, but they deployed it for tens of thousands of employees. And when they looked at it the next year, there were absolutely no phishing attacks. And both Google and Salesforce have used security keys to eliminate phishing as a root cause of any potential breach against themselves. And I really hope that more organizations, you know, learn from that experience and leverage security key technology to eliminate phishing instead of having it continue to be a major root cause of breaches. This episode is brought to you by our friends at Retool. Retool helps you build internal tools fast and easy. From startups to Fortune 500s, the world's best teams use Retool
Starting point is 00:22:28 to power their internal apps. Assemble your app in just a few minutes by dragging and dropping from pre-built components. Connect to most databases or anything with a REST, GraphQL, or gRPC API. Retool empowers you to work with all your data sources seamlessly in one single app. Retool is highly hackable,
Starting point is 00:22:44 so you're never limited by what's available out of the box. If you can write it in JavaScript and an API, you can build it in Retool. You can use their cloud service or host it on-prem for yourself. Learn more and try it free at retool.com slash changelog. Again, retool.com slash changelog. So now there's several breaches we could talk about, big ones, small ones, a couple of recent, but there's some ways in. What are the common ways in and what are some of the most recent breaches? I know Capital One happened recently, Equifax and SolarWinds is ongoing, but where do you begin to sort of break down the vectors into these breaches in particular? Sure. So the six major
Starting point is 00:23:35 technical root causes of attacks and breaches are phishing, malware, software vulnerabilities, unencrypted data, third-party compromise or abuse, and inadvertent employee mistakes. And if we talk about, for instance, the SolarWinds hack, and I'll mention that for those of you that have heard of the SolarWinds hack that occurred in, or rather that was announced in December 2020, you may have heard that it's being compared to a digital Pearl Harbor, but I would say that there's some major differences about the SolarWinds hack from Pearl Harbor. So first of all, Pearl Harbor was a complete surprise when it happened. And if we look at the SolarWinds hack, the way that attackers broke into many government organizations was using SolarWinds and their software as a third party. If we look at third party compromises,
Starting point is 00:24:47 there have been third party compromises going back to 2013 and 2014. Like I happened to mention, the Target breach was initially caused by a third party. The JPMorgan Chase breach was initially caused by a third party. Facebook has had a number of hacks and breaches over time, in part due to third parties like Cambridge Analytica. So third party compromise and abuse is nothing new. In addition, if we look at attacks against the government, if we go back to the Office of Personnel Management breach,
Starting point is 00:25:26 in which 20 million government employees' identities were stolen, the government getting targeted and hacked by foreign adversaries is also nothing new. And then thirdly, if we look at hacks that have been attributed to the Russian government or Russian organizations, there was one where four Russian hackers were responsible, two of whom were ex-FSB agents. FSB is the new KGB. So if we look at those aspects of the hack, there have been components of that taking place for years. And I think if there is anything that is new and novel, it is the scale of the attack: one third party was leveraged to hack multiple government organizations. Whereas in the past, there's typically been one third party used to hack some major target, not multiple major targets. So I'd say that the SolarWinds hack is not a digital Pearl Harbor
Starting point is 00:27:07 because it shouldn't have come as a complete surprise. I think the other aspect of the SolarWinds hack that's interesting is that beyond it having all the previous components, the SolarWinds hack, the carnage of it, right, or the after effects, you know, comparing it to Pearl Harbor, when Pearl Harbor got attacked, all the carnage was immediately observable. And if we think about the SolarWinds hack, I think that the impact of it is going to be understood over time, months or years, not immediately the day after. So those are just some of my thoughts on the
Starting point is 00:27:55 SolarWinds hack. And I think the other thing to keep in mind, I'd say if there's a third thing to talk about with regards to SolarWinds, it's that based on new information that has come out, 30% of those organizations that were impacted were impacted by channels other than SolarWinds. And it just happened to be the case that we are discovering the hack and attack in a particular order. The order in which the foreign nation-state adversaries conducted the attack may have been wildly different. So the SolarWinds hack is certainly interesting, but the components of it are not new. The scale has been larger.
Starting point is 00:28:40 We'll learn more over time, and we'll also learn how much SolarWinds was or was not at the heart of it over time, once everything gets pieced together. It must be difficult to go back forensically and uncover the truth. I mean, surely there'll be things that we will never know for sure, order of events, you know, how things went down. But I guess with digital, in the digital world, we can at least timestamp and get that kind of chain of custody stuff a little bit better than they used to. But just the work of going back and forensically discovering what all went down and by whom and et cetera
Starting point is 00:29:19 has to be deep and tedious and probably rewarding work, if you can dig anything out of that history. Yeah, the forensics involved in understanding how breaches have occurred and attributing them to particular attackers is indeed very interesting, painstaking work. And if I think back to, you know, a bunch of the breaches that we discussed in the Big Breaches book, there are certainly some attacks, for instance the attack against Yahoo and the attacks against OPM, where there wasn't enough forensic information to piece together how the attackers even got in. It's suspected that phishing and malware were two of the key vectors,
Starting point is 00:30:07 two of the key root causes, but it's unclear. Now, there's other breaches. For instance, we looked at the Capital One breach, in which a single former Amazon employee was able to leverage a server-side request forgery vulnerability and a firewall misconfiguration. The investigation there was very speedy and happened because of the fact that Erratic,
Starting point is 00:30:39 which was the handle of the attacker that got in, she left her resume in the same GitLab repository where she archived the 100 million credit applications that she stole out of the Amazon S3 buckets. And so obviously, with her resume there, investigators were very easily able to follow up and make the attribution. You know, I think whether it's a cyber criminal or whether it's foreign nation adversaries, you know, the authorities are always looking for the breadcrumbs, and they're always looking for
Starting point is 00:31:20 the mistake that the attacker makes, because nobody's perfect. And so any attacker, you know, if you study them long enough, you study their trail long enough, you'll find something. But sometimes they just make a mistake. Yeah, that's a big mistake. The whole time we're having this conversation, I'm a gigantic fan of Mr. Robot and Elliot. And so I just think about how Elliot would act in terms of a hack or a rootkit or a malware attack or a 2FA spoof or all these different things that he did during the show. And I just think about it like that.
Starting point is 00:31:52 Like he's the kind of person in that show, at least, where he didn't make mistakes, or not many mistakes. But that is the truth, though. You can follow somebody long enough and you see, because they got limited time. They got limited time to do a breach, or to steal that code, or to, you know, 2FA-spoof that person, or whatever it might be, you know, and they're going to slip up somehow, some way. I'm curious, though, about that resume, if it just wasn't good enough, like it wasn't really her. Because forensically, as a hacker, you can fake mistakes too, and you can frame somebody. I'm not saying she was framed. I'm saying that just seems so obvious, like, my resume is chilling in this GitLab repository. It just seems
Starting point is 00:32:30 too pointed. I don't know. It seems like, yeah, like she couldn't have been that dumb, right? You know? So I'll tell you about a couple things. First, as we're having this discussion, I'm reminded of a story that someone once told me about an interview with an FBI agent. And the FBI agent says, you know, to catch most criminals, we wait for them to make the mistake, and then we catch them. And so the interviewer asks, well, what about the criminal that doesn't make a mistake? And the FBI agent pretty much says, oh, well, we don't catch them. So that's one story that comes to mind. But going back to this particular Capital One breach and Erratic, you know, I agree that if it was just the resume being in the GitLab repository,
Starting point is 00:33:19 you could look at that as somebody might have tried to frame them. Of course, in this particular case, she was tweeting about the attack publicly on Twitter as she was doing the attack. And there was some concern also about just the mental stability of the attacker in this case. But it appears that she created enough evidence and even posted things on Twitter saying, it's the equivalent of I've strapped a bomb to my chest or something like this.
Starting point is 00:33:51 What was her MO? What was her motivation? Why was she doing it? I do not know. Okay. I think she was probably technically capable of doing it. She might have been mentally unstable, might have wanted attention. This seemed like a great way to get it. I don't know.
Starting point is 00:34:08 I wouldn't want to speculate. Well, a lot of people write masterminds, you know, in fiction at least, who like to explain why they're doing what they're doing, especially if the goal is attention a lot of times. The monologue is famous. I wonder if she came out and gave
Starting point is 00:34:24 her monologue. Yeah, because I didn't follow the, I didn't know this was a single attacker who was also tweeting and leaving a paper trail on GitLab. So I was just wondering if maybe she published her motivations. Well, I haven't heard them yet as of, you know, us writing the big breaches book and the chapter on the Capital One breach, which I thought was, you know, pretty technically interesting as well. I don't believe the monologue appeared, but who knows? Maybe if it does show up, then we'll have to post something on the book's website.
Starting point is 00:34:51 Yeah, second edition. So Capital One, this was an ex-employee who had, did she have insider information about this particular vulnerability, or did she just find it by, you said there was a misconfigured firewall, and then there was also a server-side vulnerability that she was taking advantage of. Was this a case where she had knowledge of that system and so it gave her the advantage or was she just out there fuzzing it and seeing what she could find? So I think in this particular case, she was an ex-Amazon employee and she probably had technical skills and knowledge based on my research and
Starting point is 00:35:26 study of the attack. I don't believe she had any insider knowledge about Capital One. I think it was revealed that she was probing not only Capital One, but a bunch of other companies as well. And simply knew enough how cloud systems work. And what she had identified is that Capital One had an Amazon EC2 instance, pretty much a virtual machine, that was running an application that had a server-side request forgery vulnerability. And basically what that means is that she was able to send requests to that EC2 instance, and the EC2 instance would query Amazon Amazon metadata service is required so that things that are running on Amazon and EC2 instances can even ask things like, well, what's my IP address? You know, it's running on cloud. It shifts from machine to machine. You need to know things like your IP address. And the intention is that, well, only the EC2 instances should be able to query the metadata service because they can also ask for things like security credentials.
Starting point is 00:36:53 And what happened in this case is that because Erratic identified that the EC2 instance had this server-side request forgery vulnerability, she was able to ask questions like, hey, could you please give me the security credentials for a whole bunch of Amazon buckets that are part of Capital One's deployment? And the EC2 instance would query the metadata service. The metadata service would say, sure, I'm happy to give you the security credentials. It would give the security credentials to the EC2 instance, to Capital One's legitimate EC2 instance. But the problem is that it would relay the information back to the attacker, any random person on the
Starting point is 00:37:34 internet. And once Erratic was able to get those security credentials, she pretty much cached those in her local AWS command-line client. And once she had those credentials, any queries that she made to the Capital One Amazon S3 buckets, there was no way for S3 to tell the difference between the attacker versus a legitimate program that was trying to access the 100 million credit applications. So once she got the credentials, she asked the Amazon S3 service for Capital One for all that data, and it happily handed it back to her. And that's how that attack happened.
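As an aside, the standard mitigation for the server-side request forgery pattern Neil just walked through (beyond AWS's IMDSv2, which requires a session token before the metadata service will answer) is to refuse to fetch link-local, loopback, or private addresses on a user's behalf. This is a minimal, hypothetical sketch, not Capital One's actual code:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_fetch_target(url):
    """Return True only if the URL's host is a public IP literal.

    A sketch of one SSRF mitigation: the cloud metadata endpoint
    (169.254.169.254) lives in the link-local range, and loopback or
    private ranges are off-limits for the same reason. Real code must
    also resolve hostnames and re-check every resolved address, or
    the check is trivially bypassed via DNS.
    """
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than an IP literal; a full implementation
        # would resolve it and validate each resulting address.
        return False
    return not (addr.is_link_local or addr.is_loopback
                or addr.is_private or addr.is_reserved)

print(is_safe_fetch_target("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_fetch_target("http://93.184.216.34/"))                     # True
```

The point of the sketch is that the vulnerable application was willing to relay requests to any address, including the metadata service; a deny-by-default gate like this, plus IMDSv2, closes that relay.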
Starting point is 00:38:17 Fascinating stuff. How about the Equifax breach? Because that was also credit records. I think that was more, like 150 million, something like that. And I had the pleasure of being a part of that one. I was one of the millions who got their stuff leaked. So yeah, I'm happy to talk about the Equifax breach. The Equifax breach, if you have read about this particular breach in the media, what one associates and attributes to it is the Apache Struts vulnerability that was used to initially get into Equifax. So basically, in March of the year that the breach occurred, there was an Apache Struts vulnerability. It was
Starting point is 00:39:01 a high severity vulnerability, which allowed, you know, any attacker on the internet to request that the server run commands of the attacker's choice, and it would happily do that remote code execution vulnerability. And there was a patch available very, very quickly. But you know, there's a lot of interesting technical details. And there was a lot more to the breach than just how they got in. So what had happened is within a fairly short period of the vulnerability, the Apache Struts vulnerability being announced, it was quite observable that the vulnerable server at Equifax was getting queries from Chinese attributed IP, probing as to whether or not it's vulnerable.
Starting point is 00:39:47 And the particular probes would do things like inject an HTTP header that had a command to run instead of having typical header information. And these probes would do things like change the current directory to a shared memory device, drop a file into shared memory so that it wouldn't touch disk and couldn't get picked up by antivirus scanners, and then change the permissions of that file in shared memory to be executable, and then run the thing. So those probes occurred. You know, I think at around the same time, the Equifax vulnerability management team had basically scanned Equifax's servers to see, okay, what servers do we have
Starting point is 00:40:34 that are vulnerable to this thing? But the problem was they were using a McAfee vulnerability scanner that was end of life and was not being as actively maintained. And that scanner was also only scanning the root directories of the servers. It was not scanning subdirectories. And the particular server that was vulnerable at Equifax, the vulnerability was present in a subdirectory. So the vulnerability scanning team at Equifax, while they sent out the notes to say, you know, please patch our vulnerable patching strut servers, the scans came back negative saying there's no vulnerabilities. So their team might
Starting point is 00:41:16 have thought, oh, this must all be patched. In reality, the scanner was having a false negative. So what happened a couple months after that is that additional Chinese attributed requests hit the still vulnerable Apache server. And, you know, basically establish a footprint, they, you know, got some files. And that formed their beachhead for their attack. They got in that one machine, they started scanning, they identified that there were, I don't know, 60 other machines or databases that they could query. But they didn't have credentials for those databases. So what happened is the attackers found a file, a configuration file that had unencrypted credentials for the databases,
Starting point is 00:42:06 and that was one way that they got information from the databases. Another thing that the attackers did is they took advantage of a SQL injection vulnerability. So while everybody knows of the Apache Struts vulnerability with associated with Equifax, what fewer people know is that there was a SQL injection vulnerability in one of the databases, and the attackers used one of their web shells that they planted to exploit that SQL injection vulnerability and steal data out of one of the other databases. So there were many things that had to go wrong. It was not just the Apache struts-party issues. It seems like that's where the trust is, where it breaks down. You've got this trust between the primary party and some sort of third party and some sort of silly mistake, but it seems like the third party, I suppose, vulnerability is just the entry point, not the problem. It's part of the problem, but they find
Starting point is 00:43:02 some sort of vulnerability there, and then they have experience and know how to masquerade into query databases or find files or set something into RAM instead of on disk. You know, like a lot of inner workings of how security measures are watched, monitored and whatnot. Not just simply, oh, I, you know, hacked a open source dependency and boom, I'm in.
Starting point is 00:43:25 It's much more than that. Yes, that's right. So I would say that third-party compromises, third-party abuse is a very significant point of entry. And as we've talked about, Target, JP Morgan Chase, Equifax were third-party components and third-party companies that were leveraged as part of the attack. I'll mention, though, that it's not always third-party. So, for instance, if we look at Facebook, for instance, there was a breach that they suffered in 2018 where tens of millions of access tokens were stolen, access tokens which
Starting point is 00:44:00 would allow people to log in as various Facebook users. And in that case, there were three vulnerabilities that were used altogether, not all third party. The three vulnerabilities that came together was, one, there was a, so the feature that got abused at Facebook was the Facebook view as functionality, which allows you to view your Facebook profile as a member of the public. And, you know, in the first vulnerability, the view as feature allowed somebody to incorrectly post a video. The second vulnerability was one in which the video uploader incorrectly generated an access token that had the permissions of the Facebook mobile app. And then the third vulnerability was that, you know, the access token was generated not for the user as a viewer, but for the user, you know, whom was being looked
Starting point is 00:45:00 up. And so all those three things came together in a much more sophisticated attack in which three vulnerabilities had to be leveraged together, not all of which were third party. I believe there were first party code there. So, you know, I'd say that both first party and third party vulnerabilities are important and significant when it comes to breaches. And by the way, let me mention that Facebook, you know, did a very nice and thorough investigation in that 2018 breach. It was great to see the transparency that Facebook had when they investigated that. You know, I honestly think to just put a little bit of a view on this, perspective on this,
Starting point is 00:45:41 I think, you know, looking at Facebook 2016, 2017, and before, there were certainly a bunch of abuses of the platform that were taking place, where attackers were able to use the fact, you know, APIs at Facebook could just be queried very easily. And once they shut all of those other paths down, like if I had to, if I had to guess, and this is just a guess, that this particular hacker that used these three vulnerabilities in the Facebook US profile bug, if they wanted to steal
Starting point is 00:46:10 information from Facebook profiles, they were forced to then do something much more sophisticated. But I really like the fact that Facebook was very transparent and posted some of the technical details of that in a blog post and so i'd say that's the authoritative information yeah and you're like yeah if i if i was even slightly
Starting point is 00:46:29 inaccurate my apologies for that but there's a great facebook post on it it's amazing how more sophisticated is a nation state or a a highly motivated actor in 2021 or even back when this one happened as compared to where we started this conversation with the Sammy MySpace hack of 2005. You know, the era of worms and viruses that were either accidental or for fun and they got out of control or they were maybe malicious in some cases, but just sure growing up since then, you know, this three vulnerability combo to get into the Facebook thing. It's that's an amazingly impressive hack, isn't it? That is that is a much more sophisticated hack.
Starting point is 00:47:16 And I think that I mean, I was impressed with that Facebook's speed at which they were able to diagnose and debug and troubleshoot and identify that particular, you know, the root causes behind that. You know, I would say that, you know, thinking about all these breaches over the years, there's certainly been a bunch of breaches where, you know, as in the case of the semi-worm, one cross-site scripting vulnerability could have been leveraged to pretty much take down my space for hours. Or the one Apache Shorts vulnerability that led to the Equifax breach was an initial point to get in. We have seen the attacks become more sophisticated as per the Facebook example that we talked
Starting point is 00:47:58 about. But if I think about the Capital One breach, a server-side request forgery vulnerability, and, you know, firewall misconfiguration, like that was, you know, perhaps, you know, not as sophisticated and done by one person. So there's a saying in the security community that attacks only get better. And I'd say, you know, the simple attacks and what people can do with one vulnerability, like those issues still exist. But now on top of that, we have to deal with more sophisticated attackers at the same time. This episode of The Change Log is brought to you by Render. Render is a unified platform to build and run all your apps and websites with free SSL, a global CDN, private networks, and auto-deploys from Git,
Starting point is 00:49:05 they handle everything from simple static sites to complex applications with dozens of microservices. If you're a developer or a founder that's frustrated with AWS's complexity or Heroku's high costs, you owe it to yourself to use the $100 in free credits they're giving our listeners to give Render a try. Render is built for modern applications and offers everything you need out of the box. One-click scaling, zero downtime deploys, built-in SSL, private networking, managed databases, secrets and configuration management, persistent block storage, and infrastructure as code.
Starting point is 00:49:39 Heroku customers running production and staging workloads typically see cost reductions of over 50% after switching to Render. Here's the best part. We work closely with the team at Render to ensure you have zero risk. By giving you $100 in free credits, plus they're going to assign a world-class engineer to your account to offer guidance and answer any questions you have. When you're ready to transition your infrastructure, they'll be there to help you with that too. Automate your cloud hosting with Render at render.com slash changelog.
Starting point is 00:50:07 Get $100 in free credits to try the Render platform, plus a world-class engineer assigned to your account to guide you along the way. Just send an email to our special email, changelog at render.com, to get access to those free credits. All that begins at render.com slash changelog. So Neil, you have painted a bleak picture of Swiss cheese out there with all these holes, and a world of just cyber criminals doing what they do and breaching all of our large and small organizations. Where do we go from here? What do we do about it? You have in this book a list of what you call the seven habits of highly effective security for organizations. So it's not all just storytelling. You have some prescription here as well. How can we route around or solve the problems
Starting point is 00:51:07 that we're seeing out there? Yes, yes, thank you for asking. So in writing the Big Breaches book, it's not all about how these breaches have happened, but if you look at it, the book focuses, half of the book focuses on the breaches. The other half of the book focuses on what do we do about it and how do we get to a better state of the book focuses on the breaches. The other half of the book focuses on what do we do about it? And how do we get to a better state of the world? And, you know, I think that, as you
Starting point is 00:51:32 mentioned, we have a chapter in the book, the second half of the book starts off with a chapter on what are the right habits. And so myself and my co author, Rudy Albaire, we're both fans of Stephen Covey, and his sevenits for Highly Effective People, which you can use for personal development. And so what we thought we'd do is come out with what are the seven habits of highly effective security for organizations. And so some of our habits are similar
Starting point is 00:52:03 and build on what Stephen Covey talked about. For instance, our first habit is to be proactive, prepared, and paranoid. And Stephen Covey and his work also focuses on being proactive. But we think that being prepared and being paranoid is the right way to code. One of our other habits is to make sure that you build and design security in. Security is a property, it's a characteristic similar to quality. You can't exactly build a product and then launch it and then try to make it a quality product afterwards. Quality is something that's got to be inherent and built in. And security is just a type of quality and needs to be built in from the beginning.
Starting point is 00:52:48 We also believe that in order to achieve security, one of our habits is that you've got to automate. I think if you try to rely on your users or developers or employees to try to get things right, and they have to manually take some right step every time, it's going to be very hard. So we believe in heavy automation. And so, you know, relying heavy automation and finding vulnerabilities is very important, which I can talk about in just a second. Another habit that we believe in is to measure, measure security, measure it both quantitatively and qualitatively.
Starting point is 00:53:21 And then finally, we also have a habit around continuous improvement. In Stephen Covey's book, he talks about sharpening the saw, make sure that you're always getting better and always sharpening that blade. In our corresponding book chapter, we talked about the importance of embracing continuous improvement and make things 1% better every day. And over time, that'll compound like you wouldn't believe. So those are some of the habits. But I'd be also happy to talk a little bit about, you know, in the second half of the book,
Starting point is 00:53:52 we give advice for how to go about addressing root cause of software vulnerabilities. Let's start with a couple of these habits, and then we'll go from there. So measure security. Can you just, for instance, what does that look like for a software team? So for a software team, one thing that you can measure is how many vulnerabilities are you say finding in your code with a scanner, whether it be a static analysis scanner, whether it be dynamic analysis scanner,
Starting point is 00:54:26 whether it be based on penetration testing that you do, whether it be based on bug bounty programs where you have external researchers trying to find vulnerabilities. I think that one thing that you can look at is what are the number of vulnerabilities that you're finding, say, using the automated means. And the way to look at that is that's the tip of the iceberg. You know, the scanners that we have are better than they've ever been before, but they're still not as sophisticated as, say, a cryptography expert reviewing the guts of your authentication code. And if the scanners are finding vulnerabilities
Starting point is 00:55:06 in your code, it means that you probably got a lot under the tip of the iceberg to worry about. You know, it's also like another example is that, you know, if you think about the scanners, a flashlight, you shine a flashlight in a room, you see a cockroach, chances are that there's a lot more cockroaches than just what you see with the flashlight. And so it's important, of course, to get to a point where, you know, you get where scanners are not identifying vulnerabilities in your code. But once that's done, you know, chances are there's still more security bugs in your code. And you've got to then start, you know, using white box pen testers, and or bug bounty programs and other things where you have sophisticated humans looking at the code
Starting point is 00:55:54 to find the additional vulnerabilities. So one thing about security is that you're never finished with it just because you're never finished with the software, right? I mean, any successful software company has more software coming down the pipeline every single day. Let's say we get past the shine the light in the room phase. And you don't make or you keep shining the light on a routine basis. And there's no cockroaches there. And you're in the phase where you're saying, well, we need a sophisticated third party auditors, pen testers that we're going to hire. What are best practices around that? These can be expensive things. The software is changing, right? They could finish their audit and then you introduce a vulnerability the next day that you don't know about. Is there like best practices around measurement? Like, well, you should have a third-party audit once a year or
Starting point is 00:56:38 six months, or you should have a part of your team that's like the security team that goes around the rest of the organization and test things. What are people doing out there that is working well? So I think that security audits are, you know, good activities. Do them, you know, once or twice a year. They'll test for basic hygiene. You know, that said, if you really want to have a handle on things, I think taking a continuous approach is indeed the way to go. Because like you said, you could have an audit, you could do a pen test, and then a new vulnerability can get introduced the next day.
Starting point is 00:57:13 And so there's a set of new tools that are available on the market where they don't take the approach of, say, doing security tests after specific parts of the development pipeline. Rather, you know, we're in a world where we want to have agile development. We want to be continuously releasing. We want to be continuously pushing code. And so, you know, the point-in-time test model of security is becoming an old model. And a much better model is to take the approach that you want to have continuous monitoring for the security of your code, and you want to have observability that provides you with kind of constant security monitoring. So for instance, DeepFactor is an example of an observability tool that will monitor your code for security vulnerabilities pre-production
Starting point is 00:58:08 so that as you're going through your development, as you're going through your test, as you're going through your staging, if you link in some new library and that library is old or unpatched, like it'll let you know right away. You don't have to wait to the point that you go get your software pen tested to find that out. You can identify that much earlier. And by the way, the cost to fix it is much, much less when you identify it earlier and right away rather than waiting for a penetration test. There's a divide between infosec people and developer people.
Starting point is 00:58:45 And I think that's part of the problem. And I understand there's only so many things that you could focus on as a human. And so there are people who are generally considered infosec. These are your penetration testers, your security researchers, your audit firms, cryptography people. They're kind of in this group. And then there's developer people who are focusing on like JavaScript and Node.js or they're writing Go code,
Starting point is 00:59:10 they're talking about new features and APIs and stuff like that. And there are those who float kind of back and forth. But when you talk about security first, building it right in, you know, starting with it similar way you start with software quality or application quality. A lot of times the people who are doing that coding just don't have that expertise.
Starting point is 00:59:37 They don't understand what is the best practice around how to do SQL queries in a way that's not injectable? Or whatever it happens to be. Whatever that particular attack surface is. How do we bridge that gap? How do we get these people to be the same people? Or at least sitting next to each other,
Starting point is 00:59:57 digitally speaking maybe. Because I do see kind of separate communities and sometimes they even look at each other in ways like with the side eye which is kind of strange but like there are those who float in between i feel like i've kind of done a little bit of that sit in the middle but i feel like if if we can get the software developers more equipped with the security knowledge either at the outset or ongoing and maybe get the infosec people more equipped with the ability to write some software,
Starting point is 01:00:26 not saying y'all can't write software, but you know, so we can have one big group. Is that something you think would be advantageous to the software community? Yeah. So I think you asked a great question and you started providing an answer in the right direction. Okay. Right. So I think the old traditional view is you have the development team
Starting point is 01:00:47 and you have the information security team and they're separate teams. And, you know, in that kind of old model, like the information security team can be perceived as like the department of no, which is not what you want, right? I think a more modern view, a better view, is that the information security team exists to serve enablement of the business. And the philosophical approach should be yes and how. Yes, we want to launch. How do we do that in a way that mitigates risk? Right? So I think that
Starting point is 01:01:23 the mentality should be yes and how. And within an information security team, you may have an application security team or a product security team. And I think that team should be staffed with people who used to be engineers and developers. I mean, I myself, I'm a software engineer. My background, my first job at Bell Communications Research years ago was as a software engineer. And, you know, in the Foundations of Security book that I authored together with Christoph Kern years ago, you know, the focus was, look, I'm somebody who's developed software for a living, but now I just want to make it secure.
Starting point is 01:02:01 And I think that's the right kind of team that you need. And the goal of that team, I believe, should be that the goal of the application security team should be to enable the developers to be able to write code securely and give them the tools and frameworks that they need to write code securely such that they can monitor it, but don't necessarily need to be a approval gate. Right. team can say, you know, bring observability tools like DeepFactor and, you know, get folks to use those tools, then developers can monitor the security of their own software themselves.
Starting point is 01:02:54 And perhaps all that stuff can be aggregated together so that, you know, a CISO can look at the full picture and try to understand, okay, what is the security posture of our code base? How likely vulnerable is it or not? What kind of additional tools should we invest in to further help the developers? Another model that I've seen work well is where if you have a large engineering organization and a relatively small application security team, what you can do is encourage one of the developers in each of the development teams to kind of be the local securities R, where they go through some training, maybe they know more about some of these tools, they may be able to exploit,
Starting point is 01:03:38 identify and exploit SQL injection or process suiting vulnerabilities of their own. And they kind of serve as the local security DNA in that dev team, but just coordinate with the more central application security team. So I think that we can set up models like that. It's a much more progressive, collaborative way in which to go about achieving secure or more secure software than would exist otherwise with a more traditional model. Yeah, I like that. I know. I mean, that's inside specific organizations, which is really where we mostly operate. I've seen also where we operate kind of on the online spaces out there in the communities. There's been efforts to bridge
Starting point is 01:04:22 these gaps. I appreciate people who are trying to do that work. We know that there was a similar dev and ops gap, you know, where the developers write the software and the operations people put it into production. And then there was, you know, DevOps, hey, let's get together and let's break down that barrier a little bit. And now we're seeing DevSecOps, which is a terrible term, but it's kind of like, right? Developers, security, and ops. Let's bring everybody together
Starting point is 01:04:49 and work together. I think, I don't like that particular DevSecOps term, because it's kind of strange, but I appreciate the movement there and the efforts being put in place to really break down that divide and build better products together.
Starting point is 01:05:06 I think the collaboration is key. The collaboration is key in order to result in more security and better resilience and more tolerance and all kinds of other good things that comes out of that collaboration. Well, something you mentioned earlier, Jared, I think to this divide between these two camps or maybe three camps based on DevSecOps, is this, you know, the collaboration happens when respect and empathy are in place.
Starting point is 01:05:32 So if as a developer I can empathize with my security counterparts or as a security counterpart I can empathize with my developer counterparts to have respect for one another's sort of surface area of concern, so to speak. If there is that, then collaboration can take place. But when you get the side eyes you'd mentioned, well, that shows a lack of respect and a lack of empathy. And what we need to work on is like those, I guess, fundamental human traits like empathy
Starting point is 01:06:00 and respect for one another's work to collaborate better. Yeah. It's a difficult situation because on the face of it, one person is writing the code and the other person is trying to break into that code. Just by what you're tasked to do, you're kind of set at odds with each other, aren't you? Because one person's exploit is somebody else's vulnerable software
Starting point is 01:06:25 in the case that we're talking about software vulnerabilities and not these other ways. We find out people get in many other ways anyways. So there's lots to think about. What about these other vectors, ways that we can fight things like unencrypted data, third-party compromise? That one seems so difficult
Starting point is 01:06:40 and something that is happening more and more now that we have all these mergers happening. I just can't imagine bringing in a third party through a merger or something. Now you have these two disparate companies and code bases and infrastructures, and now they're acting as one. I could just see how there's so many problems with that. Even when the third party becomes a subsidiary to the first party, these things are happening all the time in startup companies and enterprises all around the world. What are some ways that you can
Starting point is 01:07:08 combat or guard against third-party compromise? I'd be happy to comment on that. Before I do, I just wanted to comment. We talked about things like respect, empathy being important in between teams. I just wanted to chime in with one additional characteristic.
Starting point is 01:07:24 I think the characteristic of accountability is important. I think that if the application security team and the security people are accountable when there's a software vulnerability or a compromise, I think that's the wrong model. I think that developers should be held accountable for the security of their code, just like they're held accountable for the quality of their code. And if pretty much the application security team are there to support them and help them, then even though to an extent, it might seem like their fundamental goals might be at odds, if you set the accountability on the software developers, then it in fact merges them back together.
Starting point is 01:08:17 Because then in order for them to achieve the secure software, which they're accountable for, they'd love to get the help of the application security team. Right, because they're accountable for secure software, not for never putting a security flaw in there, right? Putting a security flaw in there is a byproduct of just making software. It's going to happen. Bugs, right?
Starting point is 01:08:37 They're going to happen. A flaw's going to happen. Something's going to happen. But the accountability isn't on being a human being who can write code and not create a security flaw. It's the accountability to a secure application that requires a team, not just an individual without flaws or problems. It's an individual that has counterparts that can help them through that and create secure software. Yes, yes, that's exactly right. And I think though, if you look at very large organizations like banks, for instance, where they're regulated by all kinds of regulators, I think there is another aspect of this where the security team usually becomes the one that has to report information into what eventually gets to the regulators. And so, you know, there has to be
Starting point is 01:09:27 some monitoring, there has to be some validation, because, you know, at the end of the day, everyone's butts are on the line. But I do think that where you set that accountability, and how the monitoring and validation is viewed and perceived is important. It's not that the security team wants to do that, because they want to be a pain in the neck. They want to keep the company out of regulatory trouble. So I think there's a lot of interesting aspects here. To go on to the second part of the question, with regards to some of the other root causes,
Starting point is 01:09:59 we chatted about various kinds of third parties and we chatted about malware. Let me just give one example that comes to mind. So in the book, we have a chapter on the Marriott breach, in which 383 million customer records were stolen. There were 5 million passport numbers that were stolen. The reason that occurred was because Marriott acquired Starwood. And the combination of the two basically was going to make the world's largest hotel chain. And it turned out that Starwood had been compromised by a piece of malware before the acquisition, and the compromise wasn't discovered until well after the acquisition. So you've got to keep in mind that any third party company that you're thinking about acquiring is going to become a first party. And if they're breached,
Starting point is 01:10:58 well, you're breached too. And in both Marriott and Starwood's case, there was a lot of susceptibility to malware. But I mentioned that because it's an example where there's many kinds of third parties. Third parties are not always just suppliers. Third parties can be entire companies. advice to deal with that. One of the things that I talk about in the book chapter in which I give guidance to technology and security executives, it is that when you're thinking about acquiring a company, you know, sometimes one might do a penetration test of the company that you're acquiring. And, you know, that might get done and it might tell you about, well, what's the potential susceptibility that that organization might get exploited and breached. But the other thing that I think is really important, and I've, you know, done this for some of the acquisitions that I've been involved in, where if there's enough, you know, if there's enough things that you're worried about, what you can do is don't do a penetration test. Do some active, proactive threat hunting where you don't just look for potential vulnerabilities.
Starting point is 01:12:17 You look for indicators that the company has actually already been breached or compromised or penetrated in some way. You're looking for different kinds of evidence. You're looking for, are there encrypted RAR files somewhere in the environment that might already have been aggregated by attackers and have all this stolen data in them? Are there binaries that have hashes that might be indicators of attack or indicators of compromise? Even if the company is not aware of a breach, or even if their penetration test findings look good, you've got to do that threat hunting as well. Perhaps if Marriott had done such an exercise on Starwood before the acquisition, they might have identified that there was already a breach that had taken place,
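The kind of hunt Neil describes can be sketched in a few lines. This is a minimal illustration, not a real forensics tool: it walks a directory tree, flags files bearing the RAR archive signature (possible staged exfiltration bundles), and compares file hashes against a hypothetical indicator-of-compromise list. A real hunt would pull hashes from a threat-intel feed and look at far more than the filesystem. The placeholder "known-bad" hash here is just the SHA-256 of an empty file, used so the sketch is testable.

```python
import hashlib
from pathlib import Path

# Hypothetical IOC list: SHA-256 hashes of known-bad binaries.
# (This value is the hash of an empty file, used only as a placeholder;
# real hunts would load hashes from a threat-intelligence feed.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

# Signature prefix shared by RAR4 and RAR5 archives.
RAR_MAGIC = b"Rar!\x1a\x07"


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def hunt(root: str) -> dict:
    """Walk `root`, collecting RAR archives and IOC hash matches."""
    findings = {"rar_archives": [], "ioc_hash_matches": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        with path.open("rb") as f:
            header = f.read(7)
        if header.startswith(RAR_MAGIC):
            findings["rar_archives"].append(str(path))
        if sha256_of(path) in KNOWN_BAD_HASHES:
            findings["ioc_hash_matches"].append(str(path))
    return findings
```

The point of the sketch is the mindset shift Neil describes: you're not scanning for weaknesses, you're scanning for evidence that an attacker has already been through.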
Starting point is 01:13:04 and that would have impacted the acquisition discussions. Maybe a good place to close would be this word you mentioned, accountability, and you mentioned it from a teammate aspect. But I would imagine that there's some accountability in terms of due diligence to say, in this case, Marriott and Starwood. Marriott being accountable to do necessary due diligence to confirm Starwood's potential threat vector, etc., whether they've actually already been compromised. But accountability to companies that sort of just do business and sort of like don't pay attention to or don't do enough on the security aspect. And Jared's personal information or my personal information, your personal information, Neil, is taken. So how do we – what's the accountability level?
Starting point is 01:13:53 I know we're sort of speaking just at security, but like what's the accountability to these companies to do security right and respect their customers and their information? Like is there jail time involved here? Is there any, are you familiar with the law system, the legal system around security and companies and vulnerabilities and exploits and stuff like this? What can you speak to in terms of accountability there? So first of all, I'd be happy to speak to it.
Starting point is 01:14:18 I'll, of course, caveat what I say here that I'm not a lawyer. As a former CISO, I've worked together with a lot of attorneys and general counsels at both companies at which I have been a CISO. And let me mention that I think that there have been strides in accountability over the years. So if we go back to the Target breach in 2013, after the Target breach, you know, it was not just the CIO that was fired, it was the CEO. And that was the first breach where that had occurred. So the accountability now goes up to
Starting point is 01:15:02 the top, you know, and in one of the book chapters, I encourage folks to have their CISO, their chief security officer, report to the CEO. Because at the end of the day, if something goes wrong and there's a breach, people assume the CISO should get fired. Well, that's not necessarily the case. The CEO can get fired too, if the breach is big and bad enough. And in fact, if we look at other breaches, if we look at the Equifax breach, there was a change in CEO and CISO there as well. So the accountability has gone all the way to the top. I think that board members need to be asking their CEOs the right questions. The CEOs need to be also asking the right questions if they don't have a CISO but have a CTO. Well, there better be somebody that's accountable for ensuring the security of products, as well as the IT organization. Security is not just an IT problem. And if some companies have a CISO that's still reporting to the CIO, well, the important thing to realize there is that security is much broader than an IT problem. So there has been an increase in accountability. CEOs have been fired because of breaches, and that accountability going all the way to the top is the right way to go. I mean, I think that, you know, for instance, in the Equifax breach, the CEO tried to pin it on somebody who was supposed to patch that Apache Struts server.
Starting point is 01:16:54 But I think that's heading in the wrong direction. Because if you're any reasonably sized organization, you've got thousands, tens of thousands, hundreds of thousands of servers. When you go to patch all your servers, inevitably, some are going to be down, some are going to have crashed, whatever. Everything's not going to get patched right the first time. And by the way, you should have automated patching for that number of machines. I think relying on humans to do patching is likely to fail. So you've got to also then understand that when you're
Starting point is 01:17:28 operating at any level of scale, you need to have automated technical verification that the patch got successfully deployed. And the verification should take place, for instance, before any ticket about the vulnerability is closed. So I think that in terms of human accountability, I would put the onus on the CIOs and the CISOs to say, look, you need a scalable, systematic, automated approach to things like patching, with technical verification, because expecting humans to get every single detail right is the wrong direction to go.
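The verification step Neil describes, confirming that a patch actually landed on every host before the vulnerability ticket closes, might be sketched like this. The host inventory and version strings are hypothetical stand-ins for whatever a real CMDB or endpoint agent would report:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.3.32' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


def can_close_ticket(fixed_version: str, host_reports: dict) -> tuple:
    """Gate for closing a vulnerability ticket.

    Returns (ok, unpatched_hosts): ok is True only if every host in
    the (hypothetical) inventory reports the fixed version or newer.
    """
    fixed = parse_version(fixed_version)
    unpatched = [
        host
        for host, installed in host_reports.items()
        if parse_version(installed) < fixed
    ]
    return (len(unpatched) == 0, unpatched)
```

For example, checking a fleet against the Struts release that fixed the Equifax vulnerability (2.3.32) would keep the ticket open as long as any host still reports 2.3.31 or older. The design choice mirrors Neil's point: closure is gated on machine-reported evidence, not on a human asserting the work was done.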
Starting point is 01:18:10 Definitely a troubling scenario given the breaches that have happened, the data breaches and whatnot. It's an interesting, I suppose, ever-changing world, cybersecurity, cybercrime. But Neil, thank you so much for writing the book for everyone. This isn't just simply for security researchers or security experts. It's for everyone. There's a bit for everyone in there. And we need people like you out there sharing this kind of message to more people to increase the ability for accountability to occur. So thank you, Neil, for coming on the show and sharing all you have. Yeah, well, thank you. Thank you for having me.
Starting point is 01:18:50 Both Moudy, my co-author, and myself, we had a great time writing Big Breaches: Cybersecurity Lessons for Everyone. My hope is that the book will be a good contribution to the field that will help bring more people into it. I think there's enough security books out there that are written for security people or for developers or, you know, for different discrete audiences. But I think we need to also bring boards, business executives, tons of folks into the fold. And so in the book, there is something indeed for each of those audiences. And my hope is that folks read the book, use the book within their organizations, and follow
Starting point is 01:19:36 up and act on some of the advice that we provide so that we can achieve a stronger cybersecurity posture across many organizations. I agree. Change begins with awareness, and awareness begins with books. So thank you. Thanks, Neil. This was awesome. You're welcome. That's it for this episode. Thanks for tuning in. Make sure you check out Neil's book, Big Breaches: Cybersecurity Lessons for Everyone. And as we demonstrated in this show, it is literally for everyone. So Neil is doing his best to share wisdom with the developers out there who are
Starting point is 01:20:09 security minded. If that's you, pick up the book and check it out. Let us know what you think. If you haven't heard yet, we have a membership. It's called Changelog++ because, hey, why not increment things? It is better, as they say. You can subscribe at changelog.com/plusplus, get closer to the metal, make the ads disappear, and of course, support all of our podcasts. Again, changelog.com/plusplus. And of course, huge thanks to our partners, Linode, Fastly, and LaunchDarkly. Also, thanks to Breakmaster Cylinder for making all of our awesome beats. And of course, thanks to you for listening. We'll see you next week.
