Software at Scale 24 - Devdatta Akhawe: Head of Security, Figma

Episode Date: June 17, 2021

Devdatta Akhawe is the Head of Security at Figma. Previously, he was Director of Security Engineering at Dropbox, where he led multiple teams on product security and abuse prevention.

In this episode, we discuss security for startups, as well as dive deep into some interesting new developments in the security realm like WebAuthn and BeyondCorp. We wrap things up with slightly philosophical points on the relationship between security and regulation.

Highlights

0:00 - What got Dev interested in computer security?
4:00 - Security for a startup. What framework should a CTO use to think about security as their startup gets its first customer?
7:30 - Trends in the security space. Increasing customer demand for security due to the multi-tenant nature of the cloud. Lateral movement attacks.
12:45 - BeyondCorp. "There's BeyondCorp, and YOLO NoCorp". NIST's paper on it.
25:00 - How should I think about a bug bounty program as a startup founder? Having a good "Vulnerability Disclosure Policy" is an extremely valuable first step.
26:30 - Why would anyone report bugs if they weren't being paid for them?
30:00 - Interesting security products that companies might want to buy :)
34:30 - What is WebAuthn?
39:00 - How security and usability shouldn't be a trade-off
43:00 - Security regulations
47:00 - A repeat question - as a startup, what should I do to keep myself secure?

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev

Transcript
Starting point is 00:00:00 Welcome to Software at Scale, a podcast where we discuss the technical stories behind large software applications. I'm your host, Utsav Shah, and thank you for listening. Welcome to another episode of the Software at Scale podcast. Joining me here today is Dev Akhawe, who is the head of security at Figma. Previously, he was the director of security engineering at Dropbox, where he managed several product security teams, as well as abuse prevention. And before that, he was doing his PhD at UC Berkeley. Thank you for being a guest today. Thank you for having me.
Starting point is 00:00:40 Super excited for this. Yeah. I want to start off by asking you a little bit of personal questions. What got you interested in security? You can do so many things as a computer science graduate or an engineer. And what got you interested in just working on security, researching in security? Yeah. Yeah. Funny thing you say, you can do so many things. So yeah, when you join a PhD program, they ask you, what do you want to work for in a way? And I think it's like, what are you interested in is the real question.
Starting point is 00:01:14 And to me, everything is interesting. Everything I can do with technology, with computers, all of it is interesting. And security is really the only field where you can do everything. I worked during my grad school years at UC Berkeley on systems, on programming languages, on formal methods, on measurement studies, on usability. And that's rare and unique to security and it's just so much fun. The intellectual joy and excitement of covering so many different things is really great. And so that sort of breadth of things I could learn and touch was really what attracted
Starting point is 00:01:49 me to security. Okay, interesting. And then as soon as you graduated, you moved to Dropbox, right? That's your first job. Is that accurate? Yeah, yeah. Okay. And can you tell me a little bit about your journey there, right?
Starting point is 00:02:03 What did you start off with doing? What are some interesting initiatives that you can talk about publicly? Yeah. Yeah, I joined as like an early engineer in the security team, Dropbox was relatively big, like midsize, I think 500 people or so when I joined, but, but you know, the security engineering team as a function was pretty small. And so I joined like, you know, a three or 4% team and, and yeah, like grew with the team as the team scaled. I think like, you know, as in a company in hyper growth, like Dropbox, a lot of it is just, you know, staying on top of the growth and like knowing what's happening and reviewing
Starting point is 00:02:41 things and making sure we stay secure. Dropbox was great in that sense because leadership really valued security. Thought was something really important. And so I think like, I feel pretty lucky. Like I learned how to do security in an environment where leadership really values security, feels it's important. And so I could focus on the like, how do I do security when a company is growing very fast?
Starting point is 00:03:03 How do I do security in a hyper growth environment? And I was surrounded by super smart people that I could learn a lot from. So yeah, that's kind of like the main thing in hyper growth is just keeping up and going where the ball is going. So that was the big one. But I think there were many other like technical directions and things we did. But those are details, right? And were you around or was it like before your time when there was like that security bug where nobody needed a password in order to log into any Dropbox account? Yeah, no, I was not around.
Starting point is 00:03:37 That was like, I think before even there was a security team at Dropbox, I think that was a long time ago, I think 2011 or something like that. Yeah, I don't think I was around. I was definitely not around. And yeah, I mean, that was some bug. It's a classic like at Dropbox, at all places, I always use that as a great example in trainings on the need for negative testing. That endpoint had a bunch of great tests
Starting point is 00:04:05 to make sure it worked as intended. It didn't have a test to make sure it did not work when it was not supposed to work. Yeah. And I want to pick on one point you spoke about earlier, like you experienced the company and when it was growing really quickly and a lot of security work was just keeping on top of things when the company is growing to make sure that things are not getting too bad and you can drive actual improvement. So I want to ask you about security for a startup. Let's say tomorrow that I want to start a company, I'm a CTO and I have two engineers and I have my first customer buying my product. So let's say I've reached that milestone.
Starting point is 00:04:46 I've built an MVP. Somebody has bought my product and they're using it. At what point should I start thinking like as the CTO or CEO about security? Like, is it too early for me at this stage? Should I start thinking about already? What should I do? What do you think?
Starting point is 00:05:03 In general, I think like let's separate out thinking about security as a thing to care about, about like, you know, quality and correctness. Like, like I think security is just one aspect of quality and correctness. And in the same way you would want your software to be correct and of high quality, you would want it to be secure. I've never met a CTO who was just like, oh, I don't care if unauthorized data access happens or something like that. And so I find that almost everyone cares about security in that sense, about doing the right
Starting point is 00:05:30 thing. And so I think that sort of correctness and availability, integrity of your data, confidentiality of the data, everyone should care about and usually does. Like that's never been controversial in my experience. And then there's separately, like when should we think about a security program? When should we hire security team members? When should we have someone who's focused on security? I think those depend a lot on who you are and what sort of product you are selling, which market you are in, what are your customers depending. And in a heavily regulated
Starting point is 00:06:02 environment, you might need a governance risk and compliance expert much more so than, let's say, a security engineering expert. In other products, maybe a social network, you might need someone who's much more comfortable with abuse prevention and sort of thinking about that problem. So really, I think it depends on sort of your product. And so I think the main advice I give startup founders is, if you care about security, that's great, but you really need to understand what are the problems
Starting point is 00:06:31 you would want to solve with a security team and why you need a security team. I think anyone who hires a security person or a security team as a magic box that, I'll hire a CISO and then all my security problems I'll stop thinking about, that and then all my security problems, I'll start thinking, stop thinking about that's not really an effective pattern. I think hiring a security team as a partner based on your business needs and defining what those goals are, and then telling that to the person you're hiring is probably the most effective way. And so there, it just becomes like, you know, when do you hire a marketing person?
Starting point is 00:07:02 When do you hire a comms person? When do you hire a marketing person? When do you hire a comms person? When do you hire a CRO? I think in that same sense, when do you start doing particular function like security is similar. I will say though that in the mix of regulations, the mix of the legal environment, the expectation of different companies around security means that you need to start thinking about it earlier than you would have thought. Like I can say, six years ago, a startup might not have a security team at even like, say, 200 people. But nowadays, I hear startups with like 40 people having a security team.
Starting point is 00:07:39 And if you're selling in the enterprise space, every company would expect you to have a security program and have security best practices. And so having a security program with a security team in place can be really effective in sort of achieving your business outcomes. And you've noticed that trend is that now companies are introducing security programs early and that's potentially due to regulation. Is there anything else you think that's contributing to that trend? Is there anything interesting that regular software engineers might just not know about that's happening here? Yeah. So I will say in addition to regulation, it's also just expectations of every company. An enterprise software company that let's say is offering a cloud SaaS-based service,
Starting point is 00:08:29 any other company that is buying their software expects a particular security posture. How common breaches have become, especially in cloud with like S3 buckets being open and stuff like that, has meant that customers are really asking to make sure that their data that they give to a SaaS software company remains secure. And so I think that's one reason that the cloud is super powerful to build software on, but it is easy to make mistakes and mistakenly open up things to the internet. I think that is a complete sea change from,
Starting point is 00:09:04 let's say, on-prem software, where even if your software is insecure, it's on-prem. It's behind multiple firewalls. It's only accessible to people who are on the VPN. So the expectations your customers might have around its security is very different versus software that is SaaS, cloud-based, accessed by your customer directly, the expectation and the need of security is much higher. Well, that's really interesting. And it sounds so simple, but I hadn't thought about it. The fact that more and more software is going into the cloud
Starting point is 00:09:38 necessarily means that the cloud is inherently multi-tenant. And that just means that your resources are more likely to just have issues with, you know, making sure that they're encrypted and making sure the right people have the right access to them. Before, like most applications, I guess, just used to be on-prem. So you wouldn't have to think about all that. It wasn't that your security is necessarily better. Interesting. better interesting and you're actually seeing customer demand in for more secure products after reading all of the security breaches and all of that okay yeah i mean uh every customer now demands it i've seen security questionnaires from companies as small as like 50 startups a lot
Starting point is 00:10:20 of them of course like you know if you're a bitcoin startup you know you're being targeted right so you're gonna have a pretty thorough questionnaires. And attackers have become smart, these like class of attacks that I would call lateral movement, where you get a foothold in one piece of software and use that to then move to worry about the posture of all of their vendors. And so that has also become pretty important. I think there was a breach just recently where Slack was used to... Someone's Slack account was compromised and then they messaged the local IT team on Slack to give them privileged access and then move from there. So no VPN involved at all.
Starting point is 00:11:08 And they got to full source code and all sorts of access. That is an example of a thing that was not possible on like 10 years ago. And so that's the class of attacks that companies are more and more worried about. Would you classify that as a supply chain attack? Are there these different categories of attacks that these can be classified into? How do you think about categorization and classifying and then ultimately the risk from each one? I personally would not classify this particular attack as a supply chain attack in the sense that Slack itself wasn't compromised. What happened was an employee's account on Slack was compromised.
Starting point is 00:11:59 And so I would classify them as similar to lateral movement in the cloud. I think one of the most fascinating evolutions that's happening right now is in a world of VPN, the attacks that companies worried about was that, okay, someone in, let's say, HR, their computer gets compromised, and then that computer is used to move to the VMware controller, and from there, everyone is compromised and stuff like that. That was all inside the VPN. And that was called lateral movement because you're moving laterally inside the trusted network. But today, that's what's happening, but in a way, it's in a pure cloud SaaS-based application. One account on Slack was compromised,
Starting point is 00:12:34 so the whole machine wasn't, but one account on Slack is compromised, and from there, they move to other things and other things available on Slack, and they ask IT for help, and they move to more privileged locations like access to source code and so on.
Starting point is 00:12:48 So I think of these as lateral movement, but in a sort of beyond corp, post-VPN world, which is very interesting and I see change in sort of the security needs and the sort of things that the security team needs to look for. Yeah. And yeah, this might not be completely on topic, but I'm just curious about what your thoughts are on BeyondCorp and the whole idea of, you know, there's a good network and there's everything else
Starting point is 00:13:14 and just getting rid of that notion. As a software engineer, like who doesn't know that much about security, it's a little worrying to me, even though it sounds great in theory, right? That everything's on the same network and you have ACLs for everything. What do you think? Like, is it just the future? And is it just much better than what it used to be? Because it doesn't have that facade of security anymore behind the trusted network.
Starting point is 00:13:38 And you have to assume everybody's unsafe? Or what are your thoughts? Like mixed opinions? Yeah, I mean, so it's tricky. I think the like, you know, it is true that, you know, let's just not have a VPN is scary. It's scary to everyone. There's a joke that a friend of mine, Alex, always says that there is BeyondCorp and there is YOLO NoCorp.
Starting point is 00:14:02 And so, yeah, YOLO NoCorp is definitely scary and it's not something I would recommend. So the core concept of BeyondCorp, which is that there is no boundary, there is no line you cross and suddenly you're fully trusted is something I'm a big fan of. I think the core idea of BeyondCorp
Starting point is 00:14:19 and a lot of the hype and marketing sort of hides that, unfortunately. But the sort of the core concept of BeyondCorp is everyone has to prove trust through different signals. And so depending on what you recently did might change the level of trust. It isn't the case that, okay, now you've authenticated. And so now you're on the ACL and you're in, there's no VPN. The concept of BeyondCorp goes that like, okay, you've authenticated in the last 24 hours. And so maybe you're allowed to do thing X, Y, Z, but if you want to do a more privileged
Starting point is 00:14:49 action foo, we require more signal. Maybe we require authentication with a second factor, or we require authentication again with a trusted device and MFA, or we require authentication by checking that the recent OS query check-in didn't have. So the concept of let's use many, many more signals and let's not make it a binary signal of on VPN versus not on VPN to make a decision based on trust.
Starting point is 00:15:12 And then based on those trust decisions, allow a set of actions versus not is something I'm a big fan of. And I think it's something that is actually pretty common if you look at day-to-day web applications, like if you use Amazon, PayPal, any of these, right? Like depending on which action you're taking and depending on how old your session is, you have to maybe re-authenticate or maybe they send you a text message on a phone.
Starting point is 00:15:34 And so each of those steps increases that trust and you're allowed to take different actions. So I think I'm a big fan of BeyondCorp. I think it's now best practice. The NIST recently came out with best practices and it's a fantastic document on how to do beyond corporate. So I'll say anyone who's interested in it should actually read what the beyond corp NIST document says and actually internalize that. I think that's the way networks should be done. And it applies everywhere. Like even in production, you can think of that sort of beyond corp framework that you don't say trust shouldn't be a binary that you cross this line and now you're trusted i think that concept is something
Starting point is 00:16:09 i'm a big fan of that makes sense i'm gonna definitely add the both those things the yolo no corp and the beyond corp this document to the show notes and i think i should read up more so that i can like get over this fear of no VPN. And that makes sense, right? Like the immediate example when you brought up Amazon was when you want to look at your previous orders, it often asks you to re-authenticate. But when you're searching for stuff, until you actually hit buy, there's no need for you to authenticate because that's something that anybody can do. Even though you've already logged in, it can ask you to re-authenticate, which is pretty
Starting point is 00:16:44 interesting. even though you've already logged in it can ask you to re-authenticate which is pretty interesting i think the other example that makes like is really fun with amazon is the you know you can place an order and buy anything even with less auth when you are shipping it to the same address that's already registered but if you change the address you have to re-enter your credit card and the cvv code and stuff like that because, like changing the address is a change in risk. And so now Amazon requires extra authentication, right? So there's just like, you know, Amazon is continuously optimizing this like two things,
Starting point is 00:17:14 which is how do we ensure growth and a great customer experience while also like reducing fraud. And so I think they do a great job. I mean, I like using Amazon. So yeah. Yeah. I've noticed with Door with door dash which is another service i use if you change the address you need to do to fa and i haven't and if you use the same address as before it's not a big deal so if you're sharing somebody's account it can be pretty painful if that other person's not awake and you're trying to deliver to a new address but i'm not going to say i did that so all of this makes sense but clearly there's been a lot of thought put into this right like
Starting point is 00:17:52 deciding which flows are should be should require more authentication should not require more more authentication it brings me to like a slightly broader question around just like security thinking about security and security best practices. The first one is really, you mentioned that if I'm a social company, I might have to think more about abuse prevention. Whereas if I'm an enterprise company, I might have to think much, much more about unauthorized data access, at least. example how transferable are is your knowledge of security and like security best practices and your actual security program from like company to company or to like vertical to vertical like um for most companies what the security best practices are kind of the same or are they pretty malleable and fluid and like which one should it be? What do you think? Well, I think skills like particular technical skills,
Starting point is 00:18:51 like do you know AWS? Do you know GCP details are not easily transferable, but learnable. So I think the thing that is not transferable, the thing that is critical is really the willingness to learn, the willingness to go into a problem that you're not already familiar with, learn it, and really take in this security mindset of testing your assumptions and double-checking your assumption, having multiple layers of defenses and stuff like that. I think those are the skills that are actually key. And those transfer across companies, across products, across features pretty well. I think the good news is that, you know, the concept of transferable skills is anyways a problem in our industry, right? Like, I don't
Starting point is 00:19:37 think it's just security, right? If you're an infra engineer, you know, maybe, like, how long has it been since Terraform and AWS became a standard? Like what, five years? But when we hire engineers at a company, we don't worry about whether or not they know Terraform that well. We make sure that they are sort of technical and able to think about first principles. And Terraform is a learnable skill. So I think security is very similar in that sense, that there are a bunch of first principles,
Starting point is 00:20:03 multiple layers of defenses, privilege separation, minimizing trusted computing base, and testing your assumptions, and really willingness to learn and willingness to be wrong. Because I think there is no other field that is as brutal as security where you can convince yourself you're right, and then it just takes one attack to prove you're wrong and so that humility and that willingness to be wrong and wrong and learn from it and change your plans is is really the other critical skill uh and and that's hard that's that's an emotional journey for a lot of people and and that's hard so you know if you're someone who like despite all evidence insists that your design is secure uh man that's going to be rough.
Starting point is 00:20:46 So yeah, I think those fundamental skills, the first principles, are pretty transferable across companies. You just need to be able to learn and reapply them in different contexts. The nice thing is that you have to do that anyhow. Even if you don't change companies, the technology stack for every company
Starting point is 00:21:01 is continuously evolving. The parts of AWS you used today are very know, the parts of AWS you used today are very different from the parts of AWS you used 10 years ago. So, you know, we're just in an industry where you have to just learn and improve continuously, which makes it exciting. Yeah. Yeah. And I think that makes sense.
Starting point is 00:21:18 But again, I want to poke you a little more. So there are some things which are like pretty standard, right? Like somebody tells you, you should have MFA enabled. And after the second or third time that, you know, you find out that your password was reused and hacked, you really start thinking about, you know, using a password manager and MFA. That's an easy thing. But, you know, as somebody who doesn't write that much front-end code, I always forget
Starting point is 00:21:39 concepts around XSS and all of those like terms, CSRF attacks and all that, you're not that familiar with it. And then you keep on, you don't understand the nuances and it's easy to make a mistake. When your company is small enough, you can have somebody who knows what they're doing to prevent those kinds of issues. Plus you can use frameworks to ensure that those bugs aren't easily called like you don't easily introduce such bugs but is there a point at which you start thinking more from like a platform perspective how do we prevent these kind of issues rather than so we can try teaching everybody that these are common problems but how do you solve it from the
Starting point is 00:22:20 other end like how do we prevent them through automated ways as much as possible? Yeah, absolutely. That's the skill. I would say that mindset first that I'm personally not a big fan of training as the answer. I think of training as, oh, I could not come up with a technical solution to prevent this, and so please help me. So like whenever I have a training slide for developers, I try to frame it as I don't have a better answer right now, or I haven't had the time to like fix this at a more fundamental level at the framework level. So please help me. But in general, I think the mindset of a security, a solid security engineering team is to like build frameworks and solve these at the platform level. I would say not just prevention, but how do you prevent, detect, or respond all three? Some bugs you can detect, but you don't need to prevent them outright, but maybe you can detect
Starting point is 00:23:14 when there's a bug like that. But XSS is a great example. I think in a modern... And again, I'm lucky enough to have worked in relatively modern, young codebases. But in a modern, young codebase, it's actually really easily possible to completely stop thinking about XSS. Because if you use React and Rails and you have a strong content security policy, it is a lot of work to have XSS. CSRF, if you use same-site cookies in strict mode and check for the right headers for requests, you can prevent CSRF pretty thoroughly in a modern app. And again, Rails and frameworks automatically check for those anyhow. So I think it's more about having that mindset that I want, going back to the title of the podcast, software at scale.
Starting point is 00:24:02 The only thing that works at scale is software that automatically protects itself rather than relying on trainings. One, because it's one hard to train. And second, even if you're trained, people forget we are writing, you know, software engineering is just such a hard job. You're balancing so many priorities.
Starting point is 00:24:20 You know, there's design, there's product needs, user feedback, accessibility, privacy regulations, needs, user feedback, accessibility, privacy regulations, performance, correctness, reliability, availability, durability. It's hard. A lot of words. So how can the security team make it so that the software is just secure? And the classic example of this is that
Starting point is 00:24:42 you don't think about memory safety because you're always writing in memory safe languages. Right. And that's the right way to solve it. Right. Like, and but if you talk about security, like if you went back 20 years ago today and like ask someone about security, they would say, yeah, I mean, you know, all about like memory safety and all the functions. Like, you know, the difference between strcpy, strncpy, strlcpy. Right. Like, you don't know any the difference between strcpy, strncpy, strlcpy, right?
Starting point is 00:25:07 Like, you don't know any of that because we just use safe languages. And so that can happen to other things, right? Like, how can we make languages safer? How can we make frameworks safer? And I think that's sort of the highest leverage thing a security team can do. Yeah, and just to add to that, the flip side of using open source stuff,
Starting point is 00:25:24 you don't even get features for free, but you also get security patches and all of these prevention at like, you know, the Railsies like I care a lot about my security at what point should I start thinking about you know can I start outsourcing this in a sense of can I start managing a program where I hear about bugs security bugs from the good guys rather than the bad guys like at what point should I start thinking about that what's my framework I should use to decide that I should introduce something like that? I think like personally, I think of like, let's separate out bug bounty in two steps. There is the having a vulnerability disclosure policy and having a bug bounty. The vulnerability disclosure policy for those who are not aware is basically saying, we will not like get angry at you and sue you if you, you know,
Starting point is 00:26:32 try to find bugs in our software, that if you come and find bugs in our software, security bugs, and like report them to us in good faith, we will not like sue you, we will thank you and we'll fix the bugs, right? I think that mindset is really important. And as soon as possible, as early as possible, every company should do it, right? Because I think of it again, like, you know, security has a lot of things shared with safety culture. But one of the things
Starting point is 00:26:53 that the safety culture and lessons from safety engineering have taught us from like in the last 50 years is that if you, the best way to have safety is to make it really easy to find and talk about safety issues. If you make it taboo to talk about and find safety issues, people will just stop. It was
Starting point is 00:27:12 actually much worse for safety. And so similarly, security is like that, where what's the problem with people reporting to you that you might have a security bug? Any company that refuses to do that is a big red flag for me. Now, separately, a bounty is one where you say, okay, now if you report a bug to us, we will pay out money. I think that is then encouraging and incentivizing people to start looking for bugs in your software. I think there the question is more about, can you handle that volume? Can you handle those reports? Can you give meaningful answers to questions? I think it is very easy to end up fixing a lot of bugs that don't make sense or don't produce realistic threats. The joke, I mean, if you search on Twitter for bug bounty, you'll see a lot of example bugs that are super lame. And if you don't know what you're doing, you'll
Starting point is 00:28:01 end up wasting time on fixing bugs that are not critical. And so I typically say have a bug bounty if you have a security engineer. Don't have a bug bounty if you don't have a security engineer. But that said, I think the earlier the better when it comes to bug bounty. It's just that you should be willing to say these are the bugs that we think are real risks. And these are the bugs that we don't think are real risks. And you should have the ability to handle that volume. The nice thing is that with startups like BugCrowd and HackerOne, you have someone who will come in and sort of like triage
Starting point is 00:28:32 and help you run the bug bounty. So that can be really helpful. But I would strongly recommend bug bounty to every SaaS software company anywhere. It's been the most... You want continuous security testing, right? We are continuously developing software. We are continuously updating. We're pushing every day. So the concept of once a year pen test is weird. Like that doesn't make any
Starting point is 00:28:53 sense. Like, for most startups, your product will completely change within a year. And so doing a pen test once a year is just a weird outcome. You should still do that, but that's just not sufficient. Yeah, that makes a lot of sense. I did not realize that, yeah, that's how you break it up, right? Like you have a vulnerability disclosure program and a bug bounty program. But then what's the incentive of anybody finding bugs
Starting point is 00:29:16 if you're not going to pay them for it? Maybe a stupid question, but what do you think? I mean, why do people report bugs to you in sort of uptime or corner cases right now? Like I think maybe your users find something that is useful or maybe someone is trying to learn something about your software. There are many security researchers
Starting point is 00:29:33 who are just trying to learn. There's also just like credit points. You can give them t-shirts, you can give them a thank you, but sort of, I think a lot of people out there who are trying to learn and do the right thing. And so I think the sort of, don't underestimate the value of like good Samaritans.
Starting point is 00:29:48 And the reason I explicitly wanted to flag vulnerability disclosure policy is that historically there were companies that just said, trying to find bugs in our software is illegal. And that's just not cool according to me. I'm not going to name the company, but I think I have an idea of who
Starting point is 00:30:05 you're talking about um so the the idea of you bringing up like you know hacker one and bug crowd as these companies that triage bugs for you i didn't know that that's a thing i thought they just provided like a software platform where people can upload bugs so that's already interesting are there any other both like, you know, technical, but also like interesting products that you notice in the security space that are like, you know, growing and can help you like, fix some of that, like, one thing that I'm familiar with is like material security, the fact that they scan your email, and tell you, you know, it's a phishing link, and you probably don't want to click on it.
Starting point is 00:30:41 I think that's pretty interesting. Is there any other interesting technology that's coming out that is being maybe even productized by startups or there's just open source technology that's like pretty new and engineers should know about or should start thinking about? Yeah, I mean, I would first say the startups like HackerOne and BugCrowd
Starting point is 00:31:02 aren't just about triage. I would say they help you on your bug bounty journey. A typical journey would be, hey, launch in private mode with only invited hackers for 15 days and then invite 10 hackers, then 30 hackers, then do it for continuously open private bug bounty for 50 hackers, then 200 hackers, and then you go to public bounty and then they help you with triage and stuff like that. So I think just having an expert who knows how to launch a bug bounty can make that experience
Starting point is 00:31:30 much easier to roll out. And so I see them as partners, right? Earlier, it was only a few people who had actually done this bug bounty journey. So I think these partners can be really powerful there. I think there's a lot of technology. I mean, security right now is super interesting. I mean, you work at Vanta, but making compliance automated is super, super powerful and can be something really attractive based on the company you're at.
Starting point is 00:31:58 If you're an enterprise product company, you want compliance to be relatively painless, especially if you're a startup. So Vanta is super exciting. I think Material Security is a product that I really love. It's super interesting what they're doing. Email is one of the most common ways companies get hacked. I think the biggest one that has relatively less awareness is WebAuthn as a way to authenticate. The most common way companies get hacked is password reuse and account takeover, which is that one of your employees has reused a password on some other site, and then that password
Starting point is 00:32:33 was used to log into your employee's account and then breach data and stuff. So I think phishing and password reuse are the most common ways company breaches happen. And WebAuthn is this technique, a way to do multi-factor auth, that is unphishable. The technical details are not important, although they're pretty cool and I would recommend reading up on it. But basically, WebAuthn ensures that you cannot be phished in your MFA. And that's the one thing that I tell everyone to enable because it's just a very strong boundary. Google had a study where they said,
Starting point is 00:33:08 once they switched on WebAuthn internally, they just have had no phishing incidents. And that's at Google scale, with so many employees and a company that is continuously targeted. So I think WebAuthn is probably the coolest tech. And with modern Macs and phones, you can use Touch ID as a WebAuthn provider. You can use Face ID as a WebAuthn provider.
Starting point is 00:33:29 And so it has become really usable in both G Suite, Microsoft, Okta. And so that's another technology that I think is a game changer that everyone should enable. And then the other one is just detection and response. There's a lot of tools around detecting weird activity on your networks. So Datadog has a security monitoring product. Panther Labs has a security monitoring product. There are lots of products in that space. But just anything that slurps up your CloudTrail, Okta, G Suite logs and looks for suspicious
Starting point is 00:34:00 patterns is something that's really powerful that everyone should have. And there, again, a sort of mindset change I encourage is: there is looking for badness, which everyone should do. But another thing to do is also enforce good, that we intend our configuration to look like this. And so you can look at the logs and say, our intended goal is that the only thing listening on the internet is the ALB, right?
Starting point is 00:34:26 Like that is a goal that I think most products can aim for. And so if you intend to have only ALB listening on the network, then you can have an alert that says, if my cloud trail shows anything other than the ALB getting packets from the internet, that is an alert that I should page on. So sort of, that might not be malicious activity, but still page on the fact that like an intended invariant is being broken. So yeah, those are some of the technical ideas that I think are actually pretty interesting nowadays.
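The "page when an intended invariant breaks" idea Dev describes can be sketched as a small check over log events. This is a toy model only; the event shape below is invented for illustration, and a real monitor would consume actual VPC Flow Logs or CloudTrail records, which are much richer:

```python
# Toy invariant monitor: "only the ALB may receive traffic from the internet."
# The event format here is hypothetical, not a real CloudTrail/Flow Logs schema.

def is_internal(ip: str) -> bool:
    """Crude RFC 1918 private-range check, good enough for this sketch."""
    return ip.startswith(("10.", "192.168.")) or any(
        ip.startswith(f"172.{n}.") for n in range(16, 32)
    )

def broken_invariants(events, allowed_targets=frozenset({"alb"})):
    """Return events where something other than an allowed target
    accepted a connection from the public internet."""
    alerts = []
    for ev in events:
        if not is_internal(ev["src_ip"]) and ev["target"] not in allowed_targets:
            alerts.append(ev)
    return alerts

events = [
    {"src_ip": "203.0.113.7",  "target": "alb"},  # expected: internet -> ALB
    {"src_ip": "10.0.0.5",     "target": "db"},   # fine: internal traffic
    {"src_ip": "198.51.100.2", "target": "db"},   # bad: internet -> database
]

for alert in broken_invariants(events):
    # As Dev notes, this may not be malicious, but it breaks the invariant,
    # so it is worth paging on.
    print(f"PAGE: {alert['src_ip']} reached {alert['target']} directly")
```

The point is the inversion: instead of only matching known-bad signatures, you alert on any deviation from the intended configuration.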
Starting point is 00:34:54 I mean, I can go on forever. There's so much happening. It's a great space to be in right now. Yeah. Maybe you can talk for two minutes on the technical details of WebAuthn? You mentioned the fact that it's unphishable MFA. And the only thing I'm familiar with is SMS and SIM-jacking and how easily that's breakable, and authenticator apps. How do you make an unphishable MFA? It sounds so interesting. Yeah. The core idea behind WebAuthn is, so let's talk about why
Starting point is 00:35:19 MFA is phishable, right? So forget SIM-jacking. Let's say you go to phishingsite.com, and you get tricked, and you think you're logging into Google. You type in your username and password, and you type in the six-digit code you got over SMS, and the attacker just forwards those along to Google and can reuse them to log into Google, right? There's nothing really stopping the attacker from just taking whatever you type in, your password and the six-digit code, sending them across to Google, and just logging in as you. So that's why most MFA is seen as phishable. Maybe it's the authenticator app, maybe it's SMS, it doesn't matter. It's phishable. What WebAuthn did was say the second multi-factor auth step is done
Starting point is 00:35:59 through a hardware cryptographic step. And so what happens in the second MFA step in WebAuthn is the site asks some WebAuthn provider on your computer, like your Touch Bar or your YubiKey, to sign a secret. And in the Touch Bar or the YubiKey, the key is stored tied to the domain that you're logging into. So when you register your WebAuthn key, you say, hey, I'm signing with this key, and that key is tied to the domain that you're registering it on. So your YubiKey will remember that this is the key to sign things for, you know, accounts.google.com. And so if you're on some other website, even if you think you're trying to log into Google, when you tap your YubiKey, the YubiKey will just say,
Starting point is 00:36:47 well, this is another-website.com or phishingsite.com, so I can't unlock the accounts.google.com key used for signing the login request. And so that's the key idea behind WebAuthn: what domain you're logging into is tied into the WebAuthn protocol. And it doesn't matter what the human thinks the current website is.
Starting point is 00:37:04 The URL of the current website is sent from the browser to the YubiKey or the Touch Bar. And the only way to get to the accounts.google.com key is by being on accounts.google.com. Now, it's not magic; malware on your computer can always lie to your YubiKey and say, hey, I'm actually on accounts.google.com. But malware on your computer is game over anyhow. In the case where you're on a phishing site, there's just no way around it: your browser knows you're on phishingsite.com. Your browser doesn't get confused that it's on accounts.google. So it will always tell your YubiKey, hey, it's phishingsite.com, and the YubiKey will refuse to unlock the accounts.google.com MFA token. That is pretty interesting. And it's pretty smart.
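The contrast between a relayable one-time code and WebAuthn's origin binding can be modeled in a few lines. This is strictly a toy illustration of the concept: real WebAuthn uses public-key signatures over server-issued challenges, and all class and method names here are invented.

```python
# Toy model of why WebAuthn resists phishing while a relayed OTP does not.
# Real WebAuthn involves challenge signing; this only captures origin binding.

class TotpUser:
    def second_factor(self, code: str, typed_on: str) -> str:
        # A six-digit code is just text: the user types it on whatever site
        # they *believe* is Google, and it still works when the attacker
        # relays it to the real site. The origin plays no role.
        return code

class WebAuthnKey:
    def __init__(self):
        self.keys = {}  # origin -> per-site credential

    def register(self, origin: str) -> None:
        self.keys[origin] = f"credential-for-{origin}"

    def sign(self, browser_origin: str) -> str:
        # The *browser* reports the true origin; the authenticator only
        # unlocks the credential registered for that exact origin.
        if browser_origin not in self.keys:
            raise ValueError(f"no credential for {browser_origin}")
        return self.keys[browser_origin]

# Phishing scenario: the user is really on phishingsite.example.
totp = TotpUser()
relayed = totp.second_factor("123456", typed_on="phishingsite.example")
print(relayed)  # attacker forwards the code to the real site, and it works

key = WebAuthnKey()
key.register("accounts.google.com")
try:
    key.sign("phishingsite.example")  # browser truthfully reports the origin
except ValueError as err:
    print("phish blocked:", err)
```

The user's belief about which site they are on never enters the WebAuthn path, which is exactly the property Dev is describing.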
Starting point is 00:37:47 The fact that MFA is always phishable, I just never thought about it like that, but that makes a lot of sense. Yeah. Yeah. I mean, in the famous 2016 hack of the Democratic Committee, one of the things that's not mentioned is they had MFA, and the person who was actually phished had emailed support being like,
Starting point is 00:38:10 hey, is this legitimate? And got a response from support saying, yes, it is legitimate, you can log in. So they'd done everything that you would expect them to do in 2016, but they still got phished and hacked, right? Because it's hard. This stuff is hard. And the only way to really do it is through these flows that are phish-proof. So I think turning on WebAuthn is just the most powerful thing you can
Starting point is 00:38:30 do for security. And, you know, all political campaigns last year, and some of the best companies you can think of, like Google, they have this turned on. And the nice thing is, in the last four years, it has become really usable. So I encourage everyone to turn it on. Yeah. So anybody with a new MacBook and the latest version of Chrome should be able to turn it on for G Suite and maybe Okta? Yeah, G Suite, Okta. On a new MacBook, you can use it in Safari and Chrome. On Firefox, you'll need a YubiKey.
Starting point is 00:39:01 And the really nice thing that makes it really usable is that on your phone, you can use Face ID as a WebAuthn provider. So it's Touch ID on your laptop, Face ID on your phone. And so it's really usable. You don't even notice it as painful. I mean, I like to use it much more than any of the other methods. I don't have to pick up a code on my phone,
Starting point is 00:39:18 type six digits, it's just one tap and you're good to go. It's just fantastic and so much more secure. It's very rare to have tech that's like orders of magnitude more secure and more usable and faster. So yeah, super
Starting point is 00:39:29 exciting. Yeah. And that's a perfect segue to what I wanted to talk about next around like the balance of security
Starting point is 00:39:35 and usability, right? As you said, WebAuthn is both more secure and more usable. So it's like a no-brainer.
Starting point is 00:39:42 But a lot of times, one of the examples you brought up earlier was that nothing should be listening on the public internet other than the ALB. That's a very simple assertion that you can have. But then often you'll find engineers,
Starting point is 00:39:56 I want to SSH into prod in order to debug my containers and stuff like that. And you have to think about this trade-off, and you also have to deploy what's practical. Like, if 70% of your engineering team is SSHing into containers regularly, it's very unfortunate, but how do you, I guess, align organizational incentives? And how should you think about deploying security best practices practically in your organization? Like, you know there's a gold standard you should move towards; how should you think about moving towards that, basically? Yeah, I mean, I think there is not really a trade-off
Starting point is 00:40:30 there, right? If something that is secure is not usable, it's not going to be used, right? And then it's not secure, right? So it doesn't really matter. The classic example, and I worked on this problem in grad school, is SSL warnings in browsers. Those are the classic example of, yeah, we are showing warnings, but everyone was clicking through. So you haven't actually achieved security. You can talk all you want about the crypto math you did around SSL verification, but if everyone is clicking through every warning, how much have you actually achieved in terms of security? So I don't think there's a trade-off there. You have to make your secure technology actually work for day-to-day needs. So the concrete example you gave around SSH,
Starting point is 00:41:07 you can have ALB listening and still manage SSH. So you can SSH in through SSM through the AWS console, and then it's pretty transparent. And then SSM gives you a bunch of other security guarantees also. So that's become increasingly the sort of secure default that we encourage organizations to use with AWS. And so, yeah, that's doable.
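The SSM pattern Dev mentions replaces inbound SSH with sessions brokered through AWS APIs, so nothing but the ALB needs to listen on the internet. As a sketch, the session can be launched with the AWS CLI's `ssm start-session` command; the instance ID below is a placeholder, and actually opening the session requires AWS credentials and the Session Manager plugin:

```python
# Sketch: opening a shell on an instance via SSM Session Manager instead of
# exposing port 22. The instance ID is a placeholder for illustration.

def ssm_session_command(instance_id: str, profile: str = "default") -> list:
    """Build the AWS CLI invocation for an SSM session (no inbound ports)."""
    return [
        "aws", "ssm", "start-session",
        "--target", instance_id,
        "--profile", profile,
    ]

cmd = ssm_session_command("i-0123456789abcdef0")
print(" ".join(cmd))
# To actually open the session (needs credentials + session-manager-plugin):
#   import subprocess; subprocess.run(cmd)
```

Access then flows through IAM and is logged centrally, which is part of the "other security guarantees" Dev alludes to.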
Starting point is 00:41:27 And it's actually more usable than having to SSH and figure out configs and then check that the public key matches and all that stuff. You can do much stronger forms of authentication using that. So I always look for how can I make it work. Usability is not a thing I trade off against security. The solution has to be usable. And personally, that's the intellectual joy. That's the intellectually hard problem: how do I make a solution that is secure and usable? Making a solution
Starting point is 00:41:55 that is secure but not usable is easy: just disconnect from the internet. So yeah, I don't see it as a trade-off. I think a practical, meaningful, deployable solution is just table stakes. Figuring out how to secure that is the fun part. It's the challenging part. So yeah, I mean, I can keep talking forever for any example you give, but I think in the abstract,
Starting point is 00:42:16 that's the best thing I can say. Yeah. Yeah. I guess another way to frame what you said is customer-obsessed security, in a sense: you have to develop stuff that customers like using just as much or even more, and that is secure by default, rather than thinking of it as a trade-off. Yeah. I mean, there are many, many ways to call
Starting point is 00:42:37 it, right? It's customer-obsessed, or you can call it solve for yes. You know, someone comes to the security team with a problem, and the team should solve for yes; security teams should enable. The other one is just an outcome-obsessed security team. It's not about saying, oh, I have a security check or I have a policy. You actually look at the outcomes. And going back to the SSL warnings example, it's only when people started measuring, oh, how many times are users actually clicking through the warning, and the numbers came back at like 80%, 70%, that the security community
Starting point is 00:43:12 in browsers realized, oh, we should redesign these warnings and try to reduce the number of warnings, and now they're down at 10%. So that's been one of the biggest wins when it comes to TLS security: the click-through rates have crashed. So I think outcome-obsessed means the real security comes when people are actually using the secure method. If you tell yourself, oh, we have shipped the secure method, but everyone's ignoring it, then that's not an outcome. Then the shipping of the secure method was just a cost. So yeah, it's outcome-obsessed, customer-obsessed, values of enabling and solving for yes,
Starting point is 00:43:46 call it whatever you want. But to me, it's just the right thing to do. Okay, that's great. And then adding a little bit to one more point you brought up previously, around this trend in security and regulations. And that might be one of the reasons why there's increased demand for security from companies, as well as customers themselves getting more interested. Do you think the path forward is customers who are more interested
Starting point is 00:44:14 in security, and that's why security requirements and security technology and security products get better? Or do you think it's better to actually have some kind of regulations, more SOC 2 type things, more GDPR type things? Do you think it's better for the free market to solve for this, or is it better for regulation to come in and have things like HIPAA, so that people have workstations locked within 15 minutes of using them? I think there's space for both. I think regulations are just hard, so I don't claim to have an answer here. I think it is very easy to have regulations that achieve the opposite outcome of what you intend, which is that it actually entrenches incumbents, in a way where only the incumbents can meet all the checkboxes and stuff. And they actually stop caring about security. They only care about meeting regulations
Starting point is 00:45:05 that are the things they require. So I think that's the tricky bit. I think there are some regulations, for example, that I'm a big fan of. I talked about the vulnerability disclosure policy. And I'm a big fan of what some people call the good Samaritan regulation, which says that it shouldn't require a company to come out and say, oh, we'll have a vulnerability disclosure policy. That should
Starting point is 00:45:28 just be the default. That should just be the law. Imagine that in the physical world, you see something really sensitive, and there's no lock or key, it's an open gate. If you went up to them and said, hey, you don't have any gates, anyone can walk in and steal this, that wouldn't be seen as a crime. But historically, that was seen as something companies would push back on. I think the good Samaritan regulation or law would say it's okay to be a good Samaritan and come and tell people, hey, you need to fix this, or, hey, you have this bug where anyone can do this bad thing, stuff like that. So I'm a fan of those regulations. But I think, I mean, the question you asked around
Starting point is 00:46:04 regulation versus free market is just such a deep question that I don't think I can give an answer there. But just be cautious about which regulations, and about the secondary and tertiary effects. Because I think a lot of the time, regulations have actually ended up meaning that incumbents stop caring about security and start caring about meeting regulations, and then the outcomes for everyone are bad. So yeah. I think the favorite example on Twitter is the GDPR cookie banners. They thought that it would help with privacy, but now everybody just has to click okay, and you still have those cookie banners. And I don't even know, for half the companies who've implemented cookie banners,
Starting point is 00:46:49 if you say no, whether they actually respect that setting or not. So yeah, there's always a double-edged sword with regulations. Yeah, I mean, I think that's the thing, and it's hard work. I empathize with the people who wrote the regulation; it comes with good intentions. It's hard to regulate something as complicated as tech. But, you know, something needs to change. I don't know what, but something needs to change. I think we should just be careful and think about the secondary and tertiary effects. But I don't have an answer.
Starting point is 00:47:20 It's a very, very hard problem. Okay. And maybe a final few questions to wrap up, and maybe this is actually the last one. If I'm a startup today, is there a concrete set of tools or a list that you recommend people follow? Like, if somebody asks you, you know, I'm a startup, what should I do for security? I care about security, I just don't know where to start. What would you tell them? I mean, going back to what I said, there's a level of security which is correctness, that you only return the data you intended. You should just
Starting point is 00:47:55 start and care about it from the very beginning. Well, beyond that, I think there are just so many aspects to security. I think understanding what your business needs and what is going to be the risk that your business will have is really important. And then mitigate those. There's a class of risks that I think everyone can worry about, which is, you know, have WebAuthn, have SSO, so as to minimize phishing risk and account takeover risk. But whether or not you need to worry about
Starting point is 00:48:37 abuse prevention or hate and harassment on your platform depends on what product you're building. Whether or not you need to worry about or think about SOC 2 compliance and ISO compliance and PCI compliance depends on the product you're building. So really take a moment to talk to an expert. You don't need to hire a CISO, but talk to someone experienced. Talk to your VCs around what are the things and outcomes you want a security team for. And then go from there is something I would recommend. But other than that, I think there are a lot of lists. There's the SOC 2 Starting Seven. There's a lot of guides. I mean, I'm happy to share with anyone who reaches out, but there's a lot of standard checklists of things to do. But you really have to understand what's needed for your product and what's the risk and what's important for your product. And the good thing is no one's better at it than you as the startup founder. Someone else can inform you about potential things, but what are the big risks? What are
Starting point is 00:49:22 the things that will matter to your customers? The best person to know that is you and your customers, not a security expert who doesn't understand your product. So I think just take some time to think about it and think about what's critical. That makes sense. So you have to basically understand what your key risks are, understand what kind of issues you want to protect against. You can read some of these checklists just to maybe get the bread and butter and the standard stuff in. And then you really have to think about the specific risks that you're going to deal with and how you're going to mitigate those, and have some kind of plan for that. Exactly. Exactly. I mean, there's a bunch of basic checklist items, bread and butter, as you called it, which is like, you know, have HTTPS everywhere. Don't use HTTP,
Starting point is 00:50:00 use a safe framework. I would say, you know, don't use memory-unsafe languages; use Python and Go and Rust. Don't use C and C++ on your server side. You know, there's a bunch of things that you're unlikely to do anyhow, but there are a lot of resources online talking about it. You know, I talked about the SOC 2 Starting Seven.
Starting point is 00:50:17 There's also the Starting Up Security guide from Magoo online. That's also really good. So a bunch of ideas there. But really, I think beyond that, beyond the bread and butter, what your security program needs to focus on and what is important really depends on your business. And that's fine. There's nothing wrong with it. I'd say, and Magoo has a blog post talking about this too, don't over-fixate on getting a CISO
Starting point is 00:50:41 hire. Very often for a startup, a CISO might not be the right hire. You likely don't need a CISO hire very often for a startup, a CISO might not be the right hire. You likely don't need a CISO. You need someone who can like help your engineering teams. And that might be an external advisor. That might be a consultant. You don't necessarily need a CISO. And so think more about what you would want from a CISO, especially an executive level hire.
Starting point is 00:51:01 And yeah. Yeah, I think this is a good stopping point. I have hundreds more questions I want to ask you, about at what point you should start thinking about red teams and all of that stuff, and why should you have a red team? Is that mainly for incentives? But I feel like this is a good listenable
Starting point is 00:51:18 amount of content for somebody, so I'm just going to say thank you for being a guest. This was a lot of fun, and I hope to have you again for round 2, because I have so many more questions.
