The Changelog: Software Development, Open Source - Lessons from 5 years of startup code audits (Interview)

Episode Date: June 24, 2022

Adam and Jerod are joined by Ken Kantzer, co-founder of PKC Security. Ken and his team performed upwards of 20 code audits on well-funded startups. Now that it's 7 or 8 years later, he wrote up 16 surprising observations and things he learned looking back at the experience. We gotta discuss 'em all!

Transcript
Starting point is 00:00:00 Welcome, everyone. I'm Jerod Santo, and you're listening to The Changelog, featuring conversations with the hackers, leaders, and innovators of the software world. On this episode, Adam and I are joined by Ken Kantzer, co-founder of PKC Security. Ken and his team performed upwards of 20 code audits on well-funded startups, and now that it's seven or eight years later, he wrote up 16 surprising observations and things he learned looking back at the experience. Ken was gracious enough to sit down with us and talk through all 16 of his findings, which warms my completionist heart. I think you're going to enjoy this one. Quick mention of our partners
Starting point is 00:00:45 at Fastly. Everything we ship here at Changelog is fast because Fastly is fast. Check them out at Fastly.com. Okay, Ken Kantzer on the Changelog. Here we go. This episode is brought to you by Sentry. Build better software faster. Diagnose, fix, and optimize the performance of your code. More than a million developers in 68,000 organizations already use Sentry, and that includes us. Here's the easiest way to try Sentry. Head to Sentry.io slash demo slash sandbox. That is a fully functional version of Sentry that you can poke at. And best of all, our listeners
Starting point is 00:01:26 get the team plan for free for three months at Sentry.io and use the code changelog when you sign up. Again, Sentry.io and use the code changelog. so ken you've done a lot of audits why don't you tell us what that audit process looks like what's the step-by-step of an audit? Yeah, sure. And the process kind of evolved over time as we did more of them. So I'll kind of focus on where we ended up as the most evolved form of the audit. You know, at first getting access to source control is really important. You know, making sure that we had
Starting point is 00:02:24 all the repos within scope of the audit was super important. Usually know, making sure that we had all the repos within scope of the audit was super important. Usually our first step in the audit was, if you've ever seen Lord of the Rings, you know the scene in Helm's Deep where they have that guy who like kind of catapults over the wall, like the berserker guy. So we'd always nominate one person on a codot
Starting point is 00:02:43 to be the berserker. And their job was to get a local dev environment running as quickly as possible for whatever code we were testing. And rather than have everyone kind of simultaneously struggle through that, we'd have one person do it, write up instructions how a lot of these companies that we were auditing were, like I said, series A to C. They didn't have good processes in place a lot of times. And so we'd have that berserker come up with the local environment. We actually started having them build it within a VM, like VirtualBox or something so that they can kind of share it with us and we could skip that part. And then at that point, you know, I think first step was just to run some like very basic reps on the code bases just to see what type of thing we were dealing with.
Starting point is 00:03:31 Like, is this going to be a pretty messy code audit? Is this going to be pretty clean? And once we had that general context, we'd usually meet with the lead engineers on the project, get them to walk us through the structure and architecture of the code base, the big moving pieces, the big third-party services that were being used. And then after that, we were pretty hands-off with them, with the engineering team. We usually try to avoid getting too much into their daily cycles, letting them focus. And at that point, our job was basically, let's cover the OAuth's top 10 as quickly as possible, and then learn as much as we can about this code base and start finding bugs. And you never really could know what to expect. We didn't focus on any particular framework. So a lot of where our
Starting point is 00:04:17 research went was led by simply what framework we were dealing with and kind of rolled the punches at that point. What was the purpose of the audit itself? Was it security focus? Was it contextual focus? Was it how fast could you get the dev environments focused? What was some of the main points you were trying to gather from the audit? What was the, I suppose, the deliverable, so to speak? What was it like, here's your 10 facts about your code base? What was the... Yeah. So all our audits were focused on the security side. The output for the audit was a report typical to what you'd see in like a pen test report. I would say as we went along, more and more people asked us for kind of just like our
Starting point is 00:04:58 independent assessment on things outside of security as well. So that's when things got really interesting. We usually would produce a security-focused report and then usually a report just of general observations. Like, hey, you guys are doing this really well. That seems awesome. Or like, hey, I noticed you're using CK Editor and it's a complete disaster for you in terms of security. This is what we'd recommend there.
Starting point is 00:05:26 It would get more into the consulting, consultative side. I think people were just interested in our thoughts after having looked at the whole code base. That's about perspective too, right? The perspective changes because you've got the team that produced the code base. Some there still, some not there still. Early stage of the company.
Starting point is 00:05:44 And their focus is on direction of product, the code base. Some there still, some not there still, early stage of the company. And their focus is on direction of product, not so much like overall holistic health of the code base at large. And the perspective you all bring as a third party is like all this contextual knowledge about security, but then also best practices because security kind of comes from best practices depending upon the argument you might be in or not. But that perspective is like, it's a different perspective. It's like, I'm too close to the problem. I can't see the problem. And you're more like, I'm farther away so I can see all of the problem. And I can give you a more detail oriented, as you said, output to what's actually going on.
Starting point is 00:06:19 Yeah, absolutely. I would say the other thing is there's a lot of insecurity that we saw in early stage startups. And that is kind of the genesis of some of the observations I made. Like a lot of people are like, oh, like we have a pretty small team. Like, is that, is that going to be okay for us? Or like, oh, like microservices are really hot right now. We're like this kind of boring monolith on like a pretty boring tech. Like, should I be doing something else? So a lot of it was, was actually that kind of boring monolith on like a pretty boring tech like should i be doing something else so a lot of it was was actually that kind of concern and uh yeah that was kind of the genesis of some of the observations of just yeah seeing that in practice and be like no actually you're on a monolith and this is awesome you guys should keep doing this forever because this is a great product sometimes you need somebody else to give you that confidence boost just to reaffirm like, yeah, you're doing all right, you know, because you're so in the weeds and you're all internal focus. And it's like, am I doing this right?
Starting point is 00:07:10 I don't know. I'm just going to keep heading west. And it's like having somebody else tell you, affirm your decisions or tell you that was a terrible decision. Let's change it. Like, that's very helpful. Right. How do you guys know when you're done?
Starting point is 00:07:21 Like, do you just set like a maximum number of labor hours you're going to spend? Or when you like have a trickle of new findings versus a lot of new findings, how do you know when when you're done? Do you just set a maximum number of labor hours you're going to spend? Or when you have a trickle of new findings versus a lot of new findings, how do you know when an audit's finished? We sold the audit in blocks of hours of 40. So you could choose minimum 40 all the way up. Probably no one really went above 120. And what we found is at that point, it's really diminishing returns. It's somewhere between 40 and 120. So you wrote up this awesome article on your observations,
Starting point is 00:07:51 your findings. As you said, you've done a bunch of these always startups. It's been seven to eight years since you've done a lot of these. And now it's like a time to reflect and look back. And these are your lessons learned. You shared a bunch of lessons. We're going to talk through as many of them as we can. We'll see how it goes. And I would love to just dive right in. So the first thing that you talk about, your number one finding, and I believe these are in somewhat of an order, not like best to worst, but maybe highest level to lowest level, and maybe sensational to less sensational. We'll just work vertically down. Listeners can check the show notes if they want to read along.
Starting point is 00:08:27 You don't need hundreds of engineers to build a great product, which you've also wrote about this. But I'm sure you had a lot of startups that had a bunch of engineers and you probably had some that had a handful. And it wasn't like, you couldn't draw that correlation.
Starting point is 00:08:41 Is that what you're saying? Yeah, exactly. There was no correlation between the number of developers working on a product and almost anything about the product in terms of quality and features. I might even go so far to say that maybe if anything, and there was a slight correlation, again, this is not statistical, but it would be that the smaller teams were really punching above their weight. And I was kind of surprised by that. I think there's probably an organizational aspect to this one too, which is, I don't know if you guys feel this, but like, especially with like just the crazy startup scene in the last, you know, maybe 10
Starting point is 00:09:20 years, I think a lot of engineering organizations really felt very pressured to grow rapidly. They felt like if you didn't have a big engineering team, you weren't successful, like a priori. And I think that was something we came across pretty often. And that's also kind of what I'm trying to speak to in this first observation. There's also the more people involved there's more of an opportunity for let's just call it low quality contributors to kind of slip through the cracks and not have to perform at a level that they would have to if there was less people on the team just out of pure necessity now maybe those people also end up burning out
Starting point is 00:10:02 because they're working too hard and etc. There's a lot of different factors, but I can see where in large engineering teams, you'll have certain contributors who carry maybe the whole team, maybe a few people on their team. Whereas if there's just less people around, that just doesn't work. Yeah, less places to hide. So in this output you put back to them, as part of the audit completion, so this is a learning for you in retrospect, how would this point permeate into the report? Would you tell them, hey, you have way too many engineers or you have these security issues or these concerns because you just have too many? like how would this learning permeate back into
Starting point is 00:10:45 a report, for example? Yeah, so it wouldn't. And this is an example of something that now that I'm that I'm observing now that it's been seven or eight years since a lot of these audits were done. I don't know if during the time I was auditing, I would have come up with this one, honestly. I think it was something that at the time I was like, oh, like, like maybe it was like scratching the back of the mind, but I certainly didn't feel confident enough to be going to, you know, the CTO and being like, you have 50% too many engineers. It's just like, probably out of scope for like, that's a cataclysmic observation to be making. And I think it's really retrospect that's at least for that one, making me reach that observation. So, and it seems like that
Starting point is 00:11:30 retrospect can look back and see which companies were successful and which ones weren't. Exactly. Are you talking about the current state of the product when you audited it? No, you're totally right. It's a lot of it's like now that seven or eight years later, these companies have kind of evolved. Some of them have faded away. Some have been acquired. Some are now very successful. Gotcha. Can we clarify the distance to in time from the last audit? You say some of them were seven to eight years ago, not all of them. So like what's the closest in distance and what's the furthest in distance just to give a clarification on time distance from when these took place.
Starting point is 00:12:08 Yeah, so we started doing code audits in 2014. So the furthest out we are from these companies would be eight years. And I left PKC for my current role two and a half years ago. So I would say probably three to seven years later for the majority of these audits. Okay. Now, could you draw a correlation on team size to product surface area? Like maybe not like lines of code count, but maybe like number of microservices or maybe in the case of a monolith lines of code or like, is it bigger team, bigger product surface area, or does that not even correlate in your experience?
Starting point is 00:12:46 It sometimes did, but you would think it would always, it didn't always. That was what I'm still kind of scratching my head a little bit on. Sometimes like we'd start the audit and there were like a lot of developers. And then we'd look at the code and be a little bit surprised. We're just like, there's not that much code here. Like a little bit, like literally, what are these people doing? And what do you do here? You didn't want to ask that obvious of a question. No, no.
Starting point is 00:13:14 I mean, like, I think we had maybe even if anything, a little bit of too much respect for what was going on. We're like, I'm sure there's a good justification here. There's got to be because developers are expensive. It's not like, you know, it's easy to have a lot and not notice or something. So I would say, I mean, almost going back to your point earlier, Adam, when you have more developers, it's more likely that you have microservices, for example, because there's some extent to which choosing your architecture is informed by your actual engineering organization.
Starting point is 00:13:45 Like the larger teams you have, the more microservices and the overhead that that requires begins to make sense. So maybe there was a little bit of correlation with like complexity of infrastructure more so than code. I almost wonder if there's like a wasteful hiring too, because when you're in startup, like in your Series A, Series B, there's a mantra, always be hiring, right? Like you're always hiring and so maybe you're hiring too much and there could be this aspect of wasteful hiring. And these types of audits, while they may be security-focused in origination, maybe it's a wisdom practice for some of these startups to consider this as like a must-do-it after every series, like Series A, Series B, or every raise to sort of like get a glimpse of a holistic approach of what's happening. Because this is a retrospective learning in your part. You didn't learn it.
Starting point is 00:14:39 I guess you didn't uncover it in the process of the audit but you can say well you might have too many engineers or you might have too much of this or too much of that because right that's a learning you've kind of examined from this but just wondering if there's like a wasteful hiring aspect of this because i mean always be hiring it can't always be good it seems like that's the kind of assessment that not a infoseSec specialist would make. Maybe they could try, but that seems like the kind of consultant that would be doing other things at maybe a higher, more organizational level and could use the data from an audit to help inform that.
Starting point is 00:15:19 But I don't know. Isn't there even a law about shipping your org chart, which is what you're referring to there, Ken, with microservices? Conway's Law? Yeah, that Conway's Law. Yeah, the propensity for your product to basically be an outgrowth of the shape of your business, which is just kind of a weird phenomenon that seems to hold true. At least it sounds like it. Is that what you found? Yeah, I think that Conway's Law is a really deep statement about how organizations and technology kind of interact with each other. And it's definitely very informative.
Starting point is 00:15:54 I think you also get really interesting organizational dynamics, like maybe between two teams, two engineering teams, front and back end, two different services that may have a lot of overlap that they need to resolve. And you can see kind of almost by looking at the architecture and how the code bases are laid out, who's worked on what, you get a little bit of the history of the organization as well as just the straight-up technical situation at the present time. Well, link up Conway's Law in the notes. I just found the Wikipedia and first coined in 1967. To me, it's just amazing that he could have that insight and it could hold true for so long.
Starting point is 00:16:38 Most of my insights don't hold true for more than 30, 45 seconds. But Conway sure drilled that one. Yeah. If we're trying to get through all 16 of those, we're doing a poor job because we're on one so far. Yeah, let's move on. Let's move on. Not trying to rush this, but good conversation, but we're on one.
Starting point is 00:16:55 Okay. So two, simple, outperformed, smart, counterintuitive, probably an ego check for many of us. But tell us about this. Even yourself, as you call yourself a self-admitted elitist, turns out smart. Maybe that aligns with clever, which tends to bite us. Tell us about this finding. Yeah. So this one's interesting because I think reading through some of the comments on this blog post, I think this one was actually a little bit misunderstood. I got a lot of comments that were like, oh yeah, keep it simple, stupid, like kiss, totally right. Yeah, let's just go with that. I was saying something that I thought would get a little bit more
Starting point is 00:17:35 controversial, which is I was actually talking about engineering cultures. So like not just like a engineering principle, but cultures that valued simplicity and maybe to put it really bluntly, like kind of scorned and had a little bit of like a chip on their shoulder for things that were complicated were better than organizations that I think valued what I'll call like rigor. Which is, this should be controversial because like rigor is like, you want a rigorous engineering culture, don't you? Like, why would you not want a rigorous engineering culture when you want people who are very careful and who are planning ahead? So like, that's kind of my form, my current formulation of it. And I think that is like, I think we can all agree that keep it simple, stupid is a great principle, but I think it's less clear that you want a culture of simplicity over a culture of rigor. I think that might rub people the wrong way to put it that way. But that's what I found. That's why it was so surprising. likes working with smart people likes and is drawn to really complex problems that's where
Starting point is 00:18:45 i'm like oh i don't like that but uh truth is sometimes not exactly what you like yeah i don't naturally draw those as antithesis though simple and rigor i think simple and complex rigor to me is like applied strictness or thorough i think you can be simple and thorough. So maybe that's my disconnect from what you're saying. I do know that simplicity is difficult. And so we think, keep it simple, stupid. But it's actually a lot harder. You actually have to maybe rigorously keep it simple
Starting point is 00:19:19 in certain ways. Because moving fast, as startups do, and changing a lot, as startups do right you're trying to find that product market fit those things are like against simplicity right they're against fast moving changing often switching directions and that usually leaves a wake of either complexity or impedance mismatches or bad API designs that never got deleted or whatever it is that end up being complex. So just kind of a stream of consciousness there.
Starting point is 00:19:52 But I'm not sure if I think of rigor and simple as against each other necessarily. Definition alone, though, agrees with you, Jared, that rigor is not the opposite of that. It says the quality of being extremely thorough, exhaustive, or accurate. So being extremely thorough is, as you said, Ken, is a great quality for an engineering department.
Starting point is 00:20:12 Simple, I think, is not the same as rigor. Or not the opposite. Yeah, exactly. It's not the same. It's the opposite. They can be simpatico. Yeah. So maybe, and we're getting hung up on a semantic debate about a word ken but uh that's what we do here welcome to the change law we do often but i see i mean i definitely understand the desire for smart clever and complex architectures maybe the what makes you feel
Starting point is 00:20:41 like you're being rigorous perhaps is like we must do it right the first time. Which usually, to which I as a simpleton will say, yag me on that most of the time. Maybe I'm not an elitist. But I've been down that path many times and it's like we're designing this microservices architecture which is the example you put in the blog post. Which I think is a good one with regard to this topic.
Starting point is 00:21:05 And how do we know if we're ever going to need these things? We're being too rigorous. Now maybe I'm coming around to your word. We could just start with a simple thing. Adam, you and I were just kind of debating this on our weekly meeting today about what we do here. And one of my other sayings, which I don't make up any sayings,
Starting point is 00:21:21 I just repeat other people's, is perfect is the enemy of good. And we desire to be perfect. We desire to have it all thought out and planned out and no mistakes and sweat the details. And sometimes that just paralyzes us from making progress. And so I have to tell us that sometimes, like, well, let's just ship a thing, see what happens. Well, momentum creates the motion, right? So it's the exact word.
Starting point is 00:21:46 So if you get a little bit of momentum, sometimes you can start moving. You start to see the promise of the possibility. And the details you sweated was just like, that didn't matter so much. It's better to just get it out there. You know, it's better to get it out there, even imperfect. Because I think, you know, perfection actually is thoroughly unachievable. There is no thing as perfection because the moment you achieve it, somebody else has done something more or better. So it's always a moving target.
Starting point is 00:22:10 So to pursue perfection for perfection's sake is just a fool's errand. It's not going to happen. So gist ship is almost kind of smart. Yeah, for sure. One thing you say in that post, Ken, is that the people that really impressed you as smart engineers, either that opinion changed over the course of the audit or over the course of time,
Starting point is 00:22:34 now they haven't succeeded. Maybe their startups failed or have languished. That's part of this too. Actually, the ones that correlate with success are the ones that were more hyper-focused on simplicity and less perhaps, I guess, intellectually impressive. Is that true? Yeah, it is. And you know, that bothers me a lot. And that's not to say that on the teams that really focused on simplicity as kind of a core engineering principle, that's not to say that they didn't also have smart people, but those smart
Starting point is 00:23:05 people were very disciplined about their smartness and didn't view engineering as like an purely an exercise, intellectual exercise. Like I forget, Adam, you mentioned like wisdom. Like I think a lot of engineering decision-making is as much about wisdom as it is about intellect. And so maybe like, that's also what I'm getting at there is like, you know, the value of simplicity is, is like a, is a wisdom thing to knowing when to stop knowing when to be like, okay, I've gotten really deep into this problem. It's time to pull back out and like, look for the plugin that does this, like in three lines of code instead of, you know, the 200 lines I've started to write so far.
Starting point is 00:23:59 This episode is brought to you by our friends at Influx Data, the makers of InfluxDB. In addition to their belief in building their business around permissive license open source and meeting developers where they are, they believe easy things should be easy. And that extends to how you add monitoring to your application. I'm here with Vojcek Kajan, the lead maintainer of Telegraph Operator for Influx Data. Vojciech, help me understand what you mean by making monitoring applications easy. Our goal at Influx Data is to make it easy to gather data and metrics around your application. Specifically for Kubernetes workloads, where the standard is Prometheus, we've created Telegraph Operator, which is an open source project around Telegraph,
Starting point is 00:24:41 which is another open source project that makes it easy to gather both Prometheus metrics as well as other metrics such as Redis, PostgreSQL, MySQL, any other commonly used applications and send it wherever you want. So it could be obviously in FluxDB Cloud, which we would be happy to handle for you, but it could be sent to any other location like Prometheus server, Kafka, any other of the supported plugins that we have. And Telegraph itself provides around 300 different plugins. So there's a lot of different inputs that we can handle. So data that we could scrape out of the box, different outputs, meaning that you can send it to multiple
Starting point is 00:25:14 different tools. There's also processing plugins such as aggregating data on the edge. So you don't send as much data. There's a lot of possibilities that Telegraph Operator could be used to get your data where you are today. So we've permitted metrics, but you can also use it for different types of data. You can also do more processing at the edge and you can send your data wherever you want. Vojtech, I love it. Thank you so much. Easy things should be easy.
Starting point is 00:25:40 Listeners, Influx Data is the time-series data platform where you can build IoT, analytics, and cloud applications, anything you want, on top of open source. They're built on open source. They love us. You should check them out. Check them out at Influxdata.com slash changelog. Again, Influxdata.com slash changelog. number three was that your highest impact findings would always come within the first and last few hours of an audit i think that's probably just like an interesting tidbit for those who are interested in doing audits or those who are doing audits. And I think probably kind of a fact of how things often work, but not too much
Starting point is 00:26:34 meat on the bones there for us. Let's go to number four, which I think is very interesting. Writing secure software has gotten remarkably easier in the last 10 years. What has contributed to that, do you think? I mean, that's a good sign, first of all. Yeah, yeah. I was surprised that more people didn't pick up on this one too and kind of challenge it. I don't know if this one is true, but it really feels like when we audited older code bases,
Starting point is 00:27:04 let's say before 2012, it's kind of an arbitrary date, but I had to put something in there. We would find tons of problems, a lot of very basic cross-site scripting, SQL injection, really weird homegrown authentication and authorization code. And it seemed like at some point, I think two things happened. One is open source really started to become heavily used in these startups. I'll put it that way. Maybe, I mean, open source has been used for a really long time, but like, for example, I think people started to be like, Oh, instead of writing my own authorization authentication logic, I'm going to use the device plugin for Ruby on Rails, for example. And so frameworks took off. People started fixing bugs in frameworks. And then when all the thousand startups that use that framework upgraded, suddenly this class of vulnerability disappeared for them. I think the second thing is developers started knowing a little bit more about security. And we found that older developers tend to not think as much about security as maybe the younger generation, just because like the new cycle, really, I think in 2010 and 11, security as like just a new cycle that entered our public
Starting point is 00:28:23 consciousness really picked up. You had things like Stuxnet early on, which was the whole Iranian centrifuge bug thing that got a lot of public attention. Snowden happened. People just started thinking about security more and got more interested in it. Yeah, I think those are both insightful. I think for sure the proliferation of libraries that implement best practices for you, whether it's inside a framework or out, has saved a bunch of us from a lot of the very common mistakes. I'm thinking specifically of things like SQL injection,
Starting point is 00:28:58 where we used to rely upon ourselves to concatenate together strings in order to put SQL statements together. Most good database libraries that you would use nowadays that's a solved problem. They are built in such a way that you cannot
Starting point is 00:29:16 possibly get that wrong. And then in addition I do agree that I think younger developers maybe more modern developers have grown up in an age where it's crystal clear that this is a problem, and one that we've maybe been trying to educate ourselves in order to not fall into that problem,
Starting point is 00:29:36 where I think the previous guard, so to speak, lived in a simpler time, more of an innocent age, so less concerned things. Which is kind of interesting to think back to the more recent show we did with Schneier. Like Bruce, he was like, hey, open source doesn't mean that it's more secure. Like I severely remember him saying that like on the show, because I was like, you know, we're, I wanted to go a little deeper on that, but it was just like, well, you know, the more eyeballs and proprietary code, et cetera, et cetera, you can pay somebody, Microsoft could pay somebody he had said
Starting point is 00:30:07 to audit their code or whatever. Bruce talks very fast, so it's hard to go deep on things because he's already on to the next subject. We had an agenda and so that one wasn't worth going deeper on, but it's kind of against that. So if this is your finding, Ken, and his finding was, or at least opinion, maybe it wasn't a finding, maybe it was an opinion based upon findings, who knows, was that open source doesn't necessarily mean it's more secure. I disagree for the reasons you've stated. Like, you know, this may be anecdotal because you said you don't have any evidence to back this up, but your anecdotal evidence, which is the more proliferation of open source being used, more people seeing it,
Starting point is 00:30:44 more people leveraging existing frameworks and building upon wisdom rather than everybody recreating the wheel, totally makes a lot more sense to me. And what that means for the world in the last decade is like, wow, we can actually go into these years with more of a security mindset. And I think leveling up devs on the security aspects, that comes from open source, because you may be solving one problem, which is build a web app, not necessarily trying to build authentication. And you're like, well, I've learned about security because of what Devise does and how it works.
Starting point is 00:31:14 And these things you sort of like by osmosis learn about security. And in many ways, large part is because of the proliferation of open source. Yeah, I don't know if I agree with Bruce Schneier on this. I think if I was to maybe, I think open source can be not very secure. I think maybe what I would say is not all open source is created equally. If you're using an open source package that doesn't really get maintained and kind of falls into abandonment, then yeah, you're probably maybe even a little bit worse off than if you had built something in-house because now, you know, this thing that you haven't really
Starting point is 00:32:00 looked at or scrutinized from the security perspective, or any other perspective for that matter, is now, you know, code that essentially you own, whether you think you do or not. But certainly for the big open source projects, I would a thousand times over recommend people stick with them from a security perspective than, you know, try to write their own. There's a lot of scars, especially on stuff like Rails, the JVM, authentication plugins. And a lot of times we look at those scars and we're like, oh, they're insecure.
Starting point is 00:32:33 But each one of those is an example of a mistake you probably could have or would have made had you coded it yourself, that you got to get for free, essentially, by not coding it yourself. I just think about how many apps, GitHub being one of them, Twitter being one of them, that were built on Rails, sold for billions, worth billions, being bought for billions, whatever, however you want to shake it up, that were built on top of Rails, that solved the security problems once and for all, or at least exposed a lot of them. And someone didn't have to go recreate that wheel. And that happened for Twitter, that happened for GitHub, and many others that have used Rails, Shopify, for example, even. You know, these are IPO-ed billion-dollar companies or billion-dollar acquisitions. And they never had to really learn those mistakes. They got to borrow them essentially or
Starting point is 00:33:26 inherit them, the learnings from them. You know, that's such a blessing to the world, really. Yeah, you're absolutely right. I think one of the big challenges that I see with the Node JavaScript community in general is how difficult it's been for them to kind of standardize in the same way that a Rails has, for example. And I think maybe when we started moving a lot of stuff to Node and JavaScript, we may have underestimated how much water was under the bridge for things like Rails and Django and how much tremendous amount of work had gone into solving some of the foundational problems. Maybe we took it for granted in some ways. So to play devil's advocate a bit on this point about the shared Ruby on Rails framework
Starting point is 00:34:19 across all these startups turned large tech companies, doesn't that also then create a shared attack surface? Doesn't that make Ruby on Rails itself the focus of attackers where they can get one exploit and go after all these high value companies versus had GitHub rolled their own internal proprietary framework
Starting point is 00:34:47 for web apps, then attackers wouldn't be able to attack it that way. They could get a Ruby on Rails vulnerability, and everybody would be vulnerable except for GitHub, because they've got their own thing over here. Maybe that's an argument for security through obscurity, and therefore not a great argument, but there's something there, isn't there? Yeah, I think that's kind of the defining question of the whole move to the cloud too, isn't it? Like the big question that everyone had was, are the things you're talking about, Jared,
Starting point is 00:35:19 going to outweigh the better security of those things from having more people and more resources scrutinizing them and looking at them. Right. And I guess this kind of goes along with the point about security getting better. I feel like that question, it feels like it's been definitively answered that yes, like the trade-off is definitely in favor of centralization on these large platforms. I think the better, the place where it's less obvious is
Starting point is 00:35:46 for maybe those mid-tier things that aren't like Ruby on Rails, or AWS, or whatever, you know, Linux distribution they choose to centralize on. It's kind of those second-tier things, where maybe they don't get quite as much attention, but they're still pretty heavily used by some large players. I'm trying to think, maybe Log4j could be a good example of that, where it's like, how many Log4j-like libraries are there out there, where they're foundational to certain things, but they just aren't high profile enough to get a ton of people looking at them constantly.
Starting point is 00:36:18 And so you kind of break that trade-off there. Right. Yeah, which is sort of the tragedy of the commons in that case. It's like there's certain open source projects that kind of break out of the tragedy of the commons, and they get the resources and the attention and all of that, and Ruby on Rails is a great example of that. And then
Starting point is 00:36:35 there's a lot of them, which still are foundational, infrastructural things that we require and share. And then, like, you know, one guy in Nebraska is maintaining it, as in the XKCD comic. So, well pointed out. So yeah, interesting. Interesting trade-offs.
Starting point is 00:36:52 Well, trade-offs, pros and cons, right? Totally. The other argument to play devil's advocate one layer deeper before we go on to point five and six is would GitHub be GitHub if GitHub didn't use Rails? Because maybe they would have made their own thing, didn't move as fast, didn't innovate. Maybe they burnt out their best players early.
Starting point is 00:37:13 Maybe they focused on the wrong thing and GitHub would be a framework creator versus the place that open source lives because they got defocused on their priority, which was the main thing. Jared, you know this, the main thing, the main thing. Maybe they made Rails X or whatever their priority versus just leveraging Rails. Yeah, but now you've moved on to a productivity conversation and not a security conversation.
Starting point is 00:37:36 So I agree with you wholeheartedly there. Right. Like open source for the win. I mean, I'm with you. But I think on the security front, I can see how in certain circumstances there are drawbacks and there are trade-offs. Okay, let's plow forward
Starting point is 00:37:48 because we're never going to make it. We're never going to make it. Too many layers. Okay, so point six we're going to skip because we've covered it. Secure by default features and frameworks and infrastructure massively improve security. You covered that one.
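That secure-by-default point connects right back to the SQL injection example from earlier. A minimal sketch using Python's sqlite3 (a stand-in here for whatever database driver you actually use) shows why the placeholder style can't really be gotten wrong:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # string concatenation: attacker-controlled input becomes part of the SQL
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # placeholder: the driver sends the value separately, never as SQL text
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
rows_unsafe = find_user_unsafe(payload)  # injected tautology matches every row
rows_safe = find_user_safe(payload)      # treated as a literal name: no rows
```

The "secure by default" libraries Ken credits essentially take the first function off the menu entirely.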
Starting point is 00:38:00 Let's hop back to point five because it's somewhat cool. All the really bad security vulnerabilities were obvious. Which is kind of obvious once you say it, but also it's probably not very obvious when you find it. But like, there's some really bad stuff out of it. Basically, like the low-hanging fruit, you probably found it fast, and it's like, holy cow. Is that what you're saying? Yeah. What do I want to say here on this one? I think there's this myth amongst maybe security people, but also probably devs as well, that like, oh, a hacker is this brilliant mind and comes up with this
Starting point is 00:38:35 crazy hack that no one could have anticipated. And there's some examples to back this up. Like Heartbleed is a good example. Or that one where, I forget what it's called, but basically they figured out how branch prediction worked on Intel processors, and they broke all Intel-based things. Yeah, I don't remember what that's called. But I think most people have this conception of security research as always producing that kind of 400 IQ
Starting point is 00:39:07 problem. But the reality is most security researchers and most hackers are looking for the lowest-hanging fruit. They want to find the easiest things to exploit. And so those are the things that are going to pop up in practice when you actually do get hacked. It's going to be the cross-site scripting vulnerability that would have been picked up by a scanner, but you didn't run scans. And I think that's something we found in practice, too. There were a few things that we discovered that I would say were more tricky, but they didn't end up seeming like the really high impact ones. It was more like, oh, your password reset response.
Starting point is 00:39:50 This one is something that every dev should immediately check on their thing, because it's way more common than it should be. But like, make sure in your password reset response, you don't include the token in that response. That for some reason tends to like be a very simple gotcha, a very obvious one, but talk about high impact. Having anyone be able to reset anyone's password
Starting point is 00:40:12 has got to be at the top of the list. Did you guys perform physical security audits at all? We did one, and it was very interesting. Can't talk too much about it in terms of what exactly we did, but we mostly focused on the code side of things. Physical security has its own unique gems. Well, the reason why I asked is because I would go all the way back to, like, the Kevin Mitnick days. I'm talking about obvious and easy and low-hanging fruit. It seems like probably to this day, just asking somebody for the thing
Starting point is 00:40:45 still probably works way too often. And I wonder how much that stuff is audited. I know there are firms that do physical security, like on-premise things. I wonder how many call into help centers and stuff and try to see if that works. But I mean, you're just one untrained help center, you know, employee away from the keys to the kingdom in many cases.
Starting point is 00:41:07 People bow down to authority too. If you seem authoritative and you ask for particular information, you may give it up or you may get duped into giving it up. It's happened. That reminds me of this awesome social engineering
Starting point is 00:41:22 thing, which people actually tried. You want to know how to get into any event for free, whether it's a movie theater or a concert? It's simple: you walk in with a ladder. It works best with two people carrying a nice, like, 10-foot ladder, and they'll just let you right in, because everybody assumes if you're carrying a ladder, you work there, you're there to fix something or hang a thing. They'll let you right in. And so there's actually some videos, on Instagram or TikTok, I don't know, of people trying that, and it works flawlessly. They'll just let you right in because you're carrying a ladder. Yeah. And so it's a little bit of that assumed authority, right? Assumed everything's
Starting point is 00:41:58 okay here, you belong here: clearly they're carrying a ladder, they must be working. It's the same with, like, in Tenet, the character was wearing that vest, that particular vest. It's like you wear it in an airport because you're directing the planes or whatever. Like, you must be authoritative if you've got this vest on. I forget what the name of the vest is called, so that's why I'm not being specific, as I can't recall what the vest is called. But it's this vest, it's orange, it's flashy. Yeah. If you seem authoritative, then you don't get questioned. Or if you ask for certain information, you might just give it up because, like you said, you're an untrained person
Starting point is 00:42:34 just trying to do their job. It's like, well, this seemingly authoritative person just asked me for my password, so I gave it to them. So I hope it goes well. Oh, by the way, I looked up that Intel x86 hack. Meltdown and Spectre. Now it probably rings a bell.
Starting point is 00:42:51 Yeah, Spectre and Meltdown. If you all recall those. All the most non-obvious, high-IQ hacks have cool names. There's no cool name for the one where you forgot to not send the token back on the password reset form. That one doesn't get a name, but it happens probably way more often.
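To pin that password-reset gotcha down in code, here's a minimal sketch (hypothetical in-memory store and handler names, not from any particular framework): the token is generated server-side, delivered out-of-band, and never echoed back in the HTTP response.

```python
import secrets

RESET_TOKENS = {}  # hypothetical in-memory store; real apps persist this server-side

def request_password_reset(email):
    """Create a reset token and (conceptually) email it to the user.
    The crucial part: the token is NOT included in the response body."""
    token = secrets.token_urlsafe(32)
    RESET_TOKENS[email] = token
    # send_email(email, token)  # out-of-band delivery, not shown
    return {"status": "ok"}     # no token here

def reset_password(email, token, new_password):
    stored = RESET_TOKENS.get(email)
    # compare_digest avoids leaking the token through timing differences
    if stored is None or not secrets.compare_digest(stored, token):
        return False
    del RESET_TOKENS[email]     # tokens are single-use
    # actually update the account's password here, not shown
    return True
```

Returning the token in that first response is exactly the "anyone can reset anyone's password" bug Ken describes.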
Starting point is 00:43:09 All right, let's move forward here. Number seven: monorepos are easier to audit. That one seems, I don't know, that one seems obvious when you think about it, because, well, everything's in one place. But is there more to it than just that? Yeah, I mean, I'm curious on you guys' thoughts on this, but there is a more generalized, non-security angle of this too, where I just feel like developer ergonomics on a monorepo are easier. Like, anytime I would audit a microservices, you know, multi-repo setup, it's just, I'm talking about literally downloading all the repos to your local environment. Like, now you've gotta write a script, like we
Starting point is 00:43:53 would write scripts that scraped the GitHub page and pulled down the repos that way. Maybe I'm missing something, maybe there's a clever button in GitHub where it's like, download everything, but just stuff like that is extra overhead. And sometimes I wonder, you know, the debate about monorepo versus not is very intense, and I think there's good points being made all around, but I wonder if stuff like simple, everyday ergonomics sometimes wins the day. Like, if I want to search for something and I'm in my IDE, yes, you can have multiple folders open, but searching across a lot of different repos is tough. You lose the ability to control-right-click on a function and get that nice little sweet pop-up of all the
Starting point is 00:44:43 places it's been used, because a lot of times your IDE isn't smart enough to know that, you know, there's like 10 different projects. Just simple stuff like that. Sometimes there becomes so much of it that you wake up one day and you're like, oh man, the overhead here is high. Right. I'd say from a simple human perspective, it's visibility into things that aren't your problem. Not my code, not my problem kind of thing. Maybe you care about the org or whatever, but it's easy to not care because it's not in your visibility. And it's easy to just forget about it because it's so many services, so many things to manage. That's not my problem.
Starting point is 00:45:20 So you almost don't pay attention or can't pay attention because productivity means you're focused on your problems and the things you can control. And so therefore everything that's outside that view becomes not a concern. And so if your people, if your engineers are the ones that are sort of the visibility into the health of your code, the holistic health of your code, and if they're not viewing it all, then it's kind of hard to secure it all or be concerned about security practices. Now that may be a CISO's job or somebody up higher, maybe not an IC, that's not their in quotes job, but I think if you have a lot of Lego, it might be challenging to manage where they go. Bringing Lego back in, Jared. Yeah, I mean I'm a monorepo guy, but I'm always on small projects, small teams.
Starting point is 00:46:05 So I feel like I don't have the perspective of somebody who would make the other side of the argument. We've never fully prosecuted that debate on the show or on any of our shows that I know of. Well, we went there and back again with Segment, right? We were microservices. Yeah, but that was more monolith even more than just monorepo
Starting point is 00:46:27 which is related but not identical I'm definitely, I'm also a monolith I'm just mono so I get all the monorepo arguments and I'm with you on them I just don't have, I can't represent well the other argument besides separation of concerns
Starting point is 00:46:44 perhaps or some of the stuff Adam's been talking about. So yeah, I don't know. It makes sense that they're easier to audit. Everything's in one place. One of the benefits of not having a monorepo is you don't have to deal with high volumes of commits to any branch or weird branching strategies.
Starting point is 00:47:00 It's a smaller team. They can adopt something very simple. A lot of the really large monorepos start running into, well, I forget if it's Google that still has a monorepo with like millions of lines of code in it, but you really run into some problems with Git itself at that point. Like, they started a whole work stream, I think it was Google, to basically improve the performance of Git. But you just run into, what happens if there's 30 commits an hour?
Starting point is 00:47:29 And each time you want to rebase, how does that work? How do conflicts work when you're just streaming a lot of commits through one monorepo? I think that's where people are like, I don't know if that's as good an idea. Yeah, I can totally see that as well, which is that I don't have that perspective to represent, but I can see where that could become, at scale, way more cumbersome than splitting things out and letting separate teams work separately.
Starting point is 00:47:53 Which, a Series A through Series C company that's being audited by Ken and his team is likely not going to be at that scale. There's obviously some Series C companies that are pretty scaled. But at that point, maybe the monorepo versus, I guess, multirepo, which is the opposite of monorepo, it's probably just less of an issue, the scale problem. You're not Google scale at Series C. Let's move on to number eight.
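For what it's worth, the multi-repo download chore Ken described no longer needs page scraping; the GitHub REST API lists an org's repos directly. A rough sketch (unauthenticated here, so rate-limited; a real run needs a token, and the org name is whatever yours is):

```python
import json
from urllib.request import urlopen

API = "https://api.github.com/orgs/{org}/repos?per_page=100&page={page}"

def clone_commands(full_names):
    # pure helper: turn 'org/repo' names into git clone commands
    return ["git clone git@github.com:%s.git" % n for n in full_names]

def list_org_repos(org):
    # walk the paginated API until an empty page comes back
    names, page = [], 1
    while True:
        with urlopen(API.format(org=org, page=page)) as resp:
            batch = json.load(resp)
        if not batch:
            return names
        names.extend(repo["full_name"] for repo in batch)
        page += 1

# for cmd in clone_commands(list_org_repos("your-org")):
#     print(cmd)
```

It doesn't fix the IDE cross-repo search problem, but it does kill the scraping script.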
Starting point is 00:48:20 We're never going to get to the end. And this is a big one, I think. You could easily spend an entire audit going down the rabbit trail of vulnerable dependency libraries. This is a big problem. Supply chain security is a hot topic. And especially in the JavaScript world, or maybe just call it the front-end world,
Starting point is 00:48:40 because no matter what your back-end is, most of us are running NPM-based front ends. We have so many dependencies. And auditing, the main thing, is a lot of work. And what you're saying, Ken, is going down that node modules folder, or whatever your
Starting point is 00:49:05 deps folder is if you're not in the front-end world, is just like, there's no end. Like, how could you possibly audit all those things? Right. Yeah. And I think, so part of it is, usually what we security researchers do is we just decide to limit the scope to code written within that company. But if you think about it, in some ways that's not correct, right? Like, why is it that a function that I imported is out of scope, but a function I didn't import is in scope? It's all code that gets run in the runtime environment. And so that's something that, honestly, I haven't fully processed. It just kind of worries me about the state of things. Maybe if there's a counterpoint to the above observation that security is generally improved, the counterpoint would be, well, we're just running a lot more code now than probably ever before. And because of how
Starting point is 00:49:58 easy it is to go out there and find an open source library, there's a lot of code running that hasn't been looked at very carefully. And that's the rabbit hole that, if you go down, it gets kind of scary. It's a hard problem, because how do you solve it? It's like a network-wide problem. No single individual can, unless you just go completely not-invented-here syndrome, and you're like, we're only going to use code that we wrote internally, that fits inside of Ken's scope of work, we're going to have zero dependencies.
Starting point is 00:50:28 That's the only real way, and that's not realistic, I don't think, or wise even. When we talk about productivity versus security trade-offs, it's just not wise to do that. We'd actually have to solve that at a network-wide level. One thing you brought up that's helped is Dependabot and systems like this.
Starting point is 00:50:48 Because they kind of have been, well, at least in the sense of a particular organization, I don't know, as far as they can pass the buck or have somebody else look over a certain area of their code, the known vulnerabilities that Dependabot will alert you of kind of keep you
Starting point is 00:51:03 upgrading and keep you current at least, not keep you secure, but have helped move the needle a little bit towards more secure. Yeah. The interesting thing is, I think Dependabot gives you a very concrete target as a developer to aim towards. It's like, okay, very black and white. These are the packages I need to look at. These are the things I need to upgrade. But there's a lot of false positives there. And it's incredibly hard to know as a developer, like say there's a, I don't know, like a JSON parsing NPM package that you use and there's a vulnerability in it and it's critical. As a developer, one path you could go down is just to blindly fix it. And that's probably 90% of the time the right path. But say you try to upgrade and it breaks something. Now, how do you figure out whether the way that your code uses that library is impacted? That is a problem that Dependabot does not solve.
Starting point is 00:52:02 I don't think anyone solves that problem. And if someone wants a great startup idea, that would be a great one to do. Because I think that problem is really hard. The closest we've come so far to the solution, really, is what Feross and team are doing with Socket. I don't know if you've heard of this yet or not, but the thesis essentially is that, you know, the CVE is too late. If it's a CVE, which essentially is what Dependabot does, right, it's CVE-related, it's sort of documented known issues. But they take a more proactive look at it, where they look at the supply chain issue, which is: have install scripts been added to the repo? Is there native code? Is there a bin script? I'm just reading the list, by the way. Is there file system access?
Starting point is 00:52:47 Is there network access for things that shouldn't have network access? Is there shell access, debug access? All these different things that could be in a dependency that wasn't there before that could have been a source of social engineering. Hey, let me find a way to get your GitHub repo keys, change the thing on NPM, millions of people download it, and now I've got bin access or I've got a shell access or network access to this thing that never had it before.
Starting point is 00:53:12 Now I'm in your cloud or wherever I'm at, and I'm doing my thing. They're taking a more proactive look at it, which I think is pretty interesting. It's the most interesting thing I've seen thus far on the supply chain attack issue. The only issue I see, really, is that they're only focused on JavaScript right now. There's some things we talked about with Feross on this show, in that episode. I think once they get past JavaScript and they do open source at large, or Rust, Go, etc., things get more interesting. But JavaScript is a big footprint, but it's not
Starting point is 00:53:44 all of it. Small steps in good directions. I've not heard of that. That sounds fascinating. It's still retroactive. It's more proactive, but it's still going to miss stuff. They're trying to provide a holistic view of your dependencies and provide a score. It's going to get more sophisticated
Starting point is 00:54:03 as they continue to advance their algorithms, but it's still going to miss a lot of things. Anything that gives you more visibility into it helps; you can miss a lot of things as long as you're catching a lot of things too. If you can prevent 20% of attacks, it's better than zero. It is. The challenge with security is it just takes that one hole. It's so much harder on the defensive side.
Starting point is 00:54:27 Because the offensive actor only needs one way in. And you have to secure all the ways. Ken knows this very well as a pen tester. Really all you need is the one. You may report on your audit, here's the 17 things we found. But only one of those. I mean, okay, there are better and worse hacks. Some things will not allow you to escalate privileges, etc., etc.
Starting point is 00:54:46 So it's not like they're all created equally. But generally speaking, once you get your foot in the door, that's pretty much all you need. And so it's really a hard problem from a defensive angle. It's such a hard problem. I think one of the things we can do is try to limit the surface area as much as possible. I'm curious what you guys think
Starting point is 00:55:05 about this, but sometimes I feel like we get node-module happy, especially on the Node JavaScript side. Sometimes I wonder if we should find the thing that works, like the node module we want to import, and then just take the function you want, the one function you went down the path to get. It's almost like, don't abstract prematurely. It's like, don't use the open source prematurely. If you only use that one function, maybe the 10 lines you need, just do that, and you kind of avoid the huge,
Starting point is 00:55:38 like the massive amount of code that you would import otherwise. Yeah, absolutely. Especially for the simple ones. I know a lot of companies would not have gone offline had they just copied left pad into their code base versus depending upon it, because when it disappeared,
Starting point is 00:55:54 they would not have their builds broken, as one example. That being said, a lot of your dependencies aren't so simple as copy-pasting a single function. But I agree. If you can copy and paste a couple functions, or even just take the thought there and rewrite it for your specific needs and own it that way, then at least when Ken's company comes by, they got to audit it for you. They can't just say, it's a dependency off limits. Well, speaking of limiting surface area, number nine is about untrusted data. It seems like a common thing, especially in the PHP world, is people kind of willy-nilly
Starting point is 00:56:32 deserializing stuff that they shouldn't. Sounds like this is a way that you get a lot of compromises. Yeah, I think we saw a lot of this on the PHP side. I don't know why PHP developers like serializing and deserializing objects, and storing serialized objects in databases and then using them. There must be something that's difficult otherwise. But the problem is that when you allow a user to have any control whatsoever over the contents of that serialized object, you suddenly basically give them the equivalent of remote code execution. Right. And I think
Starting point is 00:57:12 it's not quite as obvious, in the case of serializing and deserializing and some of these other things like prototype pollution, that you're really giving them that much control, because a lot of times the path to getting control relies on weird features of a language. Like, I know there's some weird prototype-pollution-like thing in Rails. But the way to exploit it is,
Starting point is 00:57:38 you have to know that Rails objects have some pretty weird functions that get inherited, and that are pretty powerful functions, and you wouldn't see these functions in your everyday use of Rails as an ordinary developer. And so maybe that's why it's so prevalent. Yes. Well, Ruby is highly dynamic, and so is any language that is that dynamic and has features like not just reflection and introspection, but also things like method_missing. Where in Ruby, if you call a method on an object that's missing,
Starting point is 00:58:10 there's a special function, or a special method, called method_missing, where it can still execute other things, which is very handy when you're creating DSLs and doing all sorts of cool metaprogramming. But it's not super handy when you're trying to build a locked-down, secure system. Now it seems like this PHP problem, or this deserialization problem, probably would also be
Starting point is 00:58:30 a situation where if you're using some sort of a library that people have worked on in order to handle the edge cases of this problem, you might be better off. Or, hey, just use JSON, right? Just use JSON and reconstruct those objects on your own. I think PHP has several attempts at this in the standard library, trying to successively solve the problem. Like they keep on trying to fix it once and for all. Yeah, exactly. And it got complicated to recommend something there. So that's where we ended up with our recommendation being, I know it's a little bit more work: use JSON, pass user data as JSON, and in your own code, do the right checks and construct the object on your side. It's a little bit more work, but it gets very hairy otherwise.
Starting point is 00:59:29 Number 10: business logic flaws were rare, but when we found them, they tended to be epically bad. Yeah. You probably can't speak in specifics here. Name names. Yeah, I think free accounts, man. Free accounts. We just experienced this.
Starting point is 00:59:43 Free for a little bit because of a business logic issue probably. Oh, yeah. We had a free account from one of our service providers for a while because bad business logic put us into some weird state where we were both trialing. It's like we were half enterprise, half trialing. And things were working that shouldn't have been, and they weren't billing us.
Starting point is 01:00:02 And we had to actually contact them multiple times and be like, will you please bill us because we're not paying for this thing because of business logic flaws. We won't name names either, but if you want to Ken, we're not going to stop you. I'll just give the example, like the classic example you pick up in security trainings, which is banks that would accept negative deposits.
Starting point is 01:00:23 Oh. Yeah. Or negative withdrawals, maybe. I forget which one it is. But that's the classic, canonical example: business logic flaws lead to people literally being able to create money for themselves. And the funny thing is, a lot of these,
Starting point is 01:00:37 I do not profess to be a Web3 or smart contracts expert. We never did any smart contract auditing, but I have been following, with this mix of fascination and horror, some of these smart contract heists that are happening. And really, what it boils down to is that exact bank scenario, maybe a little bit more complicated these days. It turns out that code could be perfectly correct, but can still be exploited and manipulated. And it's just been fascinating to see those take off. And yeah, I would say the handful of times where we called up clients and let them know that something was horribly, horribly wrong, it tended to be a business logic thing
Starting point is 01:01:26 rather than an exploit in some weird function that they had. In the smart contract case, what happened? What made the exploit happen? Was it poorly written code in the smart contract? Or was it the person, the human error, didn't pay attention to the details? What was the true flaw? So the one that happened the most recently that I think I linked to in the article is related to DeFi.
Starting point is 01:01:54 And essentially, the smart contract had a certain, had logic in it for being able to almost like index a lot of different cryptocurrencies and auto balance them. And so what the guy did was he found a way to like inject a lot of like a very cheap cryptocurrency into the pool that was used for calculating the balance of the index. And like through everything that was completely legal within the smart contract was able to like extract tons of money from the system and you know it maybe it's similar to like how the stock market gets manipulated sometimes like pumping dumping stocks it felt a lot like that and you know it was legal perfectly legal within the system but it just you know was not the intention of the original intention of the developers.
Starting point is 01:02:47 It's kind of like when you test a system, and the tests prove that the system works as specified, but the system is not designed correctly. And so the test is actually not helping you at all. It's just telling you that it works as it's written. So 100% test coverage does not mean that your bank is not going to let a negative deposit add to an account well I think when it comes to auditing I think smart contract auditors are probably
Starting point is 01:03:13 making it pretty good money these days they're well employed aren't they Ken? you probably have a better view into that world than we do in terms of the auditing side that's good business right now? oh yeah, auditing and also just bug bounties. I think I read somewhere that someone found a bug in Ethereum and the bounty was like $10 million, which is, that's a lot of money.
Starting point is 01:03:38 It's definitely dwarfs. The interesting thing is it dwarfs a lot of the bug bounty money in like traditional software. Like I think Apple three or four years ago made huge news in that they upped their like top bounty if you find a vulnerability in iOS or something to like a million. And that was huge news because it was like 10 times more than anyone else was giving out. And now you have these smart contracts where there's tons of money flowing through and the bounties are even higher. This episode is brought to you by Honeycomb. Find your most perplexing application issues.
Starting point is 01:04:34 Honeycomb is a fast analysis tool that reveals the truth about every aspect of your application in production. Find out how users experience your code in complex and unpredictable environments. Find patterns and outliers across billions of rows of data and definitively solve your problems. And we use Honeycomb here at Change. Well, that's why we welcome the opportunity to add them as one of our infrastructure partners. In particular, we use Honeycomb to track down CDN issues recently, which we talked about at length on the Kaizen edition of the Ship It podcast. So check that out. Here's the thing. Teams who don't use Honeycomb are forced to find the needle in the haystack.
Starting point is 01:05:09 They scroll through endless dashboards playing whack-a-mole. They deal with alert floods, trying to guess which one matters. And they go from tool to tool to tool playing sleuth, trying to figure out how all the puzzle pieces fit together. It's this context switching and tool sprawl that are slowly killing teams' effectiveness and ultimately hindering their business. We'll be right back. the swarm and try honeycomb free today at honeycomb.io slash changelog again honeycomb.io slash changelog and by our friends at sourcecraft they recently launched code insights now you can track what really matters to you and your team in your code base transform your code into a queryable database to create customizable visual dashboards in seconds here's how engineering teams
Starting point is 01:06:02 are using code insights they can track migrations, and deprecation across the code base. They can detect and track versions of languages or packages. They can ensure the removal of security vulnerabilities like Log4J. They can understand code by team, track code smells and health, and visualize configurations and services. Here's what the engineering manager at Prezi has to say about this new feature. Quote, as we've grown, so has a need to better track and communicate our progress and our goals across the engineering team and the broader company. With Code Insights, our data and migration tracking is accurate across our entire code
Starting point is 01:06:38 base and our engineers and our managers can shift out of manual spreadsheets and spend more time working on code, end quote. The next step is to see how other teams are using this awesome feature. Head to about.sourcegraph.com slash code dash insights. This link will be in the show notes again, about.sourcegraph.com slash code dash insights. All right, we got to move on. You know I'm a completionist, Adam. Are we going to make it? What do you think?
Starting point is 01:07:30 Five more to go. Okay, number 11. Custom fuzzing is surprisingly effective. Can you first describe what custom fuzzing is for us and our listener? And then why is it a surprise that it's effective? Sure. for us and our listener? And then why is it a surprise that it's effective? Sure. So fuzzing is when you programmatically send random or pseudorandom inputs into the code that you're testing.
Starting point is 01:07:55 And you have some mechanism on the other end to kind of judge whether that random input produced an unexpected result. So that's what fuzzing is. And the thing that we would do for custom fuzzing is usually against APIs. So, you know, we would go into a company, we'd have a limited time to audit and we'd have like 400 API routes that we wanted to cover. And rather than painstakingly review each single one completely thoroughly, what we would first do to kind of target our assessment is we would send bad input to all those APIs. A really great example of that is we'd send an authorized request and an unauthorized request.
Starting point is 01:08:40 And if our authorized request got a 200 and our unauthorized request also got a 200, that would probably be a bad sign. You don't want 200s for requests that should be authorized. You should be getting like a 403. And so really it was as simple as that. It's like look at status codes that come back and see if there's anything that was weird. And then those are the areas you could focus on later on in the audit. And that was surprisingly effective. Are there any toolkits or auditing things that you would use regularly?
Starting point is 01:09:12 I'm sure you'd take that suite of custom buzzers and probably run it against the next audit. You'd kind of build up a cache of things that you run all the time. Because why not? Once you've written it once, why not run it against the next endpoint? But were there common tools that you run all the time? Because why not? Once you've written it once, why not run it against the next endpoint? But were there like common tools that you just recommend all auditors put in their toolbox? Yeah, so I know Burp Suite does custom fuzzing, but like, to be honest with you, actually we would build it custom each time. Oh yeah?
Starting point is 01:09:39 That problem, like in retrospect, maybe there was something we could have done. But I think the reason why we did that was because there was just a ton of different authorization methods. And we just never found a tool that was like, it turns out every app in some ways it's its own unique gem pun intended. And, you know, you, you wanted to write custom code and it turned out to not be super hard to do that. And, um, it worked for us at for us, at least on the scale of 20 audits. If you think about it, 20 audits was enough to start forming interesting observations.
Starting point is 01:10:11 But if you're a full-time pen tester or code auditor, there are companies that do hundreds a year. So I'm sure it would make sense for them at that scale to write and have a support at home framework on this. Number 12, acquisitions complicated security quite a bit that's the common thing in startup land is to acquire and be acquired and surely that complicates code bases and org structures and everything how does it complicate an audit uh i think it's starts being as soon as you do an acquisition you start thinking about integration and how to integrate. There's so many different ways you could do it.
Starting point is 01:10:48 You could just literally dump data from one side to the other and keep it straightforward. You could integrate via API. And it was really hard to scope things. So we'd get in and a company would be like, oh, well, we just want you to audit this app. And then we get in and find out there were significant integrations with another product that they had. And that's what made it difficult. The boundaries, as soon as you do the acquisition, they start blurring. And that's really where it gets really tricky. It also gets more expensive. Doing a audit of three products is not unsurprisingly more expensive than just doing the audit of one product. And so I think a lot of startups didn't anticipate the increase in costs from this perspective. They were like, oh, can't we just have you do your normal 100-hour block and just look at three or four more products? And it's like, well, actually, that's more code. And they interact. And so it's more money.
Starting point is 01:11:49 And that was surprising, I think, to a lot of the startups. What's interesting is how you spend that time, really. I've been thinking about this during this conversation because I'm thinking, when you get awarded this block, let's say, to audit one product, how do you discern how to spend that time? Does your client slash customer tell you? How do you prioritize? That's just got to be interesting. Like how do you – because you're saying you're going to build your own software.
Starting point is 01:12:18 That's going to spend at least an hour or two. I mean you've probably done it 20 times maybe. So maybe you're getting more efficient every time you write the script because you hand roll it every time. But prioritizing how you spend those hours is interesting. And it's got to be interesting to also get them to buy another block. Not that you're trying to sell more, but actually, well, we use this 40-hour block pretty easily
Starting point is 01:12:38 because you got issues, okay? We got to do more. I don't know. I've just been thinking about that as we've been talking how you prioritize your time yeah it's tough and like the honest truth is the block size is arbitrary in a sense um it's super standard uh amongst auditors to do this because if you think about like any other way is you could never really predict it. We had a list of hotspots that for every, like a checklist for every app, we would look at like authentication authorization logic. Like how
Starting point is 01:13:12 were they determining who could get access to what we would look at validation. So how are they validating that, you know, the parameters on an API request were in fact what they were expecting. There was a whole handful of those. And then honestly, we would also ask the devs, we would say like, what keeps you up at night? Like where in the code keeps you up at night? We wouldn't treat that as God's truth, but developers have a surprisingly good sense,
Starting point is 01:13:42 even without security knowledge, of what parts of the code are scary and they're kind of worried about. They definitely have blind spots. That's definitely true. But in terms of like, we were talking about business logic, a lot of times they'll be like, yeah, this part is super gnarly. Like there's a ton of logic here and it kind of works, but like it also breaks a decent amount and it's an important functionality for the app. So please check that out. So those two things really helped prioritize. That scary intuition reminds me of Severance, honestly. It's like, well, I can easily spot the scary numbers here.
Starting point is 01:14:17 This next one's actually quite interesting because I'll read this one if you don't mind, Jared. Number 13, there is always at least one cloud security enthusiast among the software engineers. I love that. And one if you don't mind, Jared. Number 13, there is always at least one class of security enthusiast among the software engineers. I love that. And one thing he's saying there is that you're always surprised because they never know it's them. It's like, oh, there's somebody. So our listeners, look left, look right in your team.
Starting point is 01:14:36 One of you is a security enthusiast and you don't even know it. How will they find out? They get told by the auditors. Yeah, I mean, we would. Let their managers know and let them know as well a lot of times. Yeah, I love this one. And it makes me so happy that this is the case. But even in my current role, I've had some of my engineers come up to me, chatting with them for like 20 minutes.
Starting point is 01:15:01 And then they casually drop this concept, like into the conversation. I'm like, Oh, Hmm. Tell me what you know about crypto. And they're like, what newsletters are you reading? Yeah. They're like, Oh, well, like blah, blah. And then I'm like, Oh my God, you know, a ton about security. That's awesome. You know, I think developers don't really think of it as like a viable career path in some ways because they think of security in terms of IT security, which is, you know, not unexpected. And so they're like, oh, you can like spend all of your time focusing on like software security that like people pay you to do that. Like, I just thought I would do that on my own because it's kind of fun and interesting. So I think that's part of it. But yeah, I think it was always fun working and kind of like having that aha moment with someone where they made a comment, what would be the feedback to the manager team, company, et cetera,
Starting point is 01:16:08 to like how could they better leverage this individual and their passion slash knowledge? Yeah, I think one way is a lot of times for like a secure software development lifecycle, there's a step in it where the reality is if the developer who's making a particular pull request or commit, if it's their job to kind of alert someone on the security team that this is a potentially sensitive commit and we want someone on the security team to review it. And these people who are secret security enthusiasts are great people to pick up on that. They're oftentimes like very curious and they naturally, when they see a given piece of code, think like, oh, I wonder how this would be exploitable. And so for a manager,
Starting point is 01:16:56 they can be like, hey, like when you think that, like, don't just think it to yourself, like, you know, flag the security team. If you see a piece of software and you think, what's the weirdest way I could use this? You might be a security enthusiast. Certain people can just break stuff. They just don't use things the way the rest of us... I mean, there is no such thing as the rest of us. But the way that a developer might think you would use it
Starting point is 01:17:25 they're just going to use it in this weird way and then you can just have a knack for breaking things and then if you pair that with the enthusiasm and the interest well then you might have magic on your hands there number 14 quick turnarounds on fixing vulnerabilities usually correlated with general engineering operational excellence so if you leave a list of to-dos and you come back later and they're not to done yet you know especially when it's like high high risk security vulns that's not excellent
Starting point is 01:17:56 is that you're saying that's not very good yeah would that happen a lot yeah yeah definitely i think a lot of it has to do with like, I don't know, classical DevOps-y things, right? Like, okay, it becomes a lot harder to fix security issues if you don't have an automated test suite. It becomes a lot harder to fix security issues if you deploy once a quarter. You know, the traditional DevOps things apply directly to this. I think that's a big chunk of it. So like maybe a more refined way of saying this one is like the ability to turn around quick security phone fixes is correlated to like good cicd good devops right so if you can go back to number one tie this one together and i realize
Starting point is 01:18:39 you're just doing this based on intuition or whatever, can you correlate operational excellence to startup success? Or is it just there's no correlation there either? That's a really good question. I think you can correlate it with operational success, but there's so many different axes of operational success. Are we really good at hiring? Are we really good at coming up with good like scrum patterns for our teams that they buy into? I think operational success, maybe that matters for being tied to like product success is there was just like a ton of discipline
Starting point is 01:19:20 on product development. I don't know if you guys have looked, I know. So Basecamp writes like a bunch of stuff on remote, but one of their earliest things was actually on like software development and how they do product. And they have this amazing observation, which is like, one of their chapters is called like, it doesn't matter. Like it just doesn't matter. And their point is like, a lot of times you get into these product and feature discussions and the answer is like, it just doesn't matter. Like it doesn't matter if the button's on the right or on the left, like maybe it matters like a smidgen, but it won't matter for the success of your product. And I just found that like a lot of
Starting point is 01:19:58 the startups that now are, I look back on are like worth like a half a billion dollars. And when we audited them, they had like four developers. And like, I was like, I don't know if they're going to make it. And I'm like, oh, I'm so pleased. They were just really disciplined about that. They're just like very focused. And that's a form of operational excellence. They may not have had, like they may have been messy in other ways, but they were really
Starting point is 01:20:22 disciplined where it counted. Where it really mattered. Yeah. but they were really disciplined where it counted. Where it really mattered, yeah. What about the ones where you say the best cases were clients who asked us to just give them a constant feed of anything you found and they'd fix it right away? What about those ones? Those ones who yearned for where are the bugs, where are the issues, we want them to be squashed and fixed right now.
Starting point is 01:20:42 Versus can get scrum better, can hire better, like specifically this engineering practice where security and these vulnerabilities you showcased, like correlate that to success if you can think back. those same people who asked us to give them a steady feed were very like agile and informal in their processes. Uh, and like had a bias towards that. That's the closest I'll be able to say, which is maybe, you know, in some definitions of operational excellence isn't, but they were like very informal. They're like, yeah, we'll just give you access to our GitHub repo and like, just make an issue there. Like, don't give me a spreadsheet i don't want to have to like convert a spreadsheet to like our jira board and then convert the jira board ticket to like an issue and pr and github just like yeah less red tape yeah yeah exactly and they were they sometimes went a step further like make a pr would you would you be willing to make a pr and those were like ah this
Starting point is 01:21:45 is great like yeah they there's high trust there of um you know that we wouldn't screw things up and i know it felt good it felt very productive to be on those teams and i'm sure the engineers on those teams felt the same way the analogy i would probably bring in here would be and i think this might be the actual analogy so correct me if it's not. But there's a saying that says the car doesn't make the driver, the driver makes the car, right? And so if the product slash company were trying to gauge the possibility of success based upon the ability of the thing that gets them there, which is the software, the product, like the vehicle, so to speak. Just because the software and the team and everything is secure and amazing doesn't mean that the product would actually win. So it does take an adequate product to make it in the marketplace.
Starting point is 01:22:36 Assuming no one's going to work on something that isn't worthwhile, but that's my point. It's almost like if you've got good tires in your car, does that mean you can corner? Maybe. Maybe. You might be the've got good tires in your car, does that mean you can corner? Maybe. Maybe. You might be the kind of driver that can do it, but somebody else can drift and maybe you can't.
Starting point is 01:22:52 That depends on the tires and depends on the driver. Something in there is probably my assumption based on what I'm hearing from you. Lots of factors. Lots of factors. Sounds like JWT is hard. JSON web tokens. Not to change the subject, but number 15.
Starting point is 01:23:10 People get JWT wrong. People get webhooks wrong. These were common areas of vulnerabilities. What's the lesson learned there? Just don't use JWT? Yeah, I think you could say don't use JWT, but like. If you're using JWT, double check your implementation. Yeah. Put the podcast down and go check it right now. I think it's just, they're a good intersection of things that devs don't understand super well.
Starting point is 01:23:39 So both JWT and webhooks fall in that category. Like they're not like a super common thing that you come across every day and things where security does actually matter quite a bit. And so, I mean, with webhooks, it's, it's pretty simple. Like, you know, essentially when you have a webhook, you have an open receiver to someone else's that you're allowing a third party to like hit kind of arbitrarily. And the big problem there is, you know, I may be setting up a web hook with Stripe to receive, you know, sign up requests from them in real time, but there's nothing that says that Stripe is the only person who can hit that endpoint. And so I think it sounds simple when I say it like that, but a lot of times we tell devs that and you just see their eyes widen slowly.
Starting point is 01:24:28 And then you'd always see the guy who leans in. He's very clearly going to the repo and trying some stuff. Right. It's not super clever. It's just one of those things where- They just forget to set up the authentication part to authenticate the third party exactly there are some webhook implementations by relatively large third parties uh that i won't name that don't allow for authentication that's a whole separate issue that is really bad but yeah generally it's like oh like i didn't read the part of the document like that red like box in the
Starting point is 01:25:02 documentation that like said by the way, you should also include this authentication token with the request in webhooks. So if you don't do that, someone could denial service you, somebody could fill up your database with empty, just garbled data. What other stuff could they do if they can hit your webhook endpoint? I guess it depends on what it does. Yeah, yeah yeah a lot of times it's pretty bad so like maybe you're an e-commerce site and
Starting point is 01:25:32 the webhook allows you to process like returns from like a third-party service used for returns and so now all of a sudden you get like a lot of fake returns a lot of times like i said like stripe was a really common one where you know know, now you're dealing with money. So that's pretty bad in general or subscriptions. Yep. Yeah. So usually it like goes back to the business logic thing. A lot of times these were like also you have the webhook there to perform some pretty important business logic function.
Starting point is 01:26:03 Yeah. to perform some pretty important business logic function. Yeah, so one example, Adam and I were just talking about a Stripe webhook today about setting one up for our Changelog++ members. When they sign up for Changelog++, it goes through Stripe, and we can set up a webhook so that we receive notice of that and then generate them a coupon code for a free sticker pack on our merch shop. And if we just let anybody hit that endpoint at any time, then they could just generate a whole bunch of coupon codes that are junk or send them out to their friends or whatever. So JWT, it sounds like you said, not only do people get it wrong if you're implementing
Starting point is 01:26:37 it yourself, but also a lot of libraries have vulnerabilities as well. So it's pretty fraught, it sounds like. Yeah. Yeah. There was a pretty bad kind of class break maybe five years ago with almost every single JWT library out there where in a JWT token, one of the fields specifies the algorithm that's supposed to be used. And for some insane reason that I still don't understand, you can literally set that field to the word like none or like no algo. And when you do that, the intention is like, OK, this is a JWT token that's probably coming from an internal system. And I don't need to like validate that the token is actually signed. But it turns out that almost every JWT library out there didn't do some sort of check when there was no algo included to see if the signature field was blank. And so an attacker just set no algo and then the signature field would never be validated. And like it like totally broke JWT.
Starting point is 01:27:44 So that's an example of like where the, even the people writing the open source code got it wrong. But hey, it was also fixed. So the flip side of it is like, no one ever did better than those libraries. Like every single time people decided to hand roll JWT, they did something wrong because, you know, signature validation on JWTs is complicated. It's like, it's a crypto thing and turns out crypto is hard. Like, and so, yeah, I mean, I think that's an example where even though the open source stuff had vulns, I would still highly, highly recommend everyone use the open source stuff, um, rather than rolling their own. Don't roll your own crypto. Don't roll your own JWT. Don't use MD5 anymore.
Starting point is 01:28:26 Your last point, number 16. We've made it, friends. We've made it to the end. There's a lot of MD5 out there still. What a marathon. Yeah, it has been. It's been a good one, though. But it sounds like most of the MD5 out there
Starting point is 01:28:39 isn't really doing anything that's damaging. Yeah, I feel like most people know now that MD5 hashing is not the right hashing algorithm to use. You should be using SHA-3 now or something. And there's been a lot of publicity on that front, but it's just, this one's like, we're now getting at the bottom of the list. This one is just kind of like a quirky hipster observation. It was like, actually, like, because everyone knows this, I'm going to say that it doesn't matter, actually. We just found that especially, I don't know, we never found a case where someone was using MD5 to hash, for example, a password. So that's what you don't
Starting point is 01:29:16 want, right? You don't want people to use it as a hashing algorithm for passwords. But it turns out people still use MD5 for other reasons that are good. It's pretty fast. I think someone on my blog pointed out that SHA-1 is faster because there's a lot of hardware optimizations for it. They leverage some special hardware modules. But it's fast. It turns out that people use MD5 for things other than super secure things. So that was just kind of a quirky observation. Yeah. For those who may not know, so it may be bad to use it, but why? What happened with MD5?
Starting point is 01:29:51 What's the why for that? Yeah, yeah. So if anyone wants to learn about crypto, hashes are like a really great place to start. And basically the idea with a hash is you take some arbitrary sized input and it will map it down to another, like usually in the case of hashing algorithms, it'll match it out to like a base 64 looking string. And the way security with hashing works is you don't want what's called collision. So like you don't want two's called collisions. So you don't want two inputs that are completely different to map to the same base 64-ish string. The reason why is because usually
Starting point is 01:30:32 the exact reason why you use the hashing algorithm is because you believe that collisions- Unique, yeah. Yeah, almost never happen. And so MD5, it turns out, over the last 20 or 30 years, people have gotten more and more clever about identifying collisions. And to the point where now, like, it's like pretty arbitrarily simple, given almost any input to find another input that's a collision.
Starting point is 01:30:55 And there's a lot of clever math that goes into that that I don't know. But that's generally how it works. Yeah. Are you a fan? Sorry, Jared, I have to. Yeah. Are you a fan? Sorry, Jared, I have to do this. Are you a fan of Silicon Valley, the TV show? Silicon Valley? The TV show, yeah.
Starting point is 01:31:11 Yeah. Funny story about that. I think it was the first or second year at PKC. So this was like 2014, 2015. I think it had been out for a while maybe, but we were actually in the middle of building a product. So we built an end-to-end encrypted alternative to Slack called Balboa. Awesome experience. We decided to build it in Clojure, which is probably not the best choice ever, but
Starting point is 01:31:35 it's a great language and it's a lot of fun. And so we were in kind of like the middle of like building a product and like thinking about VC funding. And so we started watching the show with the original three founders and halfway uh, halfway through the show, I look over, I'm like, wait, um, Dan's missing. Like one of the founders were like, where the hell is Dan? We go into like this bedroom and he's like on the floor in like a fetal position. And we're like, are you, are you okay? He's like, I can't take it. It's too much. So that's my view on Silicon Valley.
Starting point is 01:32:10 It's an awesome show. But if you're actually going through. It was too close to home. Yeah. If you're actually going through that, it's some of the most painful TV to watch. Because it's just so true. It's great. Yeah. Okay.
Starting point is 01:32:24 Okay. Well, thank. Yeah. Okay. Okay. Well, thank you for that story. Season six, the very last season, I don't want to spoil the show for anybody, but like essentially they were, they built an AI that could predictably undo a secure, an algorithm like this. You know, they could undo a hash. So they had broken MD5, they had broken SHA1, they had broken through all these different encryption algorithms, essentially. That's why I brought it up.
Starting point is 01:32:49 I was just curious if you had known that, because that's part of the show, and it's pretty interesting to see that at some point, AI could be so smart to defy our security protocols by reverse engineering the algorithms that protect us and our privacy. I have not seen that. That sounds incredibly scary, though. Yeah. Well, if season one scared the crap out of your co-founder,
Starting point is 01:33:14 maybe you shouldn't watch the season six because that is the ending. You're not going to make it as six seasons. Very good, though. But that's an interesting thought pattern. It's like we're producing such powerful computers, and while collisions may be the issue with NB5, which is sort of basically an implementation flaw of the encryption algorithm, at some point can we develop such technology that we can break
Starting point is 01:33:35 these encryption algorithms otherwise, like through AI or through learnings? That's an interesting thought pattern. So I thought maybe someone like you might have watched that and could entertain me a bit. Yeah. No, it is, in some ways, AI is like a great, or sorry, hashing is like a great use case for AI, because you can generate inputs, right?
Starting point is 01:34:00 So a lot of times coming up with effective machine learning requires you to have a very large sample size, and you can't just make it up. But in crypto, it turns out you can. It's called the oracle. Right. I ask the MD5 oracle and I get a response, and I can do that a million times and then hand the output to a machine learning algorithm and say, hey, are you able to correlate one to the other? Right. Like I just gave you five billion inputs. You should be able to. And I like that idea a lot. Yeah. Well, that's the scary future. That's what had one of the stars say, how should I feel right now? He said,
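Ken's oracle idea, querying MD5 itself to mass-produce labeled training pairs, can be sketched in a few lines of Python. This is only a toy illustration of the dataset-generation step he describes (the `md5_oracle` name and the sample counts are arbitrary choices here), not a claim that any model can actually learn the mapping:

```python
import hashlib
import os

def md5_oracle(msg: bytes) -> bytes:
    """The 'oracle': ask it for a digest as many times as we like."""
    return hashlib.md5(msg).digest()

# Generating labeled training pairs is trivially cheap -- that's
# Ken's point about crypto as an ML target.
dataset = [(msg, md5_oracle(msg))
           for msg in (os.urandom(16) for _ in range(10_000))]

# Quick sanity check on why learning it is hard: a good hash's output
# bits look like coin flips, so the fraction of 1-bits sits near 0.5
# no matter what inputs we feed it.
ones = sum(bin(int.from_bytes(digest, "big")).count("1")
           for _, digest in dataset)
total_bits = len(dataset) * 128  # MD5 digests are 128 bits
print(f"{len(dataset)} pairs, fraction of 1-bits: {ones / total_bits:.3f}")
```

The dataset comes for free, but the bit-balance check hints at the catch: the outputs are statistically indistinguishable from noise, which is exactly what a model would have to see through.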
Starting point is 01:34:39 for you, abject terror. And that's all I'm going to say. It was just hilarious. Phenomenal lines in this show. I mean, you're missing out. If you've heard me mention Silicon Valley and Jerod rolls his eyes, which you don't get to see because it's an audio podcast, you're missing out. Watch it even if you curl up into a ball in season one. Persevere. Curl up into a ball on your couch and watch some Silicon Valley. Longtime listeners of the show have already been spoiled multiple times as Adam brings it up pretty much weekly, but I have to, it's so relevant. There really
Starting point is 01:35:09 is. Well, Ken, we've reached the end of your learnings and it's been a long one. It's been a good one. Appreciate you joining us and talking through all this. First of all, appreciate you writing it all down so folks can learn alongside you. There's definitely a lot to be learned from people's experience, especially being like an outsider, getting to see the inside of so many startups and how they do what they do, what they're good at, what they're bad at, what correlates and what does not correlate in the startup success. Interesting stuff for sure.
Starting point is 01:35:40 Yeah, thanks guys for having me. It's been a pleasure. Is there anything we didn't cover, any ground left fertile, that you want to talk about before we call it a show? I think, you know, I was very surprised that the article was so popular. I got over, I think, 300,000 views on my blog as a result of this. I don't know how much you guys are following the market and the tech scene, but I think it's on a lot of people's minds. Yeah, well, it's important, right? Especially if you're at a startup and you're in engineering. And I don't know, just my meta observation: I think maybe one of the reasons why this type of article was more
Starting point is 01:36:25 interesting is that a lot of devs are asking questions about what has been considered standard startup truth. For example: if we aren't growing our engineering organization, we aren't growing as a company. And the first point kind of speaks to that. And I think, I don't know, I think it struck a chord because people are starting to question a lot of, you know, dogma within the engineering world as a result of some of the market changing and forcing us to ask really hard questions of ourselves. Well said. It's been good hearing your wisdom, both on this show and reading it as well. Thank you for,
Starting point is 01:37:08 as Jerod said, for putting this out. I mean, if we don't have folks like you go down hard roads and do a retrospective for us to learn from, where would the world be? We need more people like you, Ken. So thank you for going down that hard road. Thank you for sharing that wisdom. We appreciate your time here today.
Starting point is 01:37:24 Yeah, thanks, guys. That's it. You made it to the end. Hopefully that fact warms your completionist heart like it would mine. Now it's time to subscribe to the pod. If you haven't yet, head to changelog.fm for all the ways.
Starting point is 01:37:40 And if you've been with us for a while and get value from the show, pay it forward by sharing the changelog with a friend. Send them a tweet, an email, a text, I don't know, post it on your favorite BBS, whatever works. Just tell them to thank you later, and we'll thank you right now. Thanks. You're awesome. New merch alert! A sticker pack is now available in our shop.
Starting point is 01:38:01 Buy one for yourself or send it to a friend at changelog.fm slash merch. Changelog++ is at changelog.com slash plus plus. Thanks again to Fastly for CDN-ing for us, to Breakmaster Cylinder for keeping our beat supply secure, and to you for listening. We appreciate you. On the next episode, we are joined by James Long, who recently open sourced Actual, his local-first personal finance system that he's been working on for over four years.
Starting point is 01:38:41 So stay tuned for that. We'll talk to you next time. Thank you. Game on.