The Changelog: Software Development, Open Source - Complex systems & second-order effects (Interview)

Episode Date: January 10, 2022

Paul Orlando joins Jerod to talk through some unintended consequences that occur when systems operate at scale. We discuss Goodhart's Law, The Cobra Effect, how to design incentive systems, dependency... management decisions, the risks of autonomous vehicles, and much more along the way.

Transcript
Starting point is 00:00:00 Welcome, friends. You're listening to The Changelog, a podcast featuring deep discussions with the hackers, leaders, and innovators of the software world. On this episode, I'm joined by Paul Orlando to talk through some unintended consequences that occur when systems operate at scale. We discuss Goodhart's Law, the Cobra Effect, how to design incentive systems, those oh-so-hairy dependency management decisions, the risks of autonomous vehicles, and much more along the way. Quick shout-out to our partners: Fastly, Linode, and LaunchDarkly. We love Linode, they keep it fast and simple.
Starting point is 00:00:34 Check them out at linode.com slash changelog. Our bandwidth is provided by Fastly. Learn more at fastly.com. And get your feature flags powered by LaunchDarkly. Get a demo at LaunchDarkly.com. This episode is brought to you by Influx Data, the makers of InfluxDB, a time-series platform for building and operating time-series applications. InfluxDB empowers developers to build IoT, analytics, and monitoring software. It's purpose-built to handle massive volumes and countless sources. Check out influxdata.com slash solutions: network monitoring, IoT monitoring, infrastructure and application monitoring.
Starting point is 00:01:29 To get started, head to influxdata.com slash changelog and click get InfluxDB. Again, that's influxdata.com slash changelog. So I am joined by Paul Orlando, who writes on systems, complexity, and second-order effects on his Unintended Consequences website. Welcome, Paul. Thanks, Jerod. Good to be here. Excited to have you. I've been enjoying your writings for a little while now, and I think a lot of them, if they
Starting point is 00:02:16 don't directly relate to the craft of software development, they definitely are tangential and have interesting implications for software folks. So let's start by learning about you and how you started writing about this topic. Sure. Tell you a little bit about how I got started with it first. So I was thinking about this just today, you know, nothing systematic about it whatsoever. So I saw a bit of news one day. So this is, I guess, two and a half, maybe three years ago, when Google came out with Google Duplex, that voice AI, and they did this really cool demo.
Starting point is 00:02:52 I don't remember. It was like ordering flowers or doing laundry or whatever. Yeah, they'd call and order things for you and interact with a real human or something. Exactly. They did this demo, and a friend of mine had actually had a voice AI startup that he had shut down maybe a year before that.
Starting point is 00:03:10 And so I was kind of like messaging with him. Was his timing off? Or is he going to revitalize it somehow? But it got me thinking initially about scale effects that would be possible with something like a voice AI. So in the same afternoon, I bought the domain name, put up the first blog post, you know, like a WordPress site, which is why the domain is, you know, two very long and difficult-to-spell words, which I thought at the time was clever, but... and also, like, you know, a dot ES at the end to make
Starting point is 00:03:43 it even... Right, almost too clever, perhaps. Yes, make it even more difficult. But that's literally how I got started, just kind of having this spark and then just cranking out a first quick post about what some of those unintended consequences with the existence of a voice AI might be. And then I kind of had this
Starting point is 00:04:06 thing, you know, so, uh, you know, I didn't, I didn't write on like an existing blog of mine or like a Medium post or whatever. I had this thing. So it just became something I was going to return to. And the first few months, I don't know, I might've written every couple of weeks whenever something interested me. Again, I was not trying to make this a big part of what I did, but kind of early on, maybe a couple of months in, one of the posts that I wrote got to the top page of Hacker News. And that was the first time I had experienced that drug, you know, that feeling. And it kind of encouraged me to keep going. So long story short, I've written over 100 of these articles on various unintended-consequences topics and things that happen.
Starting point is 00:05:00 It could be something in history, something in the news. But I'm really trying to educate myself. And along the way, I've just discovered, yeah, a bunch of other people seem to get some value out of it too. It's fascinating because we work so often in the small, we have a hard time grasping the implications. And this is all new, like networked systems is new to all of us, right? I mean, maybe you're going back 50, 70 years. But to many of us, it's like, especially once social media blew up, like the implications of software at scale or systems at scale is something that we're all learning and grasping and realizing maybe years later, uh-oh, this was actually not a great idea.
Starting point is 00:05:39 I'm curious how your curiosity with this topic and maybe your expertise, you write very well. I learn when I'm reading your stuff. It's insightful. It's explanatory. What's your expertise? What's your background? Are you a writer?
Starting point is 00:05:53 Are you an economist? Are you a psychologist? Where are you coming from? Sure. So I've worked in tech my whole career. I have been just about always on the business side. So I started out early on doing voice over IP work. This is a little before Skype was around.
Starting point is 00:06:18 So it was about connecting telecom carriers to each other and routing traffic differently. And that's certainly when I got a great appreciation for how, in that case, telecom networks can lead to these unusual outcomes. We can dive into that maybe later on if it's a fit. But I had the experience also of starting a startup and making not 100%, but maybe 90% of the possible mistakes in doing that. And just really getting interested in how people figure things out and how things end up being different than you expect. So in our case, it was also a telecom-related, you know, business. What we ended up doing was connecting patients: for the patients who were going through a serious health situation, connecting them to another patient. And that was part of their recovery process. So you get to talk to another person who has kind of gone through some difficult recovery, and we protect the patient privacy. And, you know, we would kind of push out whatever their doctor or support group leader wanted afterwards. It could be a survey. It could be, here's some health goals
Starting point is 00:07:30 for you, you know, this next week. But getting to that point was this, like, jumbled, you know, certainly not systematic process for us, where we had all these different ideas of how people were going to use it. And then we were surprised at what ended up emerging and what people were happy to pay for, you know, later on. But I got interested in that process, like how founders figure things out or often don't figure things out. So I was in New York at the time. I ended up visiting a bunch of startup accelerators just to, like, mentor, you know, meet people and kind of talk and, you know, I guess coach back then. And I realized I wanted to make that a bigger part of what I did. And New York back then, so this is like 2011, 2012, New York had probably 10 startup accelerators. And, you know, I didn't think it made sense to try to start the 11th one. So I was looking for a new market to enter.
Starting point is 00:08:28 And earlier in my career, I had actually worked in Hong Kong. I worked in China a lot, other parts of Asia. So I did a scouting trip just to try to suss out, okay, maybe Hong Kong or maybe some other location in Asia might be a good fit for something like a startup accelerator. And I landed with a bunch of meetings set up. I met a ton of new people, got introduced around in Hong Kong and ultimately determined, okay, yes, this market is, I think, ripe for something like a startup accelerator. Kept the conversations going when I went back to New York, but ultimately spun up a pilot program
Starting point is 00:09:08 and then raised a small fund to support an actual startup accelerator and ended up building the first program in Hong Kong. And then from there, I got pulled in to run this unusual startup accelerator that was based in Rome. Now I'm helping this big nonprofit build a community health-related accelerator. And day-to-day, I've been at USC, so the University of Southern California. I teach there, and I also run the university startup incubator program. So I've kind of been in this early stage venture space for a little while. Right.
Starting point is 00:09:50 And I don't know, for me, there's a lot of overlap with the systems or unintended consequences interests that I have. It's certainly a side project, but that's kind of my process for how I got here. Very cool. Well, let's dive right in, shall we? I picked out a few topics, things that might land on Changelog News. You might find it on Hacker News, perhaps, some things, but we hope we found a good cross-section here. So Goodhart's Law is one that we have discussed on the podcast. I think it was last year with Dave Kerr. I did a show called Laws for Hackers to Live By, and Goodhart's Law was one of the laws that we discussed. And you open up a post about that that says Peter Drucker said that if you can't measure it, you can't improve it. But he didn't mention the second-order effect of that statement: what changes after people get used to
Starting point is 00:10:50 the measurements. So this keys into what Goodhart's law is, which I feel like has kind of been morphed a little bit to be applied to our circumstances. Do you want to break down Goodhart's law for us? Sure. So the explanation of it that I like, and what I think is the most commonly heard one, actually, it doesn't come from Goodhart himself. It comes from an anthropologist whose name was Marilyn Strathern. So usually when you hear Goodhart's law, you hear it as when a measure becomes a target, it ceases to be a good measure. And so –
Starting point is 00:11:26 And Goodhart never said that. He said something similar. Yeah, he was – I mean, there's like so many variations of this. Everything from like going back to universities in the UK hundreds of years ago and how they were being measured when people were trying to, for the first time in history maybe, be quantitative about outcomes. There's more economic-focused versions of this. I like this formulation. When a measure becomes a target, it ceases to be a good measure. I think it's really easily understood. I certainly think there's like a
Starting point is 00:12:06 crossover there to, you know, could be software development, could just be general business, economic background. I kind of further summarize this, you know, maybe in saying that Goodhart's law has origins in a couple of different places. So one is that behavior change that occurs when people start trying to achieve a metric rather than a goal. In other words, here are the targets that you have. We think these targets are connected to this goal, but really we're going to measure you based on these targets. So it's a little different than trying to achieve a goal. And then the other origin, which is related, is that we create some problems because we are choosing proxies for the goals themselves. So I usually think of that one
Starting point is 00:12:56 more in a healthcare setting. In other words, we have some ideas for what it means to be healthy. And somebody goes to the doctor and they say, you know, they've got some health complaints. The doctor realizes, hey, your blood pressure is high compared to what is considered normal. So they put you on a blood pressure medication that itself has some side effects for you and you end up feeling not healthier, but we did achieve that proxy of, okay, well, health is determined by a number of different factors. We think a metric that's related to your general health is your blood pressure. We're going to put you on medication, but it has these other outcomes
Starting point is 00:13:37 that are not beneficial for the patient. Yeah. So let me repeat back and make sure I'm following you. I think I am. So there's almost these two strata. The first one is by knowing the objective, the objective becomes less useful because I'm targeting that thing. And the other one is the objective or the target isn't the actual goal. It's just the closest thing we can get to the goal. And because there's that, what engineers would call an impedance mismatch, it's not really, but because it's not a one-for-one,
Starting point is 00:14:09 you're not actually optimizing for what you want to be. It's just like, because health is like, how do you measure health, right? You do like a heuristic. And so it's like, we take these things, we try to get these proxies, and we optimize for the proxies. You can't, almost de facto, you're not optimizing for the thing that you want to be. You're just getting close and this can backfire. Yeah, that's it. And it reminds me of something that you said actually early on, which is we're
Starting point is 00:14:35 in the relative, you know, early days of dealing with network systems or large systems and something like Goodhart's law. You know, if you think in the history of humanity, you know, people were not living in these highly connected societies. I mean, like internationally connected, right? You'd be highly connected in your local group, which might be pretty isolated from, you know, others, you know, relatively speaking. You wouldn't really have something like Goodhart's law come into place. It wouldn't be an issue. First of all, people were not, I think, thousands of years ago, at least, not using metrics in the way that we think of them.
Starting point is 00:15:16 Yeah, there's a sophistication involved. Sure. And then you also don't have these scale effects. So if you do have a small society that makes bad choices, those outcomes are pretty local. And it doesn't spread globally. So most of the impact in the past was local. And in a highly connected world, we kind of have to be a little more careful about some of the big actions we take, because they can lead to these really big unintended effects somewhere else. Right. So there's a couple of ways we can exemplify this. The common one in software that all developers innately or inherently understand is if you use lines of code as a measurement for productivity, then you just failed.
Starting point is 00:16:04 Especially if you know that's the target. As soon as it's the target, you're like, oh, okay, I get paid per line of code. Every self-respecting developer knows how to optimize for that particular target, and that's not actually a good proxy for productive work. That's an obvious one.
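To make that concrete, here's a minimal, hypothetical sketch in Python of the lines-of-code trap; the numbers and names are invented for illustration, not taken from the conversation:

    # Hypothetical sketch: paying per line of code rewards padding, not value.
    def reward_by_loc(lines_of_code: int, rate_per_line: float = 1.0) -> float:
        """Pay per line of code -- the gameable proxy metric."""
        return lines_of_code * rate_per_line

    # Two implementations of the exact same feature:
    concise = {"lines_of_code": 40, "features_shipped": 1}
    padded = {"lines_of_code": 400, "features_shipped": 1}  # 10x padding, same value

    for name, impl in [("concise", concise), ("padded", padded)]:
        print(name, "reward:", reward_by_loc(impl["lines_of_code"]),
              "value:", impl["features_shipped"])
    # The padded version earns 10x the reward for identical delivered value:
    # once the measure became the target, it stopped measuring productivity.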
Starting point is 00:16:22 But it gets less and less obvious as you get better proxies, but you're still working in this uncanny valley for what you're actually trying to get to. It's a difficult problem, right? It is. Or using bugs fixed as well. If the same person
Starting point is 00:16:37 is writing the code and then being compensated for fixing the bugs. Or that's a metric. I can find a lot more now if I also create them intentionally. Like if there's a bad actor involved. Yeah, a lot of game theory comes into this. So you give an example in your post about Groupon
Starting point is 00:16:58 with regards to, I think it was a vanity metric or a specific metric for a company that was either IPOing or raising a round. And here is another situation where the measurement is like, how good of an investment is this? Or how good are this company's prospects, or whatever? Which is really hard to measure. And so we have lots of different criteria that we look at. I think in that case, it was like, they made it look like they had great stuff going on in China,
Starting point is 00:17:27 but they did it by faking numbers. Do you remember the example there? Right, and this came from a conversation that I had. I was in Shanghai for work, and it was shortly before Groupon's IPO. And I was talking to this friend who told me, yeah, they're hiring like crazy in China. They really don't care who it is that they're hiring. They're not looking for people specifically
Starting point is 00:17:51 who are great at that telesales kind of a job. They'll overpay. It doesn't matter. The reason is to bump that IPO price another 10%, say. Right. We just need to show, or the company just needs to show, yeah, we've got, I don't know what the number was, 1,000, 5,000 people in China. In other words, that's kind of like a vanity metric for, all right, we have a lot of prospects, a lot of promise. Like salespeople, right? Right. And if the actual goal is bump up the IPO price, it's like, oh, okay, that makes total sense.
Starting point is 00:18:33 Yeah. Not for building a sustainable business. You would hire completely differently, or you would hire more slowly, or you'd hire a different background. Right. But you have somebody reading like a prospectus and all they see is, well, 5,000 people hired in China. This must be an amazing opportunity. I should invest. But yeah, once you dig in,
Starting point is 00:18:59 and I'd even say like maybe another theme for me in the writing, how I've approached this topic: I did not have a background in systems analysis. I wasn't doing this as, I don't know, an economist, or I didn't have that theory. So I found that I was always working to make sure that I understood what I was eventually going to try to write about. So that often meant, okay, yeah, I'm going to go and I'm going to read all the papers that were cited in some other article that I read. And then sometimes you find, oh, this person is citing a paper and it actually is not in support of what they are claiming. It's, you know, it's like the opposite. Right.
Starting point is 00:19:43 So once you dig into it, you find, oh, the story is a little different than what's presented. And this is another unintended consequence of just say something like a fast-moving news cycle. People are presented a ton of information. Nobody has the time to really do the research. And as a result, the quality of the information that you're presented might be pretty low. It might also be hard for you to know that. Right. Unless you're like me and willing to dive into one of these topics early in the morning. Right.
Starting point is 00:20:16 But you can't do that all the time, of course. Well, that's why I think we need experts in different niches willing to do kind of the yeoman's work and to become the critic or the curator of a particular topic or niche. Then other people can trust and vet that person, and maybe they become less trustworthy and so they're no longer the critic. But we need experts, you know, watching the news cycle, because it's so fast, it's so loose, and it's misincentivized around clicks and traffic and all these things that we've learned have huge unintended consequences. The metric of clicks or page views for news has caused untold consequences in our society, because that incentive is not aligned with
Starting point is 00:21:09 high-quality, well-reported, thoughtful news and analysis, right? It's all about speed. It's about sensationalism. It's about things that don't optimize for truth. Right. I had written, and I have forgotten the title of this post, but a while back I wrote about that topic. So how the changed media business model resulted in a lot more news, more people reading it or viewing it, but more polarized societies, or, like, you know, less accurate information. And yeah, that idea... if you go back and look at the printed newspapers, so you know, you're at least going back to like the 90s there, right? Or even the news that was on like the three major networks, you know, back in that era, there were not huge differences in the way the same story would
Starting point is 00:22:07 be presented. A little bit, but for there to be a successful business model, any of these news outlets had to more or less go for the mainstream. So they couldn't be too skewed in any one direction, because then they couldn't reach that target market. It was mass distribution. In the modern era, you know, you could have a newsletter that's, uh, paid, that has whatever, a thousand subscribers, and, you know, with paid subscribers it provides enough income for that person to keep writing it. And their focus is on some really strange niche that you never knew existed, or maybe did not exist in the past, because it's kind of been created as the business models have changed. Yeah, so there's maybe a bit of nostalgia for those days a generation or two ago when it comes to reporting the news. Certainly, business models had a part in that change.
Starting point is 00:23:23 This episode is brought to you by our friends at Square. Square is the platform that sellers trust. There is a massive opportunity for developers to support Square sellers by building apps for today's business needs. And I'm here with Shannon Skipper, head of developer relations at Square. Shannon, can you share some details about the opportunity for developers on the Square platform? Absolutely. So we have millions of sellers who have unique needs. And Square has apps like our point of sale app, like our restaurants app.
Starting point is 00:23:51 But there are so many different sellers, tuxedo shops, florists, who need specific solutions for their domain. And so we have a Node SDK written in TypeScript that allows you to access all of the backend APIs and SDKs that we use to power the billions of transactions that we do annually. And so there's this massive market of sellers who need help from developers. They either need a bespoke solution built for themselves on their own Node stack, where they are working with Square Dashboard, working with Square Hardware or with the e-com what-you-see-is-what-you-get builder, and they need one more thing, they need an additional build. And then finally, we have that marketplace where you can make a Node app and then distribute it, so it can get in front of millions of sellers and be an option for them to adopt. Very cool. All right. If you
Starting point is 00:24:37 want to learn more, head to developer.squareup.com to dive into the docs, APIs, SDKs, and to create your Square developer account. Start developing on the platform sellers trust. Again, that's developer.squareup.com. Before we leave Goodhart's Law... well, maybe just attached to Goodhart's Law, let's hop to the Cobra Effect, because these two things are interrelated. This has to do with incentive structures and the design of incentive structures. So many of us are building these things, or maybe we're living inside of these things with regards to social media. You mentioned hitting the Hacker News homepage, right? And you were incentivized to get back to number one again, you know, someday soon
Starting point is 00:25:45 as you felt that dopamine rush of having your words read by many people and debated and enjoyed, and maybe not enjoyed. And all that comes alongside that. Sometimes it's the best of times and the worst of times to get all that attention all at once. But this Cobra effect, you've written about it a couple of times, and this is really the idea of, you know, sometimes your incentive systems go wildly haywire, sometimes because of Goodhart's Law or for other reasons; they're just not well designed. Tell us the Cobra effect, how it got its name. I think that's one of the best examples of how it can go wrong. Sure. So the Cobra effect, and this is also sometimes called adversarial Goodhart, or perverse effects. In other words, you're trying to improve some problem and the actions that you
Starting point is 00:26:37 take end up making it even worse than it was before. In other words, this requires people. So, like, people, you know, this is where the adversarial part comes in, right? Yeah, people are adversarial. They can be. Or if you're presenting them with a silly rule, or there's a new regulation, people will find a loophole that ends up harming that goal. The Cobra effect... probably just the name itself also overlaps with unintended consequences. So the story behind the Cobra effect is something that, as far as we know, never happened. But the story is during colonial India, so when the British were in India,
Starting point is 00:27:19 some British administrator decided that they wanted to reduce or eliminate the number of cobras. Maybe this is in Delhi. I'm not sure where. And so to try to achieve that goal, they put up a bounty and they say, okay, I'm going to pay a bounty if you show up with a cobra skin, and that's going to get rid of the cobras. And then the story, of course, is, well, people discovered, oh, so I should just raise cobras and turn them in for the bounty and raise more cobras and turn them in. And then the British realized what's happening. They eliminate the bounty and then everybody releases the cobras.
Starting point is 00:28:00 And also, of course, houses that were occupied by the colonists themselves. So they set up this bounty to pay people to kill rats. And I can't think of a worse job than having to go into a sewer and, like, hunt rats, you know, for a bounty, but that's what people were doing. Then at one point, the French said, you know, enough with the dead rats. I don't want to collect these things.
Starting point is 00:29:08 Just the tail is enough. You know, just show up with the tail. I'll pay the bounty. And so then what was discovered was, you know, people discovered rats running around with no tails. In other words... Instead of killing them, they just cut the tails off. Yeah, I'm just going to collect the tail, if that's what you're paying me for. Or, you know, raising rats... again, probably a little easier to raise rats than to raise cobras, you know, for the bounty.
Starting point is 00:29:33 So when I looked at this, you know, I looked into the history of this effect. And, like, the third famous example is with feral pigs in the US, outside of a military base. So the first thing that struck me was the three famous examples all involve animals. And I started thinking, well, who is making up these rules? Like, certainly not somebody who understood anything about these animals' life cycles, like how they naturally reproduce. And I started just, like, diving into... now I know, like, more about the life cycle of the cobra than I ever thought I would, or rats, or pigs. But I came away and I wrote that first article on the cobra effect,
Starting point is 00:30:20 saying, okay, the solution is actually wrapped up in the biology of the animal. So in other words, if you know what the gestational period is for a cobra, a rat, a pig, you can design that incentive program around that. So for example, and I proposed, here's a way that you might structure the actual cobra or rat or pig example. So if you know it takes however many months to get from the mated cobras to the eggs to the hatchlings and on, you then just work around that. So you either have a short-term incentive program. Hey, we're only paying it this month. Or you pay at a rate that it makes no sense for people to actually raise the animals.
Starting point is 00:31:08 In other words, okay, I've got to house them, feed them, keep them, deal with the danger of having them around. You can do something around just that biology. It's tough, but at least you don't get this spiraling-out-of-control effect of people just breeding more and more of the animals for the reward.
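As a rough illustration of that design principle, here's a small, hypothetical Python sketch; every number in it is invented, but it shows how you might sanity-check a bounty against the animal's biology before rolling it out:

    # Hypothetical sketch: does a bounty make breeding the animal profitable?
    # All figures are invented for illustration, not from the episode.
    def breeding_is_profitable(bounty: float,
                               monthly_upkeep: float,
                               months_per_cycle: int,
                               offspring_per_cycle: int) -> bool:
        """True if raising animals for the bounty beats the cost of raising them."""
        cost_per_cycle = monthly_upkeep * months_per_cycle
        revenue_per_cycle = bounty * offspring_per_cycle
        return revenue_per_cycle > cost_per_cycle

    # A bounty priced above breeding economics invites cobra farms...
    print(breeding_is_profitable(bounty=5.0, monthly_upkeep=10.0,
                                 months_per_cycle=4, offspring_per_cycle=12))  # True
    # ...while a lower rate (or a payout window shorter than one breeding
    # cycle) removes the incentive to raise animals just for the reward.
    print(breeding_is_profitable(bounty=2.0, monthly_upkeep=10.0,
                                 months_per_cycle=4, offspring_per_cycle=12))  # False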
Starting point is 00:31:42 So the interesting thing that I found, and this goes back to that Hacker News post, because this was one of the ones that got attention there. It was interesting to me because some people had a little difficulty with that. And they basically said, no, no, no, you don't get it. The whole story is that you cannot really control this. And that's, you know, that's just the way it is. And I agree. There's always going to be something you didn't think about. But this example that I'm giving of just working with the biology, you know, when you're constructing an animal-related reward system... if you at least do that,
Starting point is 00:32:05 you at least avoid that most obvious of, you know, bad outcomes. I'm sure you'll get something else, but, you know, at least avoid that most obvious one rather than just throwing up your
Starting point is 00:32:18 arms and saying, this is an unsolvable problem. So that's, you know, again, just something that got me thinking:
Starting point is 00:32:23 how do you apply that elsewhere? I think it's a lot of fun. Yeah, you wonder how far you can kick that can down the road to the point where the can becomes not a big deal. It's fine. Because, like you said, there is going to be something else. It's almost like that old saying, no good deed goes unpunished. It's like no good reward system goes ungamed.
Starting point is 00:32:45 Someone's going to game you, and a lot of it's a cat and mouse circumstance where you're just constantly changing the rules of the game, and then the adversarial people change the way they attack. I think in those circumstances, once you realize that, the first reaction is: it seems like it was such a poorly thought out plan in the first place. I think maybe what you're saying is: slow down, understand the problem better, perhaps, like in your case, the gestation phase of the animals, and
Starting point is 00:33:19 really understand cobras before you design; then test the system before you put it out to the public. And maybe you'll be able to skip those first two rounds of just obviously bad, and get into the more sinister and still effective, but maybe less consequential, gaming that's going to happen. And the other piece of that, I think, is two of the three famous examples involve a colonial power, you know, doing something, and I'll say doing it at scale. In other words, here's the pronouncement, you know, we have the budget, we're going to pay this reward, and then things fall apart.
Starting point is 00:33:57 Without that scale effect, you know, you would have some little local trial of trying to do a reward system. You would discover, okay, this didn't work out. Let's reconfigure this. And then you would maybe evolve yourself to a better type of incentive structure. But if you're doing something really top-down, and maybe for all I know, the person who made that decision did not live with any of the outcomes. They were across the world. They were in a different city.
Starting point is 00:34:27 They're not necessarily the ones who are going to suffer if the system does fall apart. But yeah, scale effects are tricky, and they certainly lead to a lot of unintended effects. Yeah, so perhaps the takeaway there for those designing these systems is iteration and feature flags, effectively. We're talking software development teams like small sample size first.
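A minimal sketch of what that phased rollout could look like in code, assuming a simple percentage-based flag (the flag name and cohort logic here are hypothetical, not any particular vendor's API):

    # Hypothetical sketch of a phased rollout for a new incentive feature.
    # Hash each user id into a stable bucket so the cohort doesn't churn
    # between sessions, then raise the percentage as confidence grows.
    import hashlib

    def in_rollout(user_id: str, flag: str, percent: int) -> bool:
        """Stable assignment: the same user always lands in the same bucket."""
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent

    # Week 1: ~1% (roughly, your own team), watch how people game it.
    # Week 3: 10%. Week 6: 50%. Only then the whole population.
    print(in_rollout("user-42", "new-incentive-structure", percent=10))

Real feature-flag services layer targeting rules and kill switches on top of this same stable-bucketing idea.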
Starting point is 00:34:53 Roll it out to yourself and your coworkers and see how it changes your behavior. I'm just thinking now of a software system: here's a new incentive structure inside of our system, we want to have more of this kind of activity, and so we're going to make this feature. Well, roll it out to a few people and see how it changes their behavior, because you'll game your own system. I know I've done it. It's just how we are. Like with any system... I've become fascinated with TikTok's algorithm, not so much the app and the content, but the algorithm is fascinating to me. It's almost tactile. I'm not sure if you've used it before, but it's like the fastest-reacting algorithm that I've seen, to where I almost feel like it's tactile. I can see it changing the next
Starting point is 00:35:38 thing based on, I think there's only two factors according to the TikTok folks. The first one's like the duration of the video, like how long you watch before you swipe. Obviously, like the longer you go on, it's going to like, you know. And then the other one, I forget what the other one is. There's just two factors. But their algorithm is so responsive that it almost like evolves as I use it throughout a session. I'm just sitting here watching.
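If watch duration really is the dominant signal, a toy version of that feedback loop might look like the following; this is purely speculative, not TikTok's actual algorithm:

    # Purely speculative toy model of a watch-duration signal -- not TikTok's
    # actual algorithm. Score each topic by how fully its videos get watched.
    from collections import defaultdict

    topic_scores = defaultdict(lambda: 1.0)

    def record_view(topic: str, watched_s: float, length_s: float) -> None:
        """A quick swipe (low completion) decays the topic; a full watch boosts it."""
        completion = watched_s / length_s
        topic_scores[topic] *= 0.5 + completion  # below ~50% completion shrinks it

    record_view("cobras", watched_s=58, length_s=60)  # watched nearly all of it
    record_view("dance", watched_s=2, length_s=60)    # swiped away almost instantly
    print(sorted(topic_scores.items(), key=lambda kv: -kv[1]))  # 'cobras' ranks first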
Starting point is 00:36:04 So like people will game. And I'm just, I'm more interested in what the algorithm is doing than the content, which maybe makes me a bad use case. But point is like, you will game your own system and then roll it out to a few other people, right? And like start to scale. So you don't have to just blast it and get that top down huge scale effect immediately.
Starting point is 00:36:22 Scale it out in phases and iterate before you make a big mistake. No, it's so true. And I don't know enough about TikTok, you know, to comment on what you're saying, but I have heard, you know, people describe it in that way. What's weird about it is I'm not sure if it's actually good or bad, sure, that it's like that. Because people say it's the best algorithm in terms of keeping you using the application. But it's so obvious. It's almost like it could have a UI, to me at least.
Starting point is 00:36:55 Maybe because I'm a developer, but I'm sitting here using it. And I'm almost like, I know what's going to happen if I auto-swipe this one. Oh, don't watch this video for too long. I don't want more of this content. Quick, swipe away, you know? But that's also interesting, because then it's you, with your realization of what is happening behind the scenes, changing your behavior. Like maybe you would have watched that video to the end. Right. But I don't want it to think that I want to watch it. Exactly. Yeah. Yeah. It's weird.
Starting point is 00:37:25 Like, I always think of, so I, you know, I've been helping different organizations or corporates like build innovation programs or build, you know, accelerator type programs. And in the beginning, when I'm working with someone, you know, they often want to see like, well, I need to see like the entire plan. Like, you know, show me exactly how this unfolds and like, you know, like what happens like, you know, they often want to see like, well, I need to see like the entire plan, like, you know, show me exactly how this unfolds. And like, you know, like what happens like, you know, week to week. And I always push back on that, just with the realization or the experience that I'm really not sure who I'm going to encounter in your organization. Like, I haven't met that many people yet. I don't know, like, are they bought into this yet? Like, or like what their skill sets are, you know, so far, like how dedicated they are
Starting point is 00:38:09 to like this new thing. So I kind of want to see that a bit, you know, first before I design something fully. And to be honest, I never want to really design the whole thing end to end because I know I'm going to change week to week depending on what's going on or like, you know, I'll need to spend more time in one area or another. Yeah, there's often, you know, pushback there because people want to think like, well, certainly you could design this, you know, end to end. And it is, it's something that you could write down.
Starting point is 00:38:37 It's, you know, it's just about following, you know, these however many steps. And with just about anything involving people, I think you have to have that flexibility, to be able to, you know, zoom in, zoom out, slow down, speed up. That's what makes an innovation program more powerful. That's what makes, um, you know, what you're talking about with TikTok, you know, more powerful. Yeah, you have to be comfortable with that uncertainty, and there's strength in that uncertainty. Kind of like... I did write about this, and this is a totally different connection of thought, but, um, the Peltzman effect: about uncertainty in our environment,
Starting point is 00:39:17 or in driving, you know, actually being something that creates more safety. Oh, so you just drive slower because you're less certain. You might drive more slowly. You might just be more aware when you're driving around. And once you then mandate a lot of these safety measures, it could be road signs, it could be seatbelts, the way you paint lines on the road, as a result, people, um,
Starting point is 00:39:46 give up a little bit of their natural, you know, defense mechanisms. Their defenses are down because they feel so safe. Sure. And as a result, they become less safe. Yeah. But if you make it less safe, their defenses are up, and so as a result, they might be more safe.
Starting point is 00:40:36 that's gotta be like complete chaos. You know, like everybody is changing, you know, like, you know, they're doing like a mirror image of what they used to do. And then the actual natural experiments that unfolded was, oh, the place became safer for a while because everybody is so careful about driving around. I've got to make sure that I turn in a different way now. I'm watching for traffic from a different direction. And yeah, there was a traffic engineer, Monderman, I think was his name, who was designing road systems where there would be, like, no signs at all. So, like, the only inputs are, like, if you- Pure chaos.
Starting point is 00:41:17 Well, but it actually was pretty safe, you know. So in other words... and I think even, like, maybe getting rid of a curb or something. So, like, you know, pedestrians who want to cross the street making eye contact with the driver, the driver seeing them, slowing down, you know, then they cross. And so it seems like this would just be total chaos, and actually, in his rollouts, you know, it was pretty effective. That reminds me of a scene from The Mandalorian where they're commenting on the surprising lack of guardrails on walkways in Star Wars movies, you know, where they finally got self-referential. Because it's so hazardous, you'd think that, like, some sort of safety council would be like, let's put a handrail
Starting point is 00:42:01 in. And I can't remember which episode it is, but they send the guy out to hit a thing, you know, along this chasm, and he's like, no guardrails? Like, come on, I'm not going out there. And it's kind of like, well, actually, you're more safe, because you're, like, really paying attention. Every step is crucial, knowing that this is dangerous. Yeah, that's it. Yeah.
Starting point is 00:42:54 communicate through status pages, and learn with retrospectives. What would normally be manual, error-prone tasks across the entire spectrum of responding to an incident, this can all be automated in every way with FireHydrant. FireHydrant gives you incident tooling to manage incidents of any type with any severity with consistency. You can declare and mitigate incidents all inside Slack.
Starting point is 00:43:16 Service catalogs allow service owners to improve operational maturity and document all your deploys in your service catalog. Incident analytics let you extract meaningful insights about your reliability over any facet of your incidents or the people who respond to them. And at the heart of it all, incident runbooks: they let you create custom automation rules to convert manual tasks into automated, reliable, repeatable sequences that run when you want.
Starting point is 00:43:39 Create Slack channels, Jira tickets, Zoom bridges, instantly after declaring an incident. Now your processes can be consistent and automatic. Try Fire Hydrant free for 14 days. Get access to every feature. No credit card required. Get started at firehydrant.io. Again, firehydrant.io. So Garmin, the GPS maker, had an unintended consequence recently: some downtime, an outage.
Starting point is 00:44:25 Want to tell that story? And we'll get into some of these problems we have around dependencies. Sure. So this was one that, as I remember, I saw the news and I just, like, wrote the post up in no time, you know, right away. So the story: Garmin, you know, location-based services provider, used by a lot of small plane pilots, you know. They'll sell, like, physical GPS devices. Exactly. Other people who are, you know, out hiking and camping. Um, but, uh, so they, they went offline for a while, and the story was
Starting point is 00:45:09 a group with just, like, the great name Evil Corp, you know, I think, you know, based in Russia, I believe, um, you know, which, uh, this is their business, you know, the ransomware business, which can, uh, you know, be good money, I guess. So they had their WastedLocker ransomware, which has also given other companies some problems. But they, I believe, encrypted, and I'm not sure now in thinking back, they encrypted some part of Garmin's service, so it's not usable.
Starting point is 00:45:40 Basically said, it's easy. You pay $10 million and we give you the keys. Garmin is a multi-billion dollar business. So you think, oh, the easy thing is just pay the $10 million. That's the fastest solution, probably the cheapest solution. And they kind of went back and forth for a while and didn't, and maybe actually in the end did pay the ransom. They are a US-based company. They're not supposed to pay something like that. There's, I think, ways of skirting around so you're not breaking the letter of the law perhaps. But there's this trade-off that we have if
Starting point is 00:46:20 we become dependent on using a specific product, a specific piece of technology. And so if the only way that you're going to be able to navigate is to use this one device and then it's offline, you have a real problem. So I wrote this piece not to try to convince everybody to be able to navigate by the stars again, but just to draw attention to things like this.
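One common defensive pattern for that kind of single-provider dependence is a cached fallback, so an outage degrades your product instead of disabling it. A minimal sketch under assumed names (fetch_position is a hypothetical stand-in for any third-party call):

    # Minimal sketch of degrading gracefully when a single provider goes down.
    # fetch_position() is a hypothetical stand-in for any third-party service.
    import time

    _last_known = None  # (timestamp, value) cache of the last good response

    def fetch_position():
        raise ConnectionError("provider offline")  # simulate the outage

    def position_with_fallback():
        global _last_known
        try:
            value = fetch_position()
            _last_known = (time.time(), value)
            return value, "live"
        except ConnectionError:
            if _last_known is not None:
                age = time.time() - _last_known[0]
                return _last_known[1], f"stale ({age:.0f}s old)"
            return None, "unavailable"  # total dependence: no fallback at all

    print(position_with_fallback())  # (None, 'unavailable') once the provider dies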
Starting point is 00:46:48 Like we certainly go through most of our lives expecting that, you know, there is 100% uptime, or we're not going to have to deal with, you know, some weird outage. Outages of this type are, you know, more of the norm, I'd say, than, uh, like, you know, once in a lifetime. So yeah, I felt like this was, uh, something that kind of connected to, you know, if I remember right, maybe a couple of other, um, examples that I had written about. It was like the Twitter hack that happened at around the same time, I believe. So this is when there was, um...
Starting point is 00:47:26 and I think this was more of a social engineering hack that somebody had figured out, but basically they gained access to, and I believe it was only, like, verified accounts. So, like, all of a sudden, one day you saw, you know, like Elon Musk or Bill Gates or... Oh yeah, I remember that. ...Kanye West, like, tweeting about, hey, I want to give back, you know, to my community. And so if you send me... There's a Bitcoin thing, wasn't it? Yeah, yeah. You send me a certain amount of Bitcoin, I'll send, like, double the amount back. Right. Um, and so, going back to algorithms: so, like, Twitter's algorithm, even though people realized pretty early on, okay,
Starting point is 00:48:01 this is a scam... Yeah. Um, the algorithm: well, wow, everybody is talking about and commenting and replying to... Engagements through the roof. Yeah, engagements through the roof. So we want to boost these posts. And of course it was doing the opposite of what they wanted. So in this case, it was a much more modest financial reward for whoever the scammer was.
Starting point is 00:48:26 I think it was only around $100,000 in Bitcoin that they ended up getting away with. But you have scams that can scale. So you can scale through a social network like Twitter. Or you have something like the Garmin ransomware situation where I'm dependent upon this one piece of tech. Now I cannot do the thing that I need to do at this moment. Or now navigating for me is impossible or it's really dangerous. So yeah, I like just bringing attention to these things. For me, when I wrote them, I think I was calling back to some of the writing I did about autonomous vehicles in both of these examples.
Starting point is 00:49:12 I've been writing on this topic even before I had this unintended consequences, you know, blog. But, um, I remember a few years ago, I saw a VC, you know, uh, I believe, tweet, you know, um, that he was using support for autonomous vehicles as kind of an intelligence test. In other words, if you did not support them, it's like a mark against you. Well, okay. Because you don't understand how things can improve and whatever. And so I really started
Starting point is 00:49:51 wondering about, well, okay, is that a good test, first of all? Because I was failing the test. And it's not that I'm against AVs. I like the concept. I like the theory a lot. And I do not want to be a proponent of 3000 people a month in the US alone, like, you know, dying in traffic accidents. Like, I don't want to like say, no, let's maintain the status quo. But, um, when I started to think through what ultimately, you know, can happen when you do roll out, you know, like large scale, higher level, like level four, level five, like autonomous vehicles. I started realizing that, you know, while you might have say an average day where the amount of fatalities are much lower than today, you also have the risk of certain days of the year, and you have no idea when, where
Starting point is 00:50:46 there's like a huge burst up. Why is that? So that's because, similar to, like, the Garmin or Twitter examples, you have an effect that can scale, whether it is a hack, whether it is a bug. I see. So, like, some hack disables the brakes on all these things, and, like, nobody can brake for those three minutes, and you have millions of them on the road, and bam. Right. Okay.
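A toy expected-value comparison makes the shape of that argument visible; all of the numbers below are invented for illustration:

    # Toy comparison (all numbers invented): many independent human errors
    # versus one rare, correlated fleet-wide failure mode.
    human_daily = 100                      # fatalities per day, roughly constant
    av_daily, crash_prob, crash_toll = 10, 1e-4, 500_000

    human_mean = human_daily
    fleet_mean = av_daily + crash_prob * crash_toll  # expected value per day

    print("human fleet: mean/day", human_mean, "worst plausible day", human_daily)
    print("AV fleet:    mean/day", fleet_mean, "worst plausible day", crash_toll)
    # The AV fleet looks better on average (60 < 100) while carrying a
    # catastrophic fat tail -- the unpredictable black-swan day described here.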
Starting point is 00:51:10 Right. And so, like, you have to at least acknowledge there's this risk there. And again, like, I don't want to say... I'm not, like, anti-technology, obviously. I'm not, like, saying, okay, well, we can never create a better world. Like, you certainly can. But if the system that you're putting in place does not acknowledge that, or if any of the people who are talking about, like, you know, building these systems are not really thinking it through... Take human-driven cars and say, okay, how do I make human driving much more dangerous than it is today? You know, I can't really do it. Like, what would I have to do? I'd have to encourage weird legislation to, like, allow five-year-olds to drive. I'd have to, like, encourage people to drink, you know, and then drive. I'd have to, like, remove, you know, stoplights and stop signs, and, you know, increase the speed limit, all these things. Yeah. And ultimately, again, kind of going back to the Peltzman effect, you know, people who are in the cars, if they are really seeking danger, they're going to remove themselves from the driving population eventually. They'll be able to affect, like, a handful of cars.
Starting point is 00:52:26 They won't be able to affect a thousand cars or a million cars. So you can't really scale up danger with individual humans. You can scale it up when you have more of a top-down system, or where you have, like, fleets of cars that are, you know, communicating with each other and, you know, doing all the things that, you know, AVs are supposed to be able to do, like, you know, ride really close to each other, you know, at higher speeds, things like that. So, and this was, you know, actually... you've had people on your podcast that have talked about, like, you know, YAGNI, you know,
Starting point is 00:52:59 before. This is where it's like, no, you really are actually going to need this, like, you know, weird edge case. You are going to need to think through this. But even so, what you can get is... so I'm looking out at the weather here outside of LA, and unusually, it's raining. I might say, I want to engineer a world where it's just sunny 365 days of the year, rather than it raining unpredictably, or there's a storm or other. But if an outcome of that is, once in a while, you have no idea when, you get the worst storm that's ever existed in, like, the history of the world... You know, I have to pause for a moment and say, we better be careful about rolling
Starting point is 00:53:44 this new system out. Because this is going to change some things. So what you're saying is, when the outliers, right, those black swan events, I think they call them, whether in nature or, now we're talking about, in software systems like an autonomous fleet... when the consequences of those are so drastic that maybe it's like a humanity endgame, or at least for everybody who happens to be out and about that day... Yeah. Then it's worth solving for those, not even edge cases or corner cases, they're like black swan cases, or at least thinking about those things
Starting point is 00:54:16 and weighing that into your decision-making process before you go all in on, right, something that works 99.9% of the time, but the time that it fails, everybody's dead. It fails big, yeah. I've gone to cybersecurity conferences. I've seen the car hacking village, right? And people figuring out how to hack cars just currently, where there is limited damage you could do.
Starting point is 00:54:44 So I think it's something that you should at least be thinking about. The other effect, even if that type of an outcome is solved, I don't know that it is solvable, but say that it is, you have the other outcome. This is more of, like, a second-order effect of how behavior changes. So, you know, I hear people say, like, oh, well, with AVs, you know, um, we'll be able to streamline traffic to the point where you can get from the east side of LA to the west side of LA in 15 minutes, you know, something unheard of, because, you know, traffic is terrible.
Starting point is 00:55:28 Um, the cars will be able to, you know, move much more quickly. They'll, you know, ride really close to each other. But the reality there is, of course, you know, human behavior changes. If I can get to the west side of LA, like, I'll go more often. So I'll have more people, you know, taking cars; the traffic will go back up. And it was, like, reminding me, and this is actually a book that I didn't realize was popular with developers, but because I read it years ago from more of, like, an urban planning perspective, but a pattern language,
Starting point is 00:55:54 like Christopher Alexander's A Pattern Language. I don't know that one. Where he talks about like, how do you design like a city or even like a house around human behaviors? And he says like, if you optimize around cars, you're going to get more cars. So with the AV discussion, you might say like, you know, should we be designing for cars to be able to get around really quickly or safely, or should we be designing places so that people can enjoy
Starting point is 00:56:22 them the most and get around? And maybe actually doing things at a human scale, or, like, a walking scale, is actually good for parts of different neighborhoods. Right. Yeah, so... you know, maybe I think a lot about this topic, and certainly billions have gone into, uh, AV research. It's going to be interesting to see how things end up, you know, shaking out. Yeah. Well, first we've got to get to level four and level five before we can even see, and they're struggling to get there. I think every year it's going to be here next year. Yeah. But, uh, it's getting closer. We're starting to see some consequences. I'm, uh... specifically, because Tesla, I think, has the most
Starting point is 00:57:01 out there in the wild, the Tesla Autopilot, and there's been some casualties and I think a death or two. We're seeing some backlash to that. I think none of this is at scale yet, though. It's all at one, two. I think Tesla has the most scale out there. Maybe not in a single locality, but in many localities. And there's a lot of other things.
Starting point is 00:57:26 And again, I'm saying I am not against progress, certainly. But there's other things that I do benefit from currently when it comes to some of the same autonomous vehicle tech. I don't know, depending on if you're driving a car with this or not, but you get that warning if you're going to change lanes and there is a vehicle... Yeah, maybe you can't see it, like it's, you know, right behind you. It has better eyes than you do. Yeah, yeah. So, like, uh, there's certainly, like, a good camera-human-computer, you know, meld that you can make. But, uh, yeah, we should at least be thinking about what happens if you completely
Starting point is 00:58:02 flip the switch over and every car is an AV. Yeah, exactly. Well, there's some of them that have... I think it's Waymo, but I could be wrong about this, there's, like, no steering wheel, you know? It's like, well, what are we going to do in the case where we need to fall back to the human? Well, there's no steering wheel, so they better know how to drive via some sort of digital interface or something. That's definitely a step in that direction. I do agree that at this time at least, when it comes to computer systems, I think humans with superpowers is kind of like the best of both worlds.
Starting point is 00:58:38 Let's equip humans: take care of the tedious parts so they don't have to do that work, to provide the superpowers. Like, hey, did you know you can now see behind you? For instance, I was actually at a museum, a Navy museum, recently. We were watching some of the technology inside of these fighter jets. And the way that the pilots can actually see 360, and they can also see, I don't know what the other direction is, underneath them and above them, completely
Starting point is 00:59:08 as if the jet doesn't exist. Like, they're sitting in there driving the jet, and it has enough cameras and enough smarts to, like, remove itself, as if it's completely invisible. They can look down into the ocean directly underneath them. Like, humans just can't do that. But with the software, that gives that pilot
Starting point is 00:59:26 superpowers, really, to see everything around them. Pretty cool stuff. But yeah, removing the human completely, I think, is kind of closing the loop, or taking that next step. And I'm also pro-progress, but I also am... I think what you're saying is, let's slow down and consider not just what's immediately going to happen, but what's going to happen with more units, with more time, right? At scale. And whether or not we want to guard
Starting point is 00:59:55 against certain things now. What's up, friends? This episode is brought to you by our friends at Retool, the low-code platform for developers to build internal tools. Some of the best teams out there trust Retool. Brex, Coinbase, Plaid, DoorDash, LegalGenius, Amazon, Allbirds, Peloton, and so many more. The developers at these teams trust Retool as a platform to build their internal tools, and that means you can too. It's free to try, so head to retool.com slash changelog. Again, retool.com slash changelog. So the Garmin one is interesting because there's two aspects that interest me. First, from the consumer standpoint of individual dependence: if you depend on a Garmin GPS, whether it's mission critical
Starting point is 01:01:26 or you're out riding your mountain bike, and it was that time period when Garmin was down, your life is immediately affected. So there's that aspect. And then there's also software dependence, where Garmin is itself completely dependent, whether on a third-party package or however they got that ransomware
Starting point is 01:01:47 inside of their system. There's a supply chain problem, perhaps, on their side. And this is something we think about a lot as developers: how much third-party risk can I take on in my supply chain, in other people's software, in being dependent upon maybe an integration with a company that I trust, but maybe their system has a problem? And we have this tangled web,
Starting point is 01:02:14 I guess metaphorically in both cases, but a web of both dependencies and networked systems, where we have to decide how much am I willing to risk with other people's code, with other people's systems, versus writing it ourselves. So there's both aspects of that: the Garmin side, and then the person using the Garmin. And I think when it comes to individuals, I don't think we've ever been more dependent as a species on anything than we are on these smartphones today. There's memes of people walking along the street, staring at their phone, getting hit by a car or something. We are attached at the hip. And I
Starting point is 01:02:51 mean so much that it's like an extension of our brains. And so when there are problems with our phones, whether they're just offline or broken or lost or stolen or hacked, I mean, your life is in there. So there's a lot of dependence upon a smartphone. And then from the Garmin side, deciding how much we are willing to trust third-party systems in order to move faster and accomplish things that we may not be able to accomplish on our own,
Starting point is 01:03:19 I think there's definitely unintended consequences of running other people's code. It's really a trade-off and a difficult decision in many cases. What do I do? I don't know if you have any insights on that side of things. Well, I mean, one of the other bigger hacks from around the same time as the Garmin and Twitter stories was SolarWinds, which I believe was a supply chain attack. Yeah, it was. If I remember right.
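One everyday defense on the developer side of that trade-off is to pin exactly which third-party artifacts you trust and verify them before use. Here's a minimal sketch of the idea; the file path is hypothetical, and in practice lockfiles like package-lock.json, go.sum, or pip's --require-hashes mode do this bookkeeping for you.

```python
# Minimal sketch of checksum-pinning a vendored third-party artifact.
# The path below is hypothetical; real package managers automate this
# via lockfiles (package-lock.json, go.sum, pip --require-hashes).
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# At vendoring time, record the digest of the exact artifact you reviewed.
artifact = Path("vendor/some-lib-1.0.0.tar.gz")
pins = {artifact.name: sha256_of(artifact)}

# At build time, refuse anything that isn't byte-for-byte what you pinned.
def verify(path: Path, pins: dict[str, str]) -> bool:
    return pins.get(path.name) == sha256_of(path)
```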
Starting point is 01:03:48 So yeah, you certainly have those trade-offs. But whereas in the past, maybe that trade-off would have been localized to just this one un-networked company, now it becomes a much bigger deal when there is dependency across many different companies on the same software. Yeah, I kind of mentioned this in the beginning, but earlier in my career I had worked in telecom. And I was always fascinated, if I had been a little older, maybe, pre-internet, when the hacking was primarily done on telecom networks, hearing people talk about those stories: hacking dial tone on the old landlines, getting free calls, doing that.
Starting point is 01:04:35 Even so, in that case, you do have some great examples of people, whatever, calling the White House, or getting unlimited talk time. But yeah, it does seem that with a more interconnected world, the network effects and scale effects are just getting bigger. And part of that leads me to say, okay, we should just be more cognizant of how we design systems. In part it might also be because I know that even if I do everything I can, somebody else is not going to have the same care, or they're going to screw something up, or they don't have an incentive to be as careful.
Starting point is 01:05:16 Or there's a mistake. Yeah, there's no end to mere incompetence. It could be just mere incompetence, sure. So you should also isolate at least some really crucial parts from being at risk to something like that. Well, the most secure computer there is is an air-gapped computer. It's one that is disconnected from everything else besides power. And the only way to hack that computer is to sit at it. But also, the problem is that's not a very useful computer.
Starting point is 01:05:48 You know? That's it. That's the rub. It's like, yeah, that's secure, but it's also not all that useful. And so we have to live our lives somewhere in between. But when it comes to your mission-critical stuff, your family jewels, your pearls,
Starting point is 01:06:05 maybe your Bitcoin private passphrase, maybe air gap's the way to go. I don't know. Well, Paul, the blog is Unintended Consequences. Of course, all the links to all the articles, as well as the sign-up form to get Paul's future writings, will be in our show notes, so you'll find them there. Anything else that you're up to that you want to talk about before we call it a show?
Starting point is 01:06:27 Yeah, I'll mention two other things that are, I'll say, related to this topic of systems. I wrote a short book about unit economics. So this is understanding customer lifetime value, customer acquisition costs, how businesses work, basically. It's called Growth Units. This is something that I did during the early part of the pandemic because I was teaching this topic, and I've discovered that people have actually gotten value from the book. So if you want to think about these systems topics, but more in an internal company setting, Growth Units is something you might enjoy.
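As a rough illustration of the kind of arithmetic involved (every number here is invented, and this uses one common simplification for lifetime value):

```python
# Back-of-the-envelope unit economics; every number here is made up.
arpu = 30.00          # revenue per customer per month
gross_margin = 0.70   # fraction of revenue kept after serving the customer
monthly_churn = 0.03  # fraction of customers lost each month
cac = 210.00          # cost to acquire one customer

margin_per_month = arpu * gross_margin   # $21.00
ltv = margin_per_month / monthly_churn   # ~$700 lifetime value (simplified)
payback_months = cac / margin_per_month  # 10 months to recoup acquisition cost
print(ltv, payback_months, ltv / cac)    # ~700.0, 10.0, ~3.33
```

A business like this earns back its acquisition cost in ten months and, over a customer's lifetime, makes roughly 3.3x what it spends to acquire them.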
Starting point is 01:07:02 And then I'll just also put a call out to whoever's listening. The next thing I'm writing about is also a longer piece, probably a book-length piece, on market timing, or what's called the "why now" question with startups. In other words, how timing impacts your eventual success, like why this is a good time to build this specific company. I'm happy to speak to anybody who is going through that thought process now, either in presenting what you're doing to a potential investor,
Starting point is 01:07:39 like why now is the right time to build this business, or if you're building a specific product within an existing organization, like why the timing is good for your development of that next specific thing. Otherwise, this has been a lot of fun, and I really appreciate what you do, of course. And thanks for having me on. You bet. Yeah, thanks for coming on. We'd love to have you back sometime, especially as you do more writing and more unintended consequences happen out there in the world for us to discuss, analyze, and hopefully learn from. So thanks again, Paul. This has been a lot of fun. Thanks, Jared.
Starting point is 01:08:16 All right, that's the changelog for this week. Thanks for listening. If this is your first time with us, subscribe now at changelog.fm. And if you're a longtime listener, do us a solid by recommending the show to a friend. Word of mouth is still the number one way people find new podcasts they love. A couple updates for you on a few of our other shows. Mat Ryer from Go Time did an excellent AMA with the Go team at Google. Lots of juicy details in there about their big generics rollout. Listen in at
Starting point is 01:08:45 gotime.fm slash 210. And on JS Party, we rang in the new year by adding Ali Spittel to the team. And of course, we also predicted what's happening in 2022. Check it out at jsparty.fm slash 207. Special thanks to our partners for supporting our work. Fastly, LaunchDarkly, and Linode, y'all are awesome. And to the mysterious Breakmaster Cylinder for cranking out new beats for us on the regular. Next up on the show, Adam nerds out on file systems with Matt Ahrens, who co-founded the ZFS project at Sun way back in 2001.
Starting point is 01:09:18 That's all for this week. We'll talk to you again next time. Outro Music
