The Changelog: Software Development, Open Source - Is it too late to opt out of AI? (Friends)

Episode Date: May 31, 2024

Tech lawyer Luis Villa returns to answer our most pressing questions: what's up with all these new content deals? How did Google think it was a good idea to ship AI Summaries in its current state? Is it too late to opt out of AI? We also discuss AI in Hollywood (spoilers!), positive things we're seeing (or hoping for) & Upstream 2024 (June 5th)!

Transcript
Starting point is 00:00:00 Check, one, two, money, money. Hello, check, money, money. Check, check. Money, money, show me the money. My check, one, two, one, two. Mm-hmm, mm-hmm. Check, check. Mm-hmm.
Starting point is 00:00:12 One, two, one, two. Money, money, money. Check. Check. Check. Mm-hmm, mm-hmm. One, two, one, two. Money, money, money.
Starting point is 00:00:22 Hello, check, money, money. Show me the money. Check, one, two. Money, money. One, two, one, two. Check, one, two. Money, money. One, two, one, two. Check, check. Welcome to Changelog & Friends, your favorite ever show about Star Trek fanfic. Thanks to our partners at Fly.io, the home of changelog.com. Launch your app close to your users. Fly makes it easy.
Starting point is 00:01:07 Learn how at fly.io. Okay, let's talk. All right, Homelab friends out there, I know that you run cron jobs constantly inside your Homelab, doing different things. Who the heck knows what you do with them? I know what I do with mine. And I love to use Cronitor to monitor my crons. And it's just amazing. Cronitor began with a simple mission to build the monitoring tools that we all need as developers. 10 years later, they've never forgotten the magic of building and they honor the true hacker spirit
Starting point is 00:01:46 with a simple flat price for the essential monitoring you need at home. So I've been working closely with Cronitor. Shane over there is amazing. They have an amazing team. They love software developers. And I was like, you know what? I would love it if you can do a home lab price
Starting point is 00:02:03 because I don't want to pay a lot of money for monitoring my cron jobs. I just don't want to pay 20 bucks or 30 bucks or some crazy number for my Homelab. It's just my Homelab, right? But what I can tell you is they have a free hacker version of Cronitor, five monitors, email, Slack alerts, basic status page, anything you need on that front. And then you can bump it up if you have bigger needs. So if you have a lot of cron jobs behind the scenes inside your home lab, you can bump up to the home lab plan. 10 bucks a month, you get 30 cron jobs and website monitors, five alert integrations, 12 months of data retention,
Starting point is 00:02:37 and just so much, so much if you really want it. I love Cronitor. I use it every single day to monitor my cron jobs. Everything I do inside my home lab has some sort of cron monitoring, managing, updating, and I use Cronitor to monitor it all. And it's amazing. Go to cronitor.io/homelab and learn more. Again, they have a free plan that you can ride or die with, or the homelab plan if you want to bump it up and have more needs, for $10 a month. Once again, cronitor.io/homelab.
Starting point is 00:03:25 Well, we're here with Luis Villa. People, you know the show: Changelog & Friends, your favorite ever show. Luis lives at the intersection of law and technology and all the things that we care about. And so you're one of the most interesting men in technology, Luis. Did you know that? Wow. You're sought after. I want to know what you think about stuff. I'm like, this guy knows.
Starting point is 00:03:43 That's better than coffee in the morning. Thanks, man. That is. Start off with a nice compliment. Well, it's true. I'm always like, we need to get Luis back, because I don't know what's going on. I don't know what's going to happen. I'm scared.
Starting point is 00:03:55 Oh, I have bad news. I cannot help you with any of those things. I think you can at least help us see a little bit of at least what maybe now it's going to happen. But what's happened so far. I'm curious about your open-ish newsletter. Where is it, man? Where's the newsletter? I've been waiting for the next edition. Oh man. I was supposed to get out a newsletter this weekend and then, you know, family life happened.
Starting point is 00:04:18 It turns out this whole parenting thing and having a newsletter, you sort of, you get one or the other. At odds. Right. I mean, there's been, it's been an interesting time, right? People are talking about, I mean, what I really got to do for the newsletter. Well, first, I got to get done with Upstream, the Tidelift conference that we've got coming up because I've been preparing a lot for that.
Starting point is 00:04:40 And then I got to read, I mean, there are bills coming out in the California State Senate that might impact OpenAI. There's one in D.C. It's not boring times, right? And then we also have this striking of content deals as well, which is kind of interesting to me, at least. We had Reddit sign a content deal, I think $60 million, with Google. News Corp struck a $250 million deal with OpenAI, which covers Wall Street Journal,
Starting point is 00:05:07 New York Post, Sunday Times, probably a bunch of other properties. And then you got Stack Overflow, which has deals with everybody. And so in the meantime, we're wondering about copyright. We're wondering about the law regarding ingestion and training.
Starting point is 00:05:22 And in the meantime, it seems like orgs are just like, well, let's just strike deals and maybe that will be the answer in the short term. I don't know. What do you think? I mean, for those who haven't followed along, the basic idea here that's going on is everybody wants to buy some content, everybody who has content wants to sell it.
Starting point is 00:05:42 And I think there's a lot of uncertainty, right? I mean, one thing is, well, all these companies have to sort of check their terms of service, right? We used to always say, like, well, yeah, they've got big, grabby clauses in their terms of service, because all these terms of service, when you read them, are like: we can do whatever we want that we need to do to run the service. And people who are lawyers read that and think, that sounds pretty creepy, you know? Like, you want all the rights, all the time. And Silicon Valley lawyers are like, yeah, but really it's just to keep the lights on, right? It's just to keep the thing running. And we all sort of hand-waved that away.
Starting point is 00:06:22 And now all of a sudden it's like, well, we're keeping the site running, and we're doing that by making revenue by shipping everything you ever wrote into the maw of the AI machine, right? And it's like, it's probably legal, right? I mean, much depends on the little nuances of each terms of use, terms of service that were signed. But it is probably legal. Now, is it a right thing? Is it a good thing? Boy, that all of a sudden gets into much harder questions, right? I think so too. I was reading Snacks, a newsletter I believe Jared, you and I subscribe to. This was actually part of, I think, this week's or today's newsletter. And one thing they mentioned was essentially that nearly 3,000 newspapers have closed or
Starting point is 00:07:11 merged since 2005. And I'm just reading from their perspective on this, which is kind of telling, because before AI, there was social media. There was the news tab inside of Meta, slash Facebook, which caused a lot of drama. There were a lot of deals struck then, and the challenge there is not just, oh, it's now funneled through one place,
Starting point is 00:07:34 it's algorithmically funneled through one place. And now you have newsrooms, who should be journalists, quote-unquote journalists, and sometimes they are actually journalists, who should be journalistically pursuing the truth of what's happening in the world and telling it to the world, because that's the whole point of news, right? It's not that it's biased based upon a political stance or an
Starting point is 00:07:55 ideological stance or a newsroom stance. There's an editor, of course. But now they've got to compete with the algorithm, which means we get visibility or we don't. And that really shifted a lot of stuff, too. And now, essentially, we have a new version of what happened then, now with AI, which is: will AI only be consuming AI content? There's lots of stuff I'm sure you can tell us. But before this, it was social media, essentially. Yeah, I mean, well, for newspapers in specific, in the US, it's even before social media.
Starting point is 00:08:26 Craigslist was eating their lunch. And even before that, right, private equity is eating the revenue stream, eating them on the back end. There's a lot going on there. But yeah, this is something that we dealt with at Wikipedia for a long time, right? Because Wikipedia got really sort of lucky, timing-wise. I mean, obviously we all know it, we all love it, but it rose to prominence in part sort of hand in hand with the Google algorithm, right? Google loved Wikipedia. Before there was SEO, Google had already decided, we freaking love Wikipedia, which was great for Wikipedia, right? As Google got more popular, Wikipedia got more popular. Yeah. Pretty clear relationship there. And then at some point, Google was like, you know, we could just read the Wikipedia articles. We can read the info
Starting point is 00:09:17 boxes. We can start pulling out all this information. And, you know, that was something we worried about a lot when I was at Wikipedia. And Wikipedia probably has some qualities that make it a little more resistant to that. But if I was a newspaper man, I'd be terrified. They're reading all my headlines, which is all most people have ever read. Even before social media, that was mostly what people read: the headlines. And, you know, they're in a world of hurt there. Like, I can understand why that's terrifying, especially if you don't think your local news or your local spin on it is all that interesting to people.
Starting point is 00:09:55 And I think a lot of people in the newspaper industry aren't very confident in their own product, right? At least Wikipedia, whatever else you think of it, is pretty confident in its product. I'm not sure that's the case in the news industry right now. And so you're looking around for other revenue sources. Same thing with Stack Overflow, right? I mean, at least Reddit will always have the community interaction part of it, right? Because so much of what people want from Reddit
Starting point is 00:10:22 is to come and chat, hang out. Stack Overflow like has some of that. But at the end of the day, what you were really looking for was the answer. The green checkmark. Yeah. And if the algorithm can give you the answer, what a miserable place to be in if you're Stack Overflow's leadership. I don't envy them the hard choices they're making right now.
Starting point is 00:10:41 And they're the ones facing a little bit of a user revolt, with people going in and changing their answers to be wrong in protest of this deal. I think Reddit, obviously, faced a big revolt last summer when they locked down Reddit in terms of the way it was going to work going forward, which was very unpopular. I almost think it's more of a straightforward deal now, though. If this is the new way that user-generated content generates revenue, and everybody knows that with eyes wide open, you get to decide if you're going to participate in Reddit, if you're going to participate in Stack Overflow, right? And so for the people who do, it's almost more straightforward,
Starting point is 00:11:17 because in the past it was like: users generate content, platforms take that content and use it for Google juice, Google points browsers to your web page, you get traffic, and then you sell that traffic against display ads or whatever. And that was always kind of roundabout. Now it's like, we just take it and sell it directly, and so it's almost taking out a layer. Taking out a layer doesn't necessarily make it better, but at least it makes it more of a straightforward line to the money. Yeah. I mean, it's definitely clarifying in that sense, right? I don't know if it's... you know, simplifying has some implications of being like, oh yeah, now everybody understands, it's
Starting point is 00:11:59 all good. I mean, sometimes clarifying can just mean now we see exactly how the beast works, and we don't necessarily like it. I mean, I don't really know, right? A couple of things. I think that's right, but, okay, well, one: what are our alternatives, right? Are we going to start seeing more alternatives that are sort of bottom-up, community-up in some way, distributed in some way? I don't know. I suspect not, because it's still expensive to host this stuff. But there's going to be people who opt out. And what are they going to do?
Starting point is 00:12:32 Where are they going to go? I think that's an interesting question. That's the hard part. I think the only current best answer is like Fediverse and ActivityPub. And we just haven't seen that really lay enough technical foundation. I know there are Reddit alternatives that are ActivityPub. I can't think of the name of the protocol. Yeah, and I've tried them and the technology just isn't there yet.
Starting point is 00:12:56 I'm not sure if and when it will get there. I think as a Twitter alike, I think Mastodon technologically is pretty much there. I mean, there's some places where it's got rough edges and is slower and is expensive to host, like you said. But there are some alternatives, but they seem still relatively fringe. I just wonder if in the case of social media, I think it's still, even though it is clarifying and simpler, I think it's still completely fraught and terrible. But in the case of journalism, maybe not as much because that's not user-generated content.
Starting point is 00:13:28 That's employee-generated content. If you're the Wall Street Journal and you have a direct line of revenue from Google and Meta and OpenAI or whatever, and you know, okay, we're going to make $250 million over the next X years based on this content deal, and we take that money directly to hire journalists to do journalism, to create the journalism that then goes out to the bots that answer our questions. This seems like it might work. Yeah. I mean, though, a couple of things there. One is simply the obvious one: you're not seeing your local community paper getting these deals.
Starting point is 00:14:02 Right. And we know from all kinds of research that the death of local papers has been really bad for local government, local democracy, local accountability. So that's one. Good point. And that's partially just a matter of overhead: it's really hard to negotiate these deals. Fox's lawyers, News Corp's lawyers, are professionals. They're going to sit down in their room and, like,
Starting point is 00:14:24 they're going to negotiate the hell out of this deal with Google's lawyers. And then it'll be done. Right. Whereas Mission Local, which is my local neighborhood paper, doesn't have a lawyer on staff. Right. Like they would probably literally like publish in the comment section,
Starting point is 00:14:37 like, Hey, do we know any IP lawyers? Right. So it's just, there's just overhead there. Right. Yeah,
Starting point is 00:14:44 totally. The other thing though is, I'd be really curious to see one of these contracts, right? Because, so when you're licensing IP or when you're licensing text like this from somebody, one of the things you can have or not have in the contract is you can say, oh, and we agree that we're not going to contest these rights,
Starting point is 00:15:07 right? We can say like, oh yeah, these are definitely copyrighted, or we can all agree these are definitely not copyrighted, or we can agree not to agree, right? We can agree, we can put a line in there that says something along the lines of, well, just because we signed this contract doesn't mean we agree with you that copyright applies here. So this could be a deal that's permanent and lasts for the rest of our lives or until the next technological change. But it could be that this contract essentially ends the day Google gets a favorable ruling in court. Because if they get a ruling that all this scraping is fair use, they don't need a contract like this anymore, right? And they could just go do it.
Starting point is 00:15:55 And so we don't know, as part of that negotiation, what they agree to in that case, right? Like, if they get a favorable fair use ruling, do they keep paying? Do they walk away? That's actually, I think, a really important thing for our understanding of what the equilibrium is going forward. And we just don't know. For the moment, that's a totally secret clause. We don't know what that looks like. How clear is fair use, to your knowledge? Pretty ambiguous? Oh, I mean, like, in this specific sense or in general? I suppose in this specific sense, but generally, is it pretty ambiguous? Meaning it can go either way, depending on who reads it and how they discern it. Yeah. I mean, you know, it depends. There are some things, like the right of a library to buy a book and loan it out, that have been pretty clear. That's not technically fair use, actually, but the same general principles apply,
Starting point is 00:16:45 of like, maybe we could argue about that a hundred years ago, but it's been a hundred years since anybody argued about that in a serious way, right? So we're pretty sure: when a library buys a book, yeah, great, it gets to go do that. Whereas for scraping for web searches, there was a period of about 10 years where we didn't know if that was fair use or not. We were pretty sure it was fair use, but there was an ongoing series of litigation, actually mostly about porn thumbnails. But anyway, that was the driver, right? People were trying to figure out: is scraping for web search
Starting point is 00:17:26 fair use, especially for Google image search? And now that's not really contested anymore, right? There was a period of about 10 years where we spent a lot of time and money arguing about that, and in the past 10, 15 years it's become more or less settled that that is fair use. And we're going to go through that period again, right? Right now we've got something like 20 live cases of various sorts, between various sets of parties, arguing about this. And some of them are arguing fair use, some of them aren't, some of them are making weirder, more nuanced arguments. There's technically some DRM-related stuff in some of them, even. But the key thing is
Starting point is 00:18:06 nobody knows, right? And that period of uncertainty will probably last about seven to ten years, depending on how long some of these cases take to get to the Supreme Court. And then, of course, you're going to have to redo the whole thing over again in the EU and Japan and China. Rinse and repeat. Well, not just that. In seven
Starting point is 00:18:22 to ten years, it's going to be different. Don't we expect change between now and then? Something's going to change. The tech moves so fast, it's going to change under their feet. Well, I mean, the tech and the ambition, too. Because Google Book Search, for example, was the same basic tech, right? You're just doing it to books instead of web pages. But the ambition of doing that to books, boy, that was scary to a lot of people in the book industry, right?
Starting point is 00:18:51 Even though from a tech perspective, like, whatever, it's just a pile of text, right? The only real technical innovation was in the scanners themselves, right? OCR, yeah. Yeah, how fast could you OCR this? So, you know, will we get changes? I mean, will we see advances in synthetic text such that the machine can really eat its own tail, and therefore the original source text just gets further and further away, and it gets harder and harder to prove any connection? Or, I mean, the other thing that I think we really need to seriously consider at this point is: we were told for several years, right, that if we just fed more text into the machine, the machine would just keep getting better, up and to the right, like there was a direct one-to-one. And I think maybe we're seeing, with some of the news this past week about Google's search returning.
Starting point is 00:19:49 Embarrassing. Hilarious garbage, right? Embarrassing garbage. And there's just no amount of additional text you can feed to the machine to get it to not embarrass itself in this way under the current LLM paradigm, right? It's just not going to happen. So maybe we see all this stuff get put back in a corner a little bit, and it becomes less.
Starting point is 00:20:14 I mean, part of the reason why everybody's doing these deals now, right, is because everybody smells a giant pot of money. And maybe the pot of money is not as big as we think it is. Right. Maybe hallucination limits it. Hallucination, or just the inability to tell fact from fiction. I mean, my favorite of these ones from Google last week, people have been calling them hallucinations, but they're not hallucinations. It is really faithfully copying The Onion, and it just doesn't know that The Onion is The Onion, right?
Starting point is 00:20:49 Yeah. Well, talk about a hard problem. I mean, we've had humans getting tricked by The Onion for years, you know? My gosh, yes. They believe things The Onion says that are not true. Satire can be difficult to read, especially when that which they're satirizing becomes more and more ridiculous. You know, it's very difficult sometimes to know if that's a real article or not anymore. Right. So, you know, it's hard to blame the LLM on that one. Even so, I mean, for Google, this is such an embarrassment. It's so hard for me to imagine. I mean, this isn't the first time they've been embarrassed repeatedly in this current age, but now they're doing it right there in Google search. I mean, we knew it had to happen, but man, is it not ready? And like you said, maybe with this current crop of technologies, it's not going to be ready. Yeah. I mean, I think that's the really interesting technical question. And then how does that play,
Starting point is 00:21:33 you know, obviously with my hats on, right? How does that play into the legal side? But first, we're going to spend a few years seeing, like, is this actually ready for primetime, or going to be ready for primetime? I'm really curious to see what Apple does, right? Because they've struck this deal with OpenAI, but they're normally more conservative about the quality of stuff that they put out there, right? And so it may be that they sit on it for a few years. I'm sure they've done the deal with OpenAI, and I'm sure they're going to be experimenting with it internally. But are they actually then going to pull the trigger and ship it? They have all the money in the world, which means they can have all the patience in the world if they want,
Starting point is 00:22:13 right. Well, last week we were at Microsoft for Build, and we were talking with Mark Russinovich, who's CTO of Azure, about this exact subject, with regards to codegen, basically, in that context. And his take is that with the current transformer technology, there's no fixing the root cause. All we can do is put in the guards and the shields, and you can do defense in depth, right? Have one model that's checking another model, and do all these things in order to just make it more robust. And it's papering over the fact that
Starting point is 00:22:45 they're always going to have what we currently call hallucinations, until some new technology comes out which doesn't currently exist. That's what he said. And it sounds like, I mean, surely some of the smartest engineers and research folks in the world are at Google trying to solve this problem, and they're shipping a product that is woefully inadequate at doing this. Yeah, I mean, it's a really big culture moment for them, right? Well, to your point about satire, it's so interesting that you went to code, that you were talking about codegen at Build, because I think it's actually really interesting. You know the way these things happen: nerds got excited about all this. And I'm a nerd, so I say that with love, right? Yeah. And I include myself in this,
Starting point is 00:23:30 because Copilot was amazing, right? But also, because Copilot is code, we have linters, we have compilers, we have test suites. We have this whole framework of stuff. Forget even what Mark was talking about last week, of layering in different models and stuff. We've already got huge suites to help us tell garbage from not garbage. They're not perfect, right? But there's no test suite for "is this The Onion or is this not The Onion," right? Very few satirical code bases out there, except for maybe the ones Why the Lucky Stiff used to write, probably, but that's about it. And what was his test-driven
Starting point is 00:24:10 development? Well, I'd have to bring back his, uh, code bases. Yeah, exactly. TDD for satire. Yeah. And so I just don't know. I mean, I think maybe we all got nerd-sniped into, like, oh man, this is so amazing, without thinking through the fact that, actually, code is weird, right? Because code is creative and complex, we thought, oh, well, other creative and complex things will clearly be the next thing to fall. It's like, well, okay, it's creative and complex, but it's also constrained in ways that, like, the news and the law aren't. I mean, I think I told this story last time I was on the show: it turns out lawyers don't have our notion of compiling. You send it to a court, and it costs you a million bucks and three years of your life, right? And then you get back, like, oh yeah, sorry, you misplaced this colon. You lost the whole case.
Starting point is 00:25:06 We don't have the quick cycles that programming does. But you also have the constraints, which makes it a place where LLMs might have fewer problems, in legal documents, I think, because of the structure and because of, I don't know, they get pretty wordy, I guess. But I'm just thinking, like, versus answering arbitrary questions from all humans around the world. That seems like a very difficult one that Google's trying to do. Yeah, that is fair to them.
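The gating idea from a moment ago — that code, unlike prose, can be run through compilers, linters, and test suites before anyone trusts it — can be sketched roughly like this. This is a hedged illustration, not any real product's pipeline; the helper names and the toy candidates are invented:

```python
# Rough sketch of "defense in depth" for generated code: don't trust a
# model's output directly; gate it through checks we already have.
# All names here are illustrative, not a real product's API.
import ast

def compiles(source: str) -> bool:
    """Cheapest gate: does the snippet even parse as Python?"""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def passes_tests(source: str, tests) -> bool:
    """Exec the snippet in a scratch namespace and run each test against
    it. Any exception or failed check rejects the snippet."""
    ns = {}
    try:
        exec(source, ns)  # real systems sandbox this far more carefully
        return all(test(ns) for test in tests)
    except Exception:
        return False

def accept(candidates, tests):
    """Return the first candidate that survives every gate, else None."""
    for source in candidates:
        if compiles(source) and passes_tests(source, tests):
            return source
    return None

# Two hypothetical model outputs for "write add(a, b)":
bad = "def add(a, b): return a - b"   # parses fine, but wrong
good = "def add(a, b): return a + b"
tests = [lambda ns: ns["add"](2, 3) == 5]

chosen = accept([bad, good], tests)
print(chosen)  # the subtracting candidate is rejected by the tests
```

The asymmetry being described is that prose has no equivalent gate: there is no test suite for "is this The Onion or not."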
Starting point is 00:25:32 I mean, and they're adversarial questions now, too, right? For sure. Yeah, the thing that I'm curious about with law: we've seen some signs of these LLMs having a sense of structure, right? Law very much depends on, okay, well, we've got sentences, paragraphs; you've got to hold the logical structure of all that in your head. Lawyers never talk about it this way, but a lot of your first year of law school is jamming the big-picture constructs into your head in a
Starting point is 00:26:05 structured, organized way. And then you get new facts and you apply them; you sort of pass them through this structured filter. And LLMs are not yet super great at that, right? They're still trying to figure out that kind of structure. I mean, we know there's certainly some interesting research that shows they're figuring out structure in large code bases, and there are certainly some analogies there with the law that I think are going to be super interesting. But it's still early days, and there are plenty of bad examples of bad LLM search out there. It might be tractable. I don't know. We'll see. What's up, friends? This episode is brought to you by our friends at Neon. Managed serverless Postgres is exciting. We're excited. We think it's the future. And I'm here with Nikita Shamgunov, co-founder and CEO of Neon. So Nikita, what is it like to be building the future?
Starting point is 00:27:19 Well, I have a flurry of feelings about it. Coming from the fact that I have been at it for a while, there's more confidence in terms of what the North Star is. And there is a lot more excitement, because I truly believe that this is what's going to be the future, and that future needs to be built. And it's very exciting to build the future. And I think this is an opportunity for this moment in time. We have just the technology for it, and the urgency required to be able to seize that opportunity.
Starting point is 00:27:51 So we're obviously pretty excited about Neon and Postgres and Managed Postgres and Serverless Postgres and data branching and all the fun stuff. And it's one thing to be building for the future, and it's another to actually have the response from the community. What's been going on? What's the reaction like? We are lately onboarding close to 2,500 databases a day. That's more than one database a minute: somebody in the world coming to Neon, either directly or through the help of our partners.
Starting point is 00:28:21 And they're able to experience what it feels like to program against a database that looks like a URL, a database that can support branching and be like a good buddy for you in the software development lifecycle. So that's exciting. And while that's exciting, the urgency at Neon currently is unparalleled. There you go. If you want to experience the future, go to neon.tech: on-demand scalability, bottomless storage, database branching, everything you want for the Postgres of the future.
Starting point is 00:28:50 Once again, neon.tech. I think it's interesting at the micro level, like the clause level or the, I don't know, section level, so to speak. Because there's a lot of opportunity to sort of write a better accountability clause, or just something that's in an agreement that doesn't have to be a full-on document. Maybe there's an existing document already; you just need to massage it for this one use case. You explain the use case it currently solves, and you say, well, I need a new clause to now support this one section of concern, and there's help there. Now, I could just be the, um, the layman wishing for a magic genie inside this bottle to help me with my legal challenges whenever it comes to agreements or whatever it may be, because we sign agreements on the weekly around here, and they've largely not changed for a while,
Starting point is 00:29:47 but sometimes we get pushback on a certain clause, or just questions that I can't quite fully answer because I'm not the attorney. We're not going to shove it off to an attorney to answer that question, but it would be nice to have something that can massage words in ways that agreement can be found. Because I think, for the most part, as a layman, it seems like that's possible, or more possible than, hey, give me an entire document.
Starting point is 00:30:10 I think that's probably more challenging, whereas give me a clause or a section that covers a certain concern... that's a little easier to execute on. Yeah, well, this is one of these things... lawyers take it as a point of professional pride that, like, every sentence and every paragraph... like, if you ask me for a clause, right, I'm gonna write you the perfect thing. And, like, one... actually, we're pretty bad at that. Isn't that because they bill by the hour? Well, no, maybe not just that, but like, as a matter of professional craftsmanship, man... the best lawyers are really... there are plenty of bad lawyers out there. Don't get me wrong.
Starting point is 00:30:45 Right. But like, the best lawyers are like, I'm a craftsman. I'm making this thing bespoke for you. But even then, even if you get one of the good lawyers who's super great about that, they're still pressed for time. They're still like, I woke up on the... I haven't had my coffee yet, and you said you need it by 9am. Well, like, okay... and, you know, you don't want to pay for all the research to make sure it's 100% right.
Starting point is 00:31:10 And at that point, it starts getting to be a whole lot. I mean, I think one of these fascinating things, both sort of in general and specific to the law, is how do you compare... because instinctively we want to compare LLMs, and AI more generally, against what's perfect, right? Because I can tell you all the ways, if you ask an LLM for an NDA, it's going to make mistakes, right? Especially against, like, a perfect template NDA. But so are most lawyers, most of the time... especially if you just asked them to do it from scratch, they're totally going to forget things if you ask them to write an NDA from scratch. And so there's going to be a gap there, which, as a profession... how do we talk about that? How would you reason about that? I don't know. And then as, like, a legal system,
Starting point is 00:31:54 I mean, so I live in San Francisco; we see Waymos all the time, right? They're not perfect. So if you judge them against perfection... yeah, I mean, you know, they do some weird things on occasion. I saw one get very confused just last Friday. Are they safer than human drivers? 1000%. If I could flip a switch and turn every car in San Francisco into a Waymo tomorrow, I wouldn't hesitate. Would do it in a heartbeat. Right. And so what do you compare against, right? Are you comparing the LLM against perfection? Are you comparing it against what would a human do?
Starting point is 00:32:31 Are you comparing it against the last generation of Google search? I don't think we know. We haven't figured that out as a society how to do that yet. Yeah. I don't know. I think I would probably compare it against getting it done, you know, on time with less money that still achieves the goal. But I understand that law is massaged over the years.
Starting point is 00:32:48 It changes like a new case or a new win in court changes the next agreement that can be written because now there's new case study, so to speak, or case law that you can reference as backing for X, whatever that X might be. Well, this is one of the things the lawyers are terrible at, right? Like we love our boilerplate. We copy and paste that stuff. And like, oh, there was a new case. Uh, yeah, I'll get around to fixing the boilerplate tomorrow. Right. And then like, maybe you do, and maybe you don't. There's a great book by an old law prof of mine where he talks about how there was this, this one clause in international bond contracts that was
Starting point is 00:33:25 there for like under 20 years. And nobody really, everybody thought they knew what it meant. But if you like put the plain language in front of people, like in front of a lawyer who wasn't a bond attorney and you're like, what does this mean? They would say exactly the opposite of what the community thought it meant. And finally, there was a judge that was like, hey guys, this clause is terrible. I know you all say it means this, but like, I just read the thing and it doesn't mean that. And then everybody put their hands over their ears and didn't change it. And they just kept copying that boilerplate. And about five years after that one case, that one case was sort of a small one, like a few hundred million dollars. And then Argentina sued over the same language for like ten billion dollars and like threatened to like blow up the entire international bond market over the exact same language.
Starting point is 00:34:22 So this law professor of mine, like, went around New York, because all the international bond lawyers are in New York... basically New York or London. And he's like, so why didn't you change it? And the book is just, like, compiling excuses, rationales. And it's a really... uh, I mean, it's a good nerdy book, but it sort of reminds me of The Mythical Man-Month a little bit, right? Where, like, there are just things that we all do as a practice that aren't always the right thing, but, like, they're instinctive, they're intuitive. Lawyers are just as bad at that as anybody else. Sorry. Well, that's okay. Well, then you can apply this, you know, to a whole new world, which is the stock market, or to investing, right? That kind of data.
Starting point is 00:35:04 Like, how do you apply it there? You know, because this comes back to this larger question I've been looming on, which is, is it too late to opt out? Because that was the question earlier, right? Like, you know, how can we opt out? Is it, you know, can we opt out like with the news organizations, with different sites? Right, with content.
Starting point is 00:35:20 Right. Like, I think societally, I think humanistically, it is too late. In my opinion, it's probably too late. Let me just say it more clearly: I think it's too late to opt out of AI. So now what? What do we do now, essentially? You have art and text generally out there in every permutation, and then you have investments probably happening. Like, is there any news around AI and investments, you know? Like, how has this kind of gone into predictiveness... what might happen, what might not happen? I mean, all of my baseball games are now sponsored by a mortgage company that claims to evaluate your mortgage applications with AI. So, sure. I don't know how true that is, right? Whether that's just something we would have called an algorithm six months ago, I can't say, right? But I mean, yeah, I don't know. Right. I mean, I think that's actually a really interesting... because they're both, like... you could imagine, like, sort of bottom up, right? Like Reddit actually staging a successful revolt, or maybe on a per-subreddit basis.
Starting point is 00:36:30 I know there are some that say they're banning AI-generated content. How good they are at that, I don't know. Wikipedia is definitely trying to figure out, like, what do we do about AI bots? So you can do that bottom up. We can ask our legislators to give us some top-down options, right? Watermarks or things like that. But I don't know. I think we're living through a period where we're going to have to throw stuff at the wall and see what sticks.
Starting point is 00:36:55 Some of that stuff keeps the honest people honest. It feels like pushing back for pushing back's sake because of, in one case, fear. And I think fear comes from the unknown. We have a lack of knowledge. We can't predict the future, right? And this is a very scary moment. There's a lot of disruption that's happening. But you can point to history and say there was disruption here.
Starting point is 00:37:15 There was disruption there. I mean... and, you know, horses no longer pull around things. I don't know how you got to where you are now, Luis, but did you go by horse? Probably not, right? I did not. I did not go by horse. Magical e-bike. But yeah, the last time you traveled any sort of distance, you probably flew in a plane rather than, like, by horse carriage across the country. That changed your entire life. That's how it used to be 100 years ago, you know, where you
Starting point is 00:37:40 would travel across. I mean, if you ask my mom, she's pretty sure I came to California on a covered wagon. That's why I don't go back to visit. Maybe that's why. But disruption happens everywhere, right? Like, it's not... but this is such a big disruption. It's such a big opportunity for disruption
Starting point is 00:37:58 and a big opportunity to silo. I think that's the biggest concern I have with News Corp and these deals is how you silo the big incumbents and those with money and power and maybe even going back to some things Cory Doctorow talked about with like – what was it called again? Chokepoint capitalism. This whole thing where it's a chokepoint against the artists in a way or the creators in a way that now it sort of puts this toll road, this gate, this you can't go through unless you pay. And then only if you pay can you have your content in this AI, which then generates results, which impacts millions.
Starting point is 00:38:34 And you get, it's back to the algorithm thing again, where you can only become known if somehow you're feeding this beast. And I just, that's a strange world to live in in the future. I hope it works out, but I'm just like, well, how is it going to work out? I'm just, that's where I camp out. It's like, not so much doom and gloom kind of thing, but like, really, how will this really work out if we all submit to this thing? Is it truly the all-knowing and helpful,
Starting point is 00:38:57 or is it, well, useful in certain ways and it's compartmentalized? Boy, if I knew that one. I mean, I'll tell you, my sort of gut sense... there's a really terrific book I read a couple of years ago on the printing press, a history of the printing press. Long story short: the printing press was even more impactful than you realized, probably, but none of us would trade in for, like, a pre-printing-press kind of life. But also, those first hundred years were pretty rough, right?
Starting point is 00:39:26 Like religious wars, religious censorship... like, a bunch of stuff in that first hundred years, as societies were figuring out the impact of the printing press, was not pretty. And I suspect we're going to be going through something like that, where we see a lot of unpleasantness, right? Even if our grandkids will be like, I can't believe they didn't like AI... and our great-grandkids won't even know, right? Our great-grandkids will be like, of course they loved
Starting point is 00:39:56 AI from the beginning, right? Um... and it's just... that in-between period, as you say: a lot of dislocation. There's going to be a lot of choke point stuff. There's going to be a lot of mediocre, more than anything else. We already had this with Google Search, right? The SEO crap that was dominating everything... it's not like Google Search was great a year ago, before they put the AI stuff in. No, it's been failing,
Starting point is 00:40:26 which is why it's ripe for disruption, which is why I think ChatGPT posed such an existential threat to Google. Because really, if you think about what we will like years from now, I mean, is it too late to opt out? We don't actually want to as a human race because this is kind of,
Starting point is 00:40:43 okay, it's a proxy of what the dream is. It's like, I can just talk to my computer and it has answers for me. Like, why would I want Google searches? I just want... now, the problem is you don't always get the truth, but you just want the answer, right? It's a better user experience, ultimately... until it tells you that you should go eat rocks once a day, because that's one of the things it said: it's healthy to eat a rock a day to live longer, or some crap like that.
Starting point is 00:41:04 Or geodes. In a world where it works, it's fundamentally better than what we currently have. And so there's no going back from that. Yeah, I think that's right. But then I worry about sort of the ecosystem effects, right? I mean, I think... because you're talking about opting out, there's two sides of that opting out, right? There's opting out as a consumer, right?
Starting point is 00:41:24 As a user where all users of Google search a bazillion times a day, right? I mean, I'm on DuckDuckGo, but I still haven't. DuckDuckGo just does not flow as a verb. So I'm still- DuckDuck went. She called it Deego or something like that. Somebody was telling me Kagi is great. I don't know.
Starting point is 00:41:40 Kagi, I have no idea how you pronounce that. I've heard that as well. I haven't used it. Yeah. But then as content producers, and we are all as humans to some extent or another content producers, like what's that look like? You know, how do we choose, how do we opt out or not opt out? Degrees of opting out.
Starting point is 00:42:00 Like, that's a really... I think that's a sort of fundamentally different question, right? Because, like you're saying, Jared, from a search perspective, right? If I've got a digital butler who anticipates my every need and just has what I need, like, that's obviously better. But if, to get the inputs for that, we sort of homogenized all content production... like, I'm not sure. That's a different question about whether you want to opt out.
Starting point is 00:42:29 And I think a much harder one. And I don't think we have any good answers on that. What's up, friends? Got a question for you. How do you choose which internet service provider to use? I think the sad thing is that most of us, almost all of us really, have very little choice. Because ISPs operate like monopolies in the regions they serve. I've got one choice in my town. They then use this monopoly power to take advantage of customers.
Starting point is 00:43:10 They do data caps. They have streaming throttles. And the list just goes on. But worst of all, many internet ISPs log your internet activity and they sell that data on to other big tech companies or worse, to advertisers. And so to prevent ISPs from seeing my internet activity, I tried out ExpressVPN on a few devices and now I use it to protect that internet activity from going off to the bad guys. So what is ExpressVPN? It's a simple app for your computer or your smartphone that encrypts all your network traffic and tunnels it through a secure VPN server
Starting point is 00:43:45 so your ISP cannot see any of your activity. Just think about how much of your life is on the internet, right? Like, sadly, everything we do as devs and technologists, you watch a video on YouTube, you send a message to a friend, you go on to X slash Twitter or the dreaded LinkedIn or whatever you're doing out there. This all gets tracked by the ISPs and other tech giants who then sell your information for profit. And that's the reason why I recommend you trying out ExpressVPN as one of the best ways to hide your online activity from your ISP.
Starting point is 00:44:18 You just download the app. You tap one button on your device and you're protected. It's kind of simple, really. And ExpressVPN does all of this without slowing down your connection. That's why it's rated the number one VPN service by CNET. So do yourself a favor. Stop handing over your personal data to ISPs and other tech giants who mine your activity and sell it off to whomever.
Starting point is 00:44:40 Protect yourself with a VPN that I trust to keep myself private. Visit expressvpn.com slash changelog. That's E-X-P-R-E-S-S-V-P-N dot com slash changelog. And you get three extra months free by using our link. Again, go to expressvpn.com slash changelog. It's kind of the luxury that Hollywood has insofar as they can just invent data on Star Trek The Next Generation who has all of the world's knowledge in his computer chips.
Starting point is 00:45:22 But they don't have to actually figure out the hard part of, like, where Data got his information from, and how many people that displaced, and, like you said, the wars that maybe happened in order for that to just be a fact of that reality. Sounds like you just wrote a prequel. Mm-hmm. Ooh, some good fanfic there. Yeah. I have been sort of jokingly... with, I mean, with,
Starting point is 00:45:43 with reading, and I want to do movies next... like, what are the AIs in fiction that weren't like Terminator, right? Mm-hmm. You know, what are the ones that... Meaning positive? Not necessarily positive, but at least not negative in the same, like, cliched way. Or the Matrix even, right? Like, the Matrix is still machines. So I would categorize that as AI.
Starting point is 00:46:09 Like, they're intelligent to some degree, right? Yeah, yeah. Well, yeah. I mean... well, I asked about this on the Fediverse, and quite a few people were like, well, you need to watch this specific Next Generation episode about Data
Starting point is 00:46:23 and whether Data is human, that kind of thing. Do you recall the episodes? We can put them in the show notes, because I want to go check it out. Do you have a list? Do you know which episode that is? I'll... I will... I'll find it. I'll send it to you guys. You can put it in the show notes. You're amongst nerds. We will literally go watch the episode. And, you know, uh, Her came up. I mean, obviously... I mean, it was Her that I was like, wait, yeah, I guess I need to rewatch Her. Because did these guys miss it as much as I think they missed it? I did not remember coming away from that movie with, like, a good sense of, like, oh, cool. AI.
Starting point is 00:47:03 It was largely a love story to my knowledge, right? It was like an unexpected love story. Yeah, but it didn't end well, right? I don't recall how it ended. I think she... I think she's in love with everybody, right? Yeah, well, and then doesn't she like, don't all the AIs just, aren't they like,
Starting point is 00:47:19 yeah, actually, we're in love with each other and you guys are boring, and we're out, peace? Right. I'm trying to... I just deleted that in my brain just now, just in case. Uh, Adam's usually the one who, uh, who spoils things around here. So this is... I do have to spoil one more thing, Jared, if you don't mind. All right, I'll just close my ears. If you haven't watched the TV show Silicon Valley, Luis, it's largely about artificial intelligence. Have you watched it end to end?
Starting point is 00:47:48 I got through, like, the first two seasons, and then sort of... I was watching it on... Well, actually, you know what happened? I was watching it because Tidelift, my company, is headquartered in Boston, so I was doing cross-country flights. And the thing is, all my co-founders are East Coast, and they watch Silicon Valley as, like, anthropology. They're like, then we need to... and they would, they'd refer to people by, like... You know, that's how it is, Jared. It's not how I watch it... it's how it is. Okay, well, that's the thing, right? I had, like... I had avoided watching it for exactly that reason, right? Like, there's a whole
Starting point is 00:48:23 No, it's two... it's two reasons. It is anthropology, but it's also, you know, very comedic. I mean, it's a masterpiece, in my opinion. It's hilarious. But if you want one more to watch on artificial intelligence, and not exactly Terminator... it doesn't end well, I'll just say. But it ends... it actually does end well, actually, now I think about it. It just depends on your perspective of if it's well or not. Later seasons.
Starting point is 00:48:51 So, I mean... honestly, I think it's worth a watch for anybody in the software world, in my opinion. If you're in software, I'll just say this right now: if you're in software and you've not watched this show end-to-end at least once, you're wrong. But man, there are just... I mean, the end of season one, where they get the pallet of Red Bull and they're staying at a hotel... You did that? Literally that hotel. I had a morning order of Red Bull at 5 a.m. every morning, but it wasn't for TechCrunch Disrupt... it was for the Oracle-Google trial.
Starting point is 00:49:26 But, like, I still... I cringed, uh, because they show the outside shot of the hotel and then they, like, cut to the Red Bull. That's usually the reason most people don't watch it... it's too close. Though the only reason I was bringing it up was just because it has artificial intelligence, and it does end uniquely well, or not well, depending upon your perspective. So I would definitely add that. It's unexpectedly about artificial intelligence. I'll put it on the list.
Starting point is 00:49:52 I don't find the, like, Terminator stories all that. I mean, again, I live in a neighborhood with killer robots driving around all the time, and everybody's just like, eh, they stop at stop signs it's fine are you talking about way most yeah yeah way most well and briefly cruises uh zeus they don't have actual guns though no i mean that's what i mean like well what's the uh in america if they did would you be more uncomfortable than you currently are oh man more literally more people get killed in the city by cars than by guns so like fair car accidents are
Starting point is 00:50:26 like, one of the number one killers, like cigarettes and car accidents, you know? It's crazy. I got, uh, some stuff in my YouTube algorithm because I watched one video... that's how it does it... uh, one video on, like, crazy car crashes you must see. You know, I don't know what the headline was, but it's something that got me, and I was like, oh my gosh, I should check this out. Gotcha. And now... like, that was yesterday, and today I drove for the first time since watching a few of them, because they got me again and again, and I was like, OMG, I'm scared to drive, because, like, this is what could happen when you drive. Well, to pepper the conversation a bit more, I asked our favorite LLM. Well, at least my new favorite, GPT-4o, as they call it.
Starting point is 00:51:14 The Matrix, Ex Machina, Her, I, Robot, A.I. Artificial Intelligence. That's what the movie's actually called, A.I. They had to acronym it and spell it out. Transcendence, which I think had Johnny Depp in it, Jared. I don't think I saw that one. I heard of Transcendence. Yeah, it was interesting. Ghost in the Shell. And that's had, like, a couple anime versions of it,
Starting point is 00:51:29 a more modern version of it, I think, that included ScarJo. Tron: Legacy was obviously about AI. Blade Runner 2049, I guess. Original Blade Runner as well. Terminator, which... we're striking that one. Get out of here. Bicentennial Man. WALL-E.
Starting point is 00:51:44 Chappie. The Machine. Upgrade. Alita: Battle Angel. The Hitchhiker's Guide to the Galaxy. Big Hero 6. The Stepford Wives. Automata. Eagle Eye. Morgan. Stepford Wives? Yeah, Stepford Wives. That's an interesting one. Deus? Yeah, yeah. Next Gen. Simulant. Archive. These are ones I'm starting to... maybe these are hallucinations at this point. And maybe, potentially... I think we're obsessed with this topic. Look at all these movies.
Starting point is 00:52:09 Right. They're starting to hallucinate at this point. The A.I. one... the one that they literally had to spell out... that was Spielberg working on that one. Jude Law in that one. Jude Law was in that.
Starting point is 00:52:22 Yeah. Yeah. Yeah. Yeah. That came up several times on the Fediverse. And it's weirdly... it's recent enough that it probably feels more modern. I haven't watched it since it was in theaters. Same. I feel like Haley Joel Osment maybe was in that.
Starting point is 00:52:38 And then... Keenan Feldspar. That's the actor? Yeah, that's a joke, because that's his name in Silicon Valley. The same guy plays... Whole different thing. We got season one and season two here. You can't keep doing this to us.
Starting point is 00:52:50 We're not going to catch these pitches. All I remember from that movie, besides just generally Jude Law... Haley Joel Osment... and then, like, he's a robot,
Starting point is 00:52:59 android, whatever... is it lasted, like, 45 minutes too long, and there was this weird thing at the end where, like, they went back to some home place, and it was, like, in a house, and I just... like, why is this movie still going? That's all I can remember. I can't remember exactly why that happened, but I was like, are we still sitting here in this theater? It's ridiculous. So maybe just have ChatGPT
Starting point is 00:53:19 summarize it for us and we don't have to go back and actually watch it. Yeah. Can we trust ChatGPT to summarize the AI movies for us? It's an existential question. It's going to tell us Terminator was the hero, right? Right. Well, it could be confused because, you know, Schwarzenegger came back as the hero. So it's not exactly straightforward. He was the villain.
Starting point is 00:53:39 He became the hero. There's two more, past the hallucination standpoint, I think are worth mentioning. Elysium, which had Matt Damon in it. Uh, The Signal. And I Am Mother... never saw it... which had Hilary Swank in it. I Am Mother? Yeah. It's, uh... I think it was on Netflix, if I recall correctly. Basic premise is a child that had a mother, that lost the mother, I believe, and that was raised by machines. That's, I think, the basic premise of it. Interesting to watch, though. Kind of like The Jungle Book, but with AI instead of... Yeah, that's going to be Hollywood's new trick
Starting point is 00:54:11 is just every old movie that they, uh... Don't give them that! You know, they don't... That's actually a good use of AI, right? Like, I want to write something like this, but in the light of X. Would that be a good use of it, or just a use of it? Come on. Well, that would actually be a good use of it, because you have to think less about the research, and it can give you 50 responses, and then you can start thinking faster. At the end of it, you have a story about The Jungle Book, but it's AI instead of bears and wolves and stuff like that. Mashups, you know... help me mash up something.
Starting point is 00:54:40 It's not a bad use of it, let's say. Child raised by Alexa. Oh, gosh. How, then, do you feel about the way that AI is impacting, literally, software developers every single day writing code... you know, trying to stop, you know, the next takeover, so to speak, from XZ hacks and stuff like that? Like, what are your thoughts on all these different things that we deal with as developers, that may or may not displace us, may or may not anger us (usually might), and may or may not circumvent the open source code we put out
Starting point is 00:55:12 there? I mean, a couple of things. Like one, I'm not super worried about displacement. Like there's so much demand for good software out there, right? That like, this feels to me like saying, you know, when we went from handwriting assembly to using compilers to say like, well, it's going to displace the assembly writers. Like, okay, yes. But we all got more productive. I think that might not be the case in all domains, but I think in code, there's just so much more demand than there is supply
Starting point is 00:55:42 of developers. I'm not particularly worried about that one. There's this other... there's, I think, a more interesting concern of, well, is this creating new cruft? Is it creating new technical debt? Is it creating new security vulnerabilities? And on the one hand, I think it probably is. And on the other hand, have you looked at our code lately? Even before AI, we had piles of technical debt. We had a lot of vulnerabilities. And I am not... so this is one of these things where the question, as we were saying earlier, is: what is it you're measuring against? And I can see a legitimate case of, like, maybe it does make these things worse.
Starting point is 00:56:24 I think we need to understand and research that. But at the same time, also, these things are already very bad, right? Like, XZ is not caused by AI. That we're aware of. These are mistakes that we've been making for a long time. So, you know, I'm more worried, with my Tidelift hat on, about some of these questions of how do we think about these piles of very human systems that we put a lot of pressure on. And I mean, XZ was really... I think, actually... I want to float this with you guys, because I don't think I saw... I was reviewing some notes for Upstream, our conference coming up soon, and was realizing... you know, everybody read the email from the XZ maintainer, who was like, yeah, I'm burnt out. You know, I have some stuff going on in my personal life. I just don't have... The thing that I'm curious about: what do you guys think about this? Because it jumped out at me
Starting point is 00:57:27 weeks later as I was reviewing all this. Nobody... He mentions in there that he's been maintaining the project for 15 years. When was the last time you guys had a job for 15 years? Straight. Without changing... I don't know. How long has the podcast been on? Well, I was going to say, you just happened to hit the wrong
Starting point is 00:57:44 two people, because we've been doing this for 15 years. But generally speaking, that would have worked very well. In the end, "I haven't had a job for 15 years" holds for most folks. That's true. A lot of change in other career paths, but we've been doing this for 15 years. And the thing is, that library's got to be around for another 150, probably, right? So what are we doing about that kind of long-term thing? Maybe LLMs help with that, or maybe they make it worse. Or, more likely...
Starting point is 00:58:14 It's a little bit of both, right? Yeah, that seems like an intractable problem. Any software of sufficient value over the long term will outlast its creator. As long as it continues to provide value, it's going to continue to exist and be deployed. And even after it stops providing value, it's still going to be out there in these latent places
Starting point is 00:58:34 that just never kept up with the Joneses. That's one that I think about a lot. We talk to a lot of people who have ambitious goals for very long-standing projects. I appreciate that from them, and I ask them questions like, well, how are you actually going to do that? One that comes to mind is Drew DeVault's new language, Hare, which he intends to be a 100-year programming language.
Starting point is 00:58:57 So we did a show with him, and it's like, well, if you're going to make it to 100, first of all, it has to be valuable to people. So it has that to overcome. Not every project is worth it at the end of the day. But if you're planning for that, there are certain things that people do around longevity. And every single one has to do with replacing themselves early in the process, making themselves dispensable, not indispensable, which is very difficult and takes actionable steps and planning. And it's still hard to pull off.
Starting point is 00:59:29 You can't find somebody else who's willing to do the work. So I don't know the answer. I just know that, yes, that is a very real and very hard problem to solve, and we don't have to solve it just once. We have to solve it thousands of times. Yeah, we have to solve it thousands of times. We've talked for a long time about how do I make my project more sustainable, but I think it's going to become more acute. And I don't know that we have a great, with my lawyer hat on, I can't help but think about what are legal solutions that we could use
Starting point is 00:59:55 to help with things like this, right? Like, do we need a JavaScript maintainer co-op, where you're one of these smaller projects and there's a formal way for you to... hey, congratulations, you entered the 10-million-download club. Come on in, we've got our private maintainer space and our private revenue streams. But that may be a little bit too much; my brain runs to those kinds of solutions. I suspect they are part of the story, but they're probably not all of the story, right? The human parts have to come first.
Starting point is 01:00:33 And I don't think LLMs really cut one way or the other. I'm sure they'll make some parts of that easier. Adam, we can write the co-op agreement with GPT. I think they help maintenance for those who want to maintain. It's going to make a maintainer's life easier in certain tangible ways, just like it's going to make a lawyer's life easier in certain tangible ways, where it's like, that thing that used to take two hours takes me five minutes now. And so now I can sustain myself personally longer. But I don't know about... But what if you have 20 times as many things to do because of bots on the other end? I mean, our financial system is already in large part, you know, Adam,
Starting point is 01:01:08 you were talking about finances and the finance system. Our financial system is in large part bots trading with other bots, right? On the sort of millisecond timescale. For sure. Are we going to get, like... Somebody should write, again, we're generating a lot of good science fiction ideas today, guys. Somebody should write a short story about what GitHub looks like when it's entirely bots filing issues, writing patches, approving patches.
Starting point is 01:01:34 What's GitHub look like on that day? With the humans just sort of standing back and being like, I don't know how this software works, but it does. If an issue closes in the woods and no one is there to hear it... What's the semver change? We add an extra digit to semver: all the changes in this revision were done by bots. There you go. It's like major, minor, patch, and bot. You know, something like that. Do you, uh, do you hold the word yet, or for now? I suppose for now is a phrase and the word yet is yet.
Starting point is 01:02:05 It's just a word. But do you hold that near and dear when talking about this stuff? Because things change, right? Like a lot of this conversation is contextual to now. Yeah. The time of now, the present, right? Do you have the for now or the yet parentheses in mind when you talk? I mean, it's not just time.
Starting point is 01:02:22 It's not just time, right? It's also place. Sure. You know, now, yet, also this place, right? I mean, also language. English is better supported because the corpus of text is better. It's just bigger. You know, what does this mean for small languages? Maybe this makes it easier to teach small languages, right? Kids can have a robotic tutor in the small language of their choice
Starting point is 01:03:08 and their people. Or maybe it becomes totally irrelevant, because everybody just speaks English; they've got an English tutor too. I think it is both genuinely exciting, right? I try to remain very positive about all this stuff. Me too, yeah.
Starting point is 01:03:24 I mean, even what you just said was kind of positive. I mean, I think those are good things to layer onto humanity. If a child can learn a new thing faster with a tutor... The human tutor is totally possible as well, but it's not always possible financially, or even time-wise. Like you said, the time and the when. A literal human may not have the time or the geographic location to be present in that child's life, you know, one-to-one. Whereas on the other hand, we can invent that thing via what we call artificial intelligence today, and it can supplant what would normally be a human function and potentially do it just as well, or maybe better. And that's a good thing. I think those are good things. But then we
Starting point is 01:04:03 get into this position of, like, who is the arbiter of what's good and what's not good? What are, as we've talked about before, the unintended consequences of allowing this thing and opting in? Because we can't opt out. Everyone's stuck; we're all opted in, because, like you said, Silicon Valley is adopting the stuff in unique ways, and so is the EU, and so is Japan, and so is China. There is a layer of "we cannot opt out" that we don't personally hold anymore, you and I, the three of us in this conversation. There's a lot of good things, but there's so many unintended consequences or bad things that may result. And our decision-making processes as societies aren't well adapted to move at this speed. Yeah.
Starting point is 01:04:46 Right. Which isn't to say I would trade our democracy for some of the other options on offer at this particular moment. But it's been really striking, for example, in San Francisco, to watch local politicians struggle with how do we regulate Waymo. Because none of them want to acknowledge that the worst safety problem in the city is not drugs or crime, it's cars. You say that, you're going to get voted out of office immediately. Oh yeah.
Starting point is 01:05:19 We, I mean, we have this whole thing with... anyway, you don't want to get me started on San Francisco politics. Well, it is politics, but it's also, in a way, stupidity, right? If there is a major problem and you're turning a blind eye to it, and you are in a position of power to change how that works or how it does not work... Wow. That's just the silliness of the world. Yeah. But that's, I mean...
Starting point is 01:05:42 That's local politics all around the world. Yeah. I mean, politics is just another way of saying making decisions, right? Yeah, for sure. And making decisions is hard, is fraught. Like you say, there's no magic wand we can wave to make some of these fears go away. I mean, the fears are real, right? Sometimes they're out of proportion, or they're based in... I don't know. You all must have tried to explain some of this stuff to family. I mean, I try to explain how Waymo works to my mom, and her first response is, I don't know, I don't trust it. And then I have to say, well, mom, but within five years, I'm going to have to take your keys.
Starting point is 01:06:27 And then she's like, well, I won't trust it, but I'll ride in it anyway. Yeah, given no other options. Yeah, it's very difficult to reason about, difficult to explain. Like you said, just making decisions with a large populace. You're not going to have agreement, so it's difficult to rally around that. Even in small populations, right? I mean, Silicon Valley, we're super homogenous here, pretty much.
Starting point is 01:06:54 And we can't figure out, like, is this stuff going to... you know, are we going to have AGI in five years? And so none of these discussions matter, because we're all going to start uploading our brains or whatever. Yeah. That's been my refrain. I probably say this more than Adam brings up Silicon Valley, but I'll say it again anyways, because he never stops: it's amazing to me how divided brilliant minds are on this topic. You can go from the doomers to the utopians, right?
Starting point is 01:07:28 To the e-accelerationists, whatever it is. I have no idea how you... I think that's the first time I ever said it out loud. I hope it pissed somebody off. I'm upset. So you go from that extreme to that extreme, and you go to the individuals, right? And you look at their credentials and their histories. And of course, there's going to be some outliers in there of, like, whatever. But very smart people,
Starting point is 01:07:49 very informed. And they are completely on the opposite sides of what they think is going to happen. And I don't know if you can name a technology like that; I can't remember one. I mean, even the web itself wasn't so divisive. There were people that were not thinking it was going to explode the way that it did, but they weren't like, it's going to destroy humanity, right? So that to me is just interesting. Here we are, and we have massively wild differentiation of opinions. It's not like the smart people know one thing and the dumb people don't get it; there's pretty smart people and dumb people on both sides of this argument. Well, some of that has been informed by just the past few years of our tech history, right? I just read a great book called The
Starting point is 01:08:29 Victorian Internet. It was about telegraphy, telegrams. And it's all about, like, well, they all thought this was going to save the world. They were like, it's going to bring about world peace; we're all going to be able to chat with each other. And this book was written in '99, so it was very much a sort of, hey, you all saying that the web is going to save everybody from everything, maybe hold your horses a little bit. Right. And it wasn't like Doomer, right?
Starting point is 01:09:01 I mean, obviously the telegraph didn't end the world. And the author wasn't trying to... I mean, it's interesting. I think if you wrote the same book now, probably there would be at least some people trying to make it out that the telegraph ended the world. It's like, hard to prove that one, guys, right? How about the Segway? Remember the Segway? I think it was mostly just hype based on the guy who invented it, but he had a huge amount of hype surrounding the launch of this revolutionary new transportation
Starting point is 01:09:27 mechanism. And I remember, I mean, it made mainstream news that this was going to change the world. And he came out and announced it, and everyone was kind of like, womp, womp, womp. Yeah, it's like, wait, you revolutionized the way mall cops get around, but that's about it. But that's such a great example, right right about how innovation is channeled by the stuff that's already there because if we had i mean look at what's happening in if you go to like stockholm or copenhagen where they have good bike lanes the grand descendants of the uh in all the form of all these electric scooters and stuff like actually are changing like are replacing cars making cities uh but in you know in places where if your built environment
Starting point is 01:10:14 means you have to go 10 20 30 miles to get to the corner store like right of course it's not changing things right um and so again it's down to your point of when where how all these things vary a lot well i would certainly we were just in seattle for as jared mentioned for building uh we got back to the hotel on our scooters because we limed around we we had the chance we walked as well because we're like hey it's a nice night it's cool let's walk there's a couple times we're like let's scoot and we scooted and we got back to the hotel i'm like and like true dumb and dumber fashion i was like can we just keep going jared and he's like yeah let's just keep going and so we scooted down the hill like we just kept going like we just went on a joyride we just
Starting point is 01:10:56 scooted around downtown seattle it was a lot of fun oh it's fun if that was an option in my town i would certainly scoot as opposed to driving my f-250, which does, you know, it houses diesel in its fuel tank to make it go. That's how it works, just so you know. Which is more expensive, obviously has gases and things that, you know, happen as a result. But at the same time, to consume the electricity somewhere, unless it was turbine powered, if it was coal powered, you know, that do I know my electricity is green electricity or is it renewable electricity? I don't know those things, but I would certainly choose a different mode of transportation. If there was a different option in certain scenarios in my local town, you would die on a scooter. Not because of the scooter, but because.
Starting point is 01:11:40 Probably by somebody with an F-250. Right. Maybe. Yeah. That's actually incorrect. I bet you it would be, first off and foremost, it would probably be a Tesla because there's so many where I'm at. Like there's Cybertrucks everywhere.
Starting point is 01:11:53 Teslas. I'd probably die from a Tesla. Speeding. Turbo mode or something. We can all agree that it was a Dodge Charger. Okay. That's the one. Okay. Cosign.
Starting point is 01:12:04 I cosign that. Yeah. No, I mean... But I'm sure if you live... We drove a Tesla from Montana to San Francisco a couple of years ago, and we stopped in Eastern Oregon. And I was talking to somebody like a year later; I met somebody who lives in that neck of the woods. He's like, oh yeah, the one Tesla charger in all of Eastern Oregon? That's at my grocery store. It's a 45-minute drive. Wow. That guy's not swapping out for a scooter anytime soon, right? The geography of how he lives is just not compatible. Right. Yeah. Which is fine. Which is fine. We were looking at a new car recently; we had to drive 40 minutes to the nearest decent, you know,
Starting point is 01:12:46 mainstream car lot. There's just not one in my small town. Walmart is not down the road. It's 30 minutes away from where I am. That's how far into rural I am. Yeah. My mom's in suburban Miami, and she basically doesn't do anything in her
Starting point is 01:13:01 life that's closer than a mile. And for me, anything further than a mile, living right in the city... Just forget it, right? Yeah. It's like, well, I mean, I won't forget it. But it takes planning. It takes, you know... It's like, oh, we're going to use the cargo bike instead of the...
Starting point is 01:13:11 Yeah. And I think we're going to see a lot of this with... I mean, it probably won't be geographic, right? But different jobs are going to be impacted in such different ways with all this new tech. And different jobs, different cultures, different languages are all going to be impacted in totally different ways. Maybe Hawaii should be an LLM-free zone. Another free sci-fi story out there. I like that. Are you a generative AI?
Starting point is 01:13:41 Cause you're really cranking them out. I'm on fire this morning, guys. And I haven't even had my coffee yet. Let's give you another opportunity then, maybe. And I think we can go around the table with this. Let's see if this is a good idea. Let's name some positive things that we would like to see happen as a result of what we call the current version of artificial intelligence and where it may go. You mentioned in the Blink of an Eye or Thanos snap,
Starting point is 01:14:03 you would Waymo SF, and maybe every other city. So that's an example. Maybe you can expand on how that might actually roll out. And what are other examples of positive impacts of AI, not just the doom and gloom? So this is a very small, petty one. But look, I have a CS degree. I haven't written any code in useful anger in 20 years. But I had to, um, I had to grab a bunch of federal government documents for a
Starting point is 01:14:34 project I was working on. I didn't have to... here's the interesting thing. It was like 700 pages that I wanted, and they were each, like, PDFs five to 15 pages long; 700-and-some pages' worth of them. So I asked ChatGPT: one, write me a Python script to download all these; two, summarize each of them and give me the most important points out of each of them. Didn't matter if it was 100 percent accurate, right? I was trying to get the gist of it more than the whole thing. That's a project that I wouldn't have even tried to take on without ChatGPT, right? Like, maybe if I had an intern, I would have sent an intern to do it, but I wasn't going to do it myself. And I think there's going to be a lot more personal scripting, personal control of computers in that way, aided by ChatGPT. That might end up being
Starting point is 01:15:27 small in the grand scheme of things, but it could also end up being like Excel spreadsheets: the whole world ends up running on Excel spreadsheets, and nobody actually knows that. It could end up running on small ChatGPT scripts, right? Because it's one thing if you're like, is ChatGPT going to write the next, I don't know, the next self-driving car? Probably not. Too complicated, too many concepts there. But can it help me write this little script
Starting point is 01:15:53 that just does a few little things? Hell yeah. Yeah. Right? I dig that. Yeah, and on the big side, again, globally, a million people a year die in car crashes.
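The kind of one-off helper Luis describes, downloading a pile of government PDFs and grouping them for summarization, might look something like the sketch below. This is not his actual script; the folder name, the filename-from-URL convention, and the batch size are all assumptions, and the LLM summarization call itself is deliberately left out.

```python
# Sketch of a "personal scripting" helper: fetch a list of PDF links,
# then batch the files so each summarization request stays small.
from pathlib import Path
from urllib.request import urlretrieve

def download_all(urls, dest="docs"):
    """Fetch each PDF into dest/ and return the local paths."""
    out = Path(dest)
    out.mkdir(exist_ok=True)
    paths = []
    for url in urls:
        local = out / url.rsplit("/", 1)[-1]  # assume filename is the last URL segment
        if not local.exists():                # skip anything already downloaded
            urlretrieve(url, local)
        paths.append(local)
    return paths

def batch(paths, size=5):
    """Group files so each LLM request covers only a handful of documents."""
    return [paths[i:i + size] for i in range(0, len(paths), size)]
```

The point isn't the code itself; it's that a few lines like this, plus one prompt per batch, turn a "wouldn't even try it" project into an afternoon task.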
Starting point is 01:16:03 Let's cut that down. I don't know. I think that's a great question. That's a great optimist question. Love it. Well, I think we can always be so negative. And I think we have three people who think about this a lot. We probably see both the positive and the negative. And there's certainly positives I can see from, like, I like the idea of a Waymo takeover.
Starting point is 01:16:23 Or not so much just Waymo, but the idea of what Waymo offers a city, and a city being designed around a certain traffic pattern that has that. But that's also, like, the old way of thinking in some ways. Like, we have always traveled by car; there are different ways. Trains are very popular in New York. Subways are very popular. Those are, I don't know the stats, but I've got to imagine, way safer than driving on New York streets, because I've been on New York streets and they're crazy, and they're always jam-packed, you know. Yeah.
Starting point is 01:16:50 But we also can't dig in every city, so you have to be practical. I do like the idea of automated driving, because I've seen some really terrible drivers. People are constantly distracted. You can see somebody, like, navigating on their phone. Or, like, I literally saw this lady: she was reading her phone, driving in and out of her own lane, going fast. What is wrong with you? You know, you've got children on the streets. You've got people who die. Last year, my kid's classmate's father passed away at a red light because somebody just jammed right through it, being dumb, right? Those are preventable deaths, and you
Starting point is 01:17:25 got a little girl, who's known to me very, very closely, without a father. And you've got to see that new reality. So I'm all for some version of that. But then you watch Leave the World Behind, right? I don't know if you've seen this movie; that's another version that might be potentially AI-bent to some degree. I'll ruin one thing for you, and if you're going to watch this movie, stop listening for just about three and a half seconds. Teslas are self-driven to become weapons, let's just say. So you've got the Waymo idea out there, but then you can weaponize this thing, if a nation state or something else takes over the system and uses it against the way it was supposed to be used, and then you're locked out of it. So you've got this autonomous system that's sort of a black box, because we've forgotten how to
Starting point is 01:18:13 code, you know, 50 years from now, whatever the number is; that's not the time of this movie. But then you have that version of it. So I'm, like, all for those things, and I lived through this one, the situation I just mentioned to you. But then, on the other hand, what do we do when somebody else gets a hold of this thing? You've got to have security down pat. You cannot have the XZs be in that world whatsoever. You have to have a totally buttoned-down system.
Starting point is 01:18:35 And maybe it's actually AI that buttons down this system. Who the heck knows? How is this optimistic? How is this optimistic at all? Well, that was my response to yours. That was not my positive; that was my response. That wasn't yours. You were just responding.
Starting point is 01:18:48 Oh, okay. Wow. All right. Well, I want to be for your positive, but then I see this other side, this other glimmer of negativity. It's like, wow, what do we do then? All right. So tell us your positive one, then. You just doomed and gloomed us.
Starting point is 01:19:00 I think, uh, what is my positive one? Waymo. Waymo in SF. That's my... Waymo? That's Luis's. That's not yours. I haven't thought about it enough yet. You go ahead, Jared. I'll think of something, I promise. Go ahead. Well, I look at it like this. There are many jobs that humans are currently doing at capacities that don't scale enough. Education is a huge one. We need more educators. We need more equipped educators.
Starting point is 01:19:30 And so the medical profession is another one, where we have doctors who are just dead tired because they're working too long, too many hours, etc., in high-pressure situations. And so I think these tools could equip educators, specifically around the drudgery of the process of educating: think grading papers, think tooling, how to become a better teacher. Oftentimes you need materials; you need ways of explaining things.
Starting point is 01:19:57 These are all ways that these tools could potentially equip people to do their job better and with less stress. And probably educate more kids per capita if they are so enabled. So I think that's exciting. I see some stuff in the medical profession, although I'm not close to it, where they're saving hours and hours of time for doctors,
Starting point is 01:20:16 specifically around medical record entry, that kind of stuff, data entry. How many folks are out there doing data entry positions still to this day that could be better equipped, right? We're not trying to replace them. We're trying to free them from the shackles of this current role and enable them to do something that's higher value. Of course, there will inevitably be some fallout from that, some displacement, which is unfortunate, but I don't think can necessarily be mitigated 100%. So people will have to get new skills, new roles, et cetera,
Starting point is 01:20:48 in order to kind of realize their potential. But the people who are currently just stressed out and working way too hard, dangerous jobs, there's a lot of very dangerous jobs where we'd rather lose a robot than a human in a certain sense. I think these are all relatively optimistic, and I think they're potentially feasible short term.
Starting point is 01:21:08 Let me add one more movie to the list because I thought of one while you were talking there and I was thinking about Prometheus. Have you all seen Prometheus? I did. I did not like Prometheus. You did not like Prometheus? Again, my algorithm with movies is I usually end up
Starting point is 01:21:23 with a general sense and then one or two criticisms; I can't remember any of the rest of the movie. And so I don't know why I don't like Prometheus. I remember the acting was bad, and the characters kept doing stuff where I was like, there's no way you would do that. It doesn't make any sense. You know, nonsensical decisions?
Starting point is 01:21:39 I can't get over them. Where I'm like, nope, no human in the real world would ever make that decision. And so I kind of wrote it off. But I know this was the prequel to Aliens? It was, yes. And so it's science fiction. It was Ridley Scott, right?
Starting point is 01:21:51 Ridley Scott, yeah. So I think I was also very pumped for it, which is why I ultimately was disappointed. Expectations management is a key skill. Yeah, now that I've crapped on it. Yeah. Well, if you liked the last minute-ish, then you should tune into the Plus Plus version of,
Starting point is 01:22:07 was it coming on Friends, right, Jared? This deep dive we did into 1999, basically. Oh, yeah, we have a bonus episode coming out soon, all about movies, yeah. Jared and I unexpectedly went deep on 1999 movies, which was an interesting year. I'll leave it at that, but changelog.com slash plus plus. It's better.
Starting point is 01:22:23 That being said, I think my positive would be kind of in line with yours, Jared, and kind of in line with what you said, Luis, which is enabling. I think there's an enabling factor that AI can provide. Think about something as simple as repairing your dishwasher or your washer and dryer, right? It's got a manual. What if an LLM was attached to that manual and you could ask it questions? What voltage does the regulator operate at? What wire needs to go where? Versus the manual being archaic and largely just inaccessible. What if you had things like that that you could just tap into in your everyday life and be enabled? Not so much DIY, but there's so
Starting point is 01:23:04 many people who could build their own backyard deck if they wanted to, but they don't, because they don't have a dad or somebody who could shepherd them through the process. What if you had something that could shepherd you through the process
Starting point is 01:23:14 to some degree, shape, or form, with a washer fix or an air filter change? Simple things in life, I think, could be leveled up just by having better access to info that isn't just, like, a Reddit thread with tons of opinion, but something that's a bit more unbiased, I suppose, that's straight to the answer. I'd like that. I would use that. Now I sort of want to ask one of the latest GPTs for its step-by-step instructions for building a deck.
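As a toy illustration of the "LLM attached to the manual" idea Adam floats above, here is a sketch of just the retrieval half: finding the paragraph of the manual most relevant to the question, which is what you would hand to a model as context. The manual text and the simple word-overlap scoring are made-up assumptions, not any real product's approach.

```python
# Toy sketch: split an appliance manual into paragraphs, score each one
# by how many words it shares with the question, and return the best
# match. A real system would pass that paragraph to an LLM as context;
# here we just print it.
def best_chunk(manual_text: str, question: str) -> str:
    chunks = [c.strip() for c in manual_text.split("\n\n") if c.strip()]
    q_words = set(question.lower().split())
    # Pick the paragraph sharing the most words with the question.
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

manual = """The drain pump runs on 120 V AC.

The inlet valve regulator operates at 12 V DC.

Replace the door gasket every five years."""

print(best_chunk(manual, "What voltage does the regulator operate at"))
# → The inlet valve regulator operates at 12 V DC.
```

Even something this crude answers "what voltage does the regulator operate at" better than paging through a PDF; the model's job would be phrasing the retrieved fact conversationally.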
Starting point is 01:23:43 Oh, yeah. Because there's going to be some awesome steps in there. It would certainly tell you... I think I've done this enough to know. It would tell you different platforms you could build on. Like, would you use 4x4 or 6x6? Various different frameworks you can leverage to make it. How long should your nails be?
Starting point is 01:24:06 You know, should they be galvanized? Is it pressure-treated lumber? They will be near water. All the things you need to know, it would tell you all those things. You'd still have to go make the decision, but that's the current state of that. It'll tell you that today. I mean, it's pretty crazy. Like, even with building stuff like a Linux box, it'll tell you
Starting point is 01:24:26 all the things about different CPUs, different RAM options. I mean, you could build boxes, you know, a Linux box, on your own with little to no knowledge, which is what I've done in the last couple years. Some on my own with lots of searches, but then, about halfway through my journey of doing that, it got enhanced with ChatGPT being there. Now I know a ton about Linux that I just never knew before, because all the information was widespread and opinion-based. It wasn't centralized in a way. It wasn't freeform and accessible to have a conversation with. I think that's the uplift, to your note, Jared, with teaching.
Starting point is 01:25:00 I think that's super awesome. I think the idea of Waymo and the idea of self-driving has promise. I just think, if we actually deploy it at scale, it needs to be locked down; it needs to be sanctioned in some way, shape, or form to have the utmost, highest security in whatever way we can. But, uh, yeah, I think from this we should come back at some point, off the mics, and write some fan fiction. That'd be cool. Off-mic fanfic. That'd be fun. Sounds good. Luis, let's close with Upstream.
Starting point is 01:25:32 Tell us about Upstream. It's June 5th, right? It's a one-day virtual event, coming up as we record about a week away, roughly, and as this ships, three or four days away. So what's it about this year, and what are you talking about? So you can find more on the website
Starting point is 01:25:47 at upstream.live. All the new TLDs, very fun. So upstream.live. And it's a one-day celebration of open source where we try to bring together both maintainers and executives, right? There's a lot of events for open source execs these days,
Starting point is 01:26:07 a lot of events for sort of community grassroots stuff. Very few that actually try to bring them together in a coherent way. So that's what we've been trying to do with Upstream for the past four years now, I think. And this year's theme is unusual solutions to the usual problems. Your listeners certainly have a grasp of what the usual problems are in open source: the XZs of the world, you know. We will all talk about XZ. Last year we had to put a ban on the XKCD Nebraska comic, because otherwise every single speaker would have used it. So many. Yeah. This year we've commissioned some new comics; you'll see some of those. I just did a great panel recording
Starting point is 01:26:45 with two Germans, one of whom runs their Sovereign Tech Fund and so works on getting federal government money to open source maintainers as an infrastructure project, which shouldn't be that unusual. I mean, in some sense, you know, highways... we've been talking about cars all this time. But for software, pretty unusual. On the flip side, government regulation; we'll be talking some about that. Again, for a lot of the world, a lot of industries, that's not unusual,
Starting point is 01:27:17 but for software, regulation is a pretty unusual solution to the safety problem. So we talk about that. We have a maintainer panel. We'll be talking with execs from a couple of big companies. I'll also be interviewing a professor from Harvard Business School about the value of open source. All online, streamed live for the first time, with live chat. So I, and a lot of the other speakers, will be in chat. So you can ask us during our prerecorded talks what we think of things, ask follow-up questions.
Starting point is 01:27:49 And then we'll make it available in the few days after that from upstream.live, if you missed it next week. Big fan of the new TLD, upstream.live. I think I got a preview of one of these comics that you mentioned. Is it by Force Brazil? These are the commissioned ones that you're talking about? Saw that from Chris Grams, friend of ours, also at Tidelift. I'll link it up in the show notes, I suppose. But it's an OSS maintainer, an open source maintainer, on an island saying, please help.
Starting point is 01:28:17 And all that happens is a plane comes by and just drops a bunch of issues on their head, which is not exactly the help they were looking for. And the plane has a banner that says, we love OSS. Oh, that's true. I should mention that. Oh,
Starting point is 01:28:29 we love OSS issues. That's adding insult to injury. And actually it's a, it has corporation on the plane too. I'm looking into the details. Oh yeah. There's some, I mean,
Starting point is 01:28:38 you know, it's so hard not to. We've always tried, at Tidelift, and we try at our events... I mean, these can't be complaint fests, right? If you do that, it's no fun for anybody. So we try to make them, as we've been trying to do, Adam, positive, constructive. Yeah. But boy, some days you just want to be like, come on, let's be positive, get on board. Because there's just so much... How did we get to XZ? It's like, well, we've been telling you for years that these people are going to burn out. And then they did. And you're like, oh no, the horrors. Like,
Starting point is 01:29:21 well, you know, maybe we should try to do something about that collectively. And it's a real collective action problem for the industry. That's part of how I'll be talking about it in my opening talk at Upstream: this collective action problem that we have. We'll link up the post you wrote, Pay Maintainers, the how-to, because we got compared. I think it was our Adam Jacob conversation, Jared, that got compared to this. I think some of us were right and some of us weren't wrong. I don't know, we were just talking on a podcast, obviously. I mean, I love Adam, so I guess I gotta go back and listen to that one.
Starting point is 01:29:55 I think it was that one where we got some comments comparing the sentiment in that conversation to what you wrote, and how we were not in line with the same thing, basically. I can't recall which; they might have been on Slack. Do you recall this, Jared? The sentiment? No? Okay, I could be hallucinating, honestly. It can be, you know, a human version of hallucination at this point. Humans also hallucinate. Yeah, we misremember, like, oh, that wasn't actually an Adam Jacob conversation. Well, I mean, you know, the thing was, for those that haven't read it yet, my post was simply... I was going to ask you to summarize it if you could. Just give us a TLDR. Yeah, in the wake of XZ, some people are like, well, we tried to pay maintainers and it didn't work.
Starting point is 01:30:36 And it's like, well, I mean, so we wrote up, because we've never actually written up before, how is it that we pay maintainers, right? In fairly good detail. And it works, right? We pay out quite a bit of money every month to maintainers from our corporate customers to work on things that our corporate customers use. That said, there are different approaches to paying people. There are different types of communities, right? Paying a solo maintainer is very different from paying the Kubernetes project, right? Like that's a very different beast.
Starting point is 01:31:08 And I mean, this is just one of these things that is recurring. I'm sure this must come up on the podcast all the time: we tend to talk about open source as if it's one thing, when in fact, at this point, open source is so successful that it is many different things. But it's easier for us to talk about it if it's just one thing. And so we often make mistakes of like, well, it's impossible to pay open source maintainers because I tried this one form of payment to one set of maintainers. It's like, well, yeah, no, that one doesn't work. And so I don't know. I'm curious where Adam's head comes out on it. You know, I mean, it's not magic. The blog post is about how to pay maintainers.
Starting point is 01:31:45 It does not claim that this is therefore a magic wand and that these projects will always be secure for the rest of time. Right. For sure. People will still burn out. People will still have challenges. But we think we've got at least part of the solution at Tidelift. Well, I think one thing that was revealing, and we've known of Tidelift and have been
Starting point is 01:32:04 adjacent for many years and worked together in some cases over the years. We've had you on various podcasts. We've had your CEO on our podcast before. And we've talked to Jordan Harband before on podcasts, but last year we actually met him face to face. At least I did; I don't know if that was the first time you met him, Jared, but it was last year at All Things Open. And he could not stop singing the praises of Tidelift for him as a maintainer. And so I think what you all could do better, or more of, I don't know how well you do this because I'm not in every single thread you're in, but what he had done, to me, reshaped... I already knew what Tidelift was. I already knew what your mission was. But there was a cementing of it: a boots-on-the-ground individual that we respect and have talked to, who's doing the work, right? And he's like, you know, I get various forms of payment, but I love the way Tidelift helps me. One of my biggest streams of revenue is from Tidelift. I think it was on a podcast too, so it's already in transcript form. But that
Starting point is 01:33:05 changed my perspective on Tidelift, even though I knew who you were already, even though I had respect for you and everyone else who's involved in Tidelift. It changed that perspective because you saw people with boots on the ground who have teetered on the line of burnout, and shared how. And obviously we do not want people to burn out. Back to what you said before, Jared, I think it's an enabler, where you sort of force-multiply somebody who's got too much on their plate, and artificial intelligence might be able to help them, you know, take something from an hour to 10 minutes, that kind of thing. Or, in the case of Jordan, having an organization have his back to let him do what he does best, which is be inventive in open source, and not be bogged down by the minutia, and literally get paid to do it, because he's not going to stop.
Starting point is 01:33:50 He wants to keep doing this common good for the world. But if he can't sustain his life and his family, then it's not going to happen. And so we have to find ways to make that happen. Money is obviously one of the biggest ways to financially sustain somebody, because that's what it's called: financial sustainability. It literally is money. But he could not stop singing your praises, and I was so proud of you all for that. But then also it reignited, I guess, a curiosity from my standpoint on what Tidelift is and what you're doing for the world. Well, we'll have Jordan and several other maintainers on a panel at Upstream. So if any of your listeners are interested in hearing more about that and how we work with maintainers, that will definitely be a topic there. Though it's mostly not a pitch for us.
Starting point is 01:34:33 It'll be more, I think the official title is State of the Maintainers. So you'll hear, I suspect, about things like what these folks think their risk is of becoming the next XZ or the next Log4j. You know, like you say, Adam, this is one of the things: when you talk to somebody like Jordan or one of our other maintainers who we partner with, there's a lot of joy and love for what we do. But of course, the people who write the checks are often at Linux Foundation events. They're talking with other execs. They're talking with the leaders of Kubernetes. And that's not a bad thing, but it is a challenge for us that these folks in the middle, who are numerically the... I was at a Linux Foundation event a couple years ago and somebody said, yeah, I'm a maintainer of a small project, there's only 15 of us. I'm like, you are so in the fat head of, you know,
Starting point is 01:35:27 the long tail of maintainers is one-maintainer projects with an occasional patch. And that's not necessarily a good thing, but it is our reality right now in open source, and getting folks to acknowledge and grapple with that has been an uphill slog for us at Tidelift. It's great to hear positive words from you. It's always good for me to talk to Jordan. I saw him just a couple weeks ago at RSA. We'll be tuning in for sure.
Starting point is 01:35:56 The episode I was mentioning was episode 563, lovingly called The Way of Open Source. It was an anthology that we did at All Things Open. It included Matthew Sanabria, ex-engineer at HashiCorp; Nithya Ruff, I believe head of the open source programs office at Amazon; and then, obviously, I mentioned Jordan Harband.
Starting point is 01:36:18 So he was there representing the open source maintainer at large, with dependencies in most JavaScript applications out there. So obviously somebody who's got, like, three different angles into the way of open source. I think we captured that pretty well. So we'll link that up in the show notes. And if you haven't listened, Luis, you should check it out. And Nithya is always worth listening to.
Starting point is 01:36:37 So yeah. For sure. Good stuff. Upstream.live next week. We'll be tuning in. Hopefully our listeners check it out as well. Luis, it's always a blast, whether you're telling us what's happening or prognosticating on what might happen next or might not happen. It's always fun for me to talk with you.
Starting point is 01:36:56 Yeah, it's always fun for me to talk with you too. By the time we talk next, I suspect we'll have a lot of actual case outcomes. We're still in this very early phase for some of these things. And there will, of course, always be new news from open source software security land. Yeah, I was going to ask you about the GitHub co-pilot litigation, but it looks like it's just kind of ongoing.
Starting point is 01:37:16 Like there's nothing to talk about there. Yeah, it's still early days. Yeah, I mean, there's some stuff to talk about, but we'll know a lot more in coming months, I suspect. Awesome. We'll have you back
Starting point is 01:37:28 in six to eight months and talk about what's changed since now. Sounds like a plan. We'll do Happy New Year 2025. I believe that's
Starting point is 01:37:36 coming already. Oh my gosh. Awesome. 2025. All right. The year of the Linux desktop, and/or AI. All right.
Starting point is 01:37:45 Bye, friends. Thanks to everybody who gave us feedback on that new theme song. It's quite the departure from our regular fare, but lots of folks enjoyed it, so we'll be working it in here and there. Oh, and that check one two, money money stab at the top? Check check. That was just a bit of throwaway audio from our recording session with Shonda Person last week that Adam gave to BMC and said, what can you do with this? Not bad, right? Big
Starting point is 01:38:18 thanks to BMC for that and all the music that you hear on our pods. And thank you, of course, to our partners at Fly.io and our friends at Sentry. Don't forget to use code CHANGELOG when you sign up for a Sentry team plan. 100 bucks off. Too easy. Next week on the Changelog: News on Monday, part 2 of our Microsoft is all in on AI miniseries on Wednesday, and our Pound Define game show returns right here on Changelog and Friends on Friday.
Starting point is 01:38:48 Have yourself a great weekend, tell your friends about Changelog if you dig our work, and let's talk again real soon.