The Changelog: Software Development, Open Source - Agent, take the wheel (Interview)

Episode Date: July 2, 2025

Thorsten Ball returned to Sourcegraph to work on Amp because he believes being able to talk to an alien intelligence that edits your code changes everything. On this episode, Thorsten joins us to discuss exactly how coding agents work, recent advancements in AI tooling, Amp's uniqueness in a sea of competitors, the divide between believers and skeptics, and more.

Transcript
Starting point is 00:00:00 Hello friends, I'm Jared and you are listening to the Changelog, where each week we interview the hackers, the leaders, and the innovators of the software world to pick their brains, to learn from their failures, to get inspired by their accomplishments, and to have a lot of fun along the way. Thorsten Ball returned to Sourcegraph to work on AMP, their agentic coding tool, because he believes being able to talk to an alien intelligence that edits your code
Starting point is 00:00:37 changes everything. On this episode, Thorsten joins us to discuss exactly how coding agents work, recent advancements in AI tooling, AMP's uniqueness in a sea of competitors, the divide between AI believers and skeptics, and a whole lot more. But first, a big thank you to our partners at fly.io, the public cloud built for developers who love to ship. We love Fly. You might too. Learn more at fly.io. Okay, Thorsten Ball on the Changelog.
Starting point is 00:01:10 Let's do it. Let's go. Well friends, it's all about faster builds, teams with faster builds, ship faster, and win over the competition. It's just science. And I'm here with Kyle Galbraith, co-founder and CEO of Depot.
Starting point is 00:01:30 Okay, so Kyle, based on the premise that most teams want faster builds, that's probably a truth. If they're using a CI provider with their stock configuration, or GitHub Actions, are they wrong? Are they not getting the fastest builds possible? I would take it a step further and say, if you're using any CI provider with just
Starting point is 00:01:47 the basic things that they give you, which, if you think about a CI provider, is in essence a lowest common denominator, generic VM. And then you're left to your own devices to essentially configure that VM and configure your build pipeline, effectively pushing down to you, the developer, the responsibility of optimizing and making those builds fast. Making them fast, making them secure, making them cost effective, like all pushed down to you. The problem with modern day CI providers is there's still a set of features and a set of capabilities that a CI provider could give a developer that makes their builds more performant out of the box, makes the builds more cost effective out of the box,
Starting point is 00:02:31 and more secure out of the box. I think a lot of folks adopt GitHub Actions for its ease of implementation and being close to where their source code already lives inside of GitHub. And they do care about build performance and they do put in the work to optimize those builds. But fundamentally, CI providers today don't prioritize performance. Performance is not a top level entity inside of generic CI providers. Yes. Okay, friends, save your time, get faster builds with Depot: Docker builds, faster GitHub Actions runners, and distributed remote caching for Bazel, Go, Gradle, Turborepo, and more.
Starting point is 00:03:07 Depot is on a mission to give you back your dev time and help you get faster build times with a one line code change. Learn more at depot.dev. Get started with a seven-day free trial. No credit card required. Again, depot.dev. Today we're joined by Thorsten Ball from Sourcegraph, working on AMP. Excited to dig into this with you. Hi, nice to be here with you guys.
Starting point is 00:03:45 Thanks for having me. I was very impressed by your blog post back in April on ampcode.com, "How to Build an Agent, or: The Emperor Has No Clothes," in which you walk us through, kind of line by line, a pretty basic but functional coding agent written in Go, and it really did a good job
Starting point is 00:04:07 of demystifying it for myself. Can you talk us through some of that, your motivation for writing that blog post, and then maybe just help our listeners as well as you did for myself, understand just how easy or I guess basic it is to get a working agent in your terminal? Yeah, the reason why I wrote the blog post is I had my mind blown so much that I couldn't
Starting point is 00:04:31 shut up about it and I had to get it out there. And the blog post ended up resonating with a lot of people. That's the most likes I've ever had on a tweet, I think, and the most visits surely. But it started out as an internal blog post that I wrote for the rest of the team here at Sourcegraph. And before that, what happened was that Quinn, Sourcegraph's CEO, and I started hacking on what is now known as AMP. And we basically started with the realization that came up while experimenting with,
Starting point is 00:05:06 you know, the models back then it was Claude three, seven, uh, sonnet three, seven, the realization that, wow, like the game has changed. You don't need a lot anymore for, to make these models work, to get them to edit code. And I can go into what previously would you would have to have, but you just give them these tools and they go off. And I've had this, I had this moment, I wish I, I don't know, we could edit a screenshot and I send a text message just long to a friend of mine. And the text message was basically,
Starting point is 00:05:39 man, I think I just felt the AGI. I was in San Francisco at that point, so you have to talk like someone in San Francisco. I felt the AGI. I was in San Francisco at that point, so you have to talk like someone in San Francisco. I felt the AGI. Yeah. And I felt the AGI because what I had running was a super tiny prototype. It was Cloud37 and I gave it a read file tool so it could access files. I gave it a list directory tool and a run terminal command, the tool.
Starting point is 00:06:07 So it can run bash commands. And I was playing around with it and I was like, oh, you know, it goes through the directories, it reads the files. That's crazy. And then, while testing, I said, like, can you change this file to something? Something, I can't remember. And suddenly the program stopped, it hung. And I was like, where's the loop?
Starting point is 00:06:29 Why does it hang? What's going on? And then suddenly I saw in my editor show up, file modified. And I'm like, I didn't give it a tool to modify files. I didn't give it an edit file tool. What? And then I looked at the transcript. And what it did on its own, like with a system prompt this big and like three tools, it wrote an echo command that echoed the contents of the
Starting point is 00:06:57 file, including the modifications and redirected it over the file. So it figured out that me, the user, wants the agent to edit a file, but it doesn't have an edit file tool. So it resorted to running terminal commands and echoing the contents and overriding the file. And I was sitting there thinking, there's engineers that I could give this challenge to. You know, like you don't have the ability to edit files. You can only list directories and read files, and you can only run shell commands. How do you make this edit? And that model figured it out.
Starting point is 00:07:30 And I was sitting there, mind blown, how this is crazy. This is changing everything. This is nuts when you see this truly happening and how little code it is. So to spread the message inside of Sourcegraph, I wrote this blog post about how to build an agent, which is basically a modification of what I just described to you,
Starting point is 00:07:51 like Cloud 37, three, four tools, and then off it goes. And pretty well received. And then still, like, I saw more and more people talking online about how to build these agents or working with agents and a lot of stuff about what's agentic and whatnot. I got so anxious, restless, nervous about, guys, you really need to see this. Friends of mine who are AI skeptics or were skeptical about AI, they didn't know or they didn't really know what an agent was and have not seen how powerful these models
Starting point is 00:08:26 are. And I'm like, I got to get this out there. So I wrote it all up basically in one go. And you know, here's what you need. And it's only 300 lines of code and it was pretty well received. And the amount of people that still tag me and say like, hey, I wrote an agent in Python or in whatever it is with this model based on this blog post. And oh my God, I had a mind blown moment. And I don't know, I think it's one of the most well received things I put on the internet in the last 15 years or something.
Starting point is 00:08:58 Yeah, it was really, really nice, really nice response to it. And I think the nicest one was somebody, you know, basically was saying that this is the first non-hype thing that makes this approachable and tangible. You know, it's not this like magical thing and here, go watch 18 Karpathy videos and learn about neural networks and all of this. And you know, it was like, write some code, you roughly know how an LLM works. Tool calling is not that fancy of a thing. Just type this out and look at it yourself and play around with it and you will have, you know, a light bulb moment.
Starting point is 00:09:44 And they were saying, you know, like, it feels like you're democratizing this, like making it more accessible to others. And that made me happy to hear, really happy. You said the word, or at least the acronym, AGI, implying, you know, general intelligence, artificial general intelligence. What makes you feel like that brute force nature was AGI? Or you felt the AGI? I mean, I was half kidding, right?
Starting point is 00:10:12 Like just making fun of this. I'm not saying that. I was just checking your literalness there on that. Yeah, no, it's, it's, I mean, you know, half kidding, because I mean, we could talk about what does it even mean to be intelligent and whatnot, right? But what I said is, look like you have like this model that, you know, can have tools and then you give it a problem.
Starting point is 00:10:40 For example, I can build it as today. Or I mean, that's what all of these agents can do. You start it on your Linux server, and then you say, restart my nginx instance. And I'm pretty sure what it will do is, or at least, you know, sonnet four, clad three seven sonnet, is it will check like,
Starting point is 00:11:00 syscontrol nginx restart, does that exist? Or et cetera, nidd nginx, does that exist? And et cetera, in a D engine X, does that exist? And if it fails and it doesn't get a response back, it will look at the error messages and then we'll try a different thing. And then we'll, I've seen this happen. It will say, wait, is there an engine X running? Let me do a PS, grab engine X.
Starting point is 00:11:20 Oh, what's the PID of this process? And then it will look in the proc directory with that PID and figure out from the exe file, like, what is it, /proc/<pid>/exe or something. Like, what's the binary location? And then based on that, figure this out. And it will do this with only the prompt restart my nginx. That's it.
Starting point is 00:11:40 And we could talk about what AGI is or what it isn't, but I don't have another word to describe this as to say, it did something smart here. Like it looked at it, what it's doing, and it looked at the feedback, it got back from what it's doing, and it acted on that feedback and tried to achieve this goal. And that's not, you know, it's not AGI, like it's not, it's, who knows, but it's still like, the ability to.
Starting point is 00:12:09 Yeah, it looks like, yeah, you know. What do the transcript describe? I know that you alluded to this transcript, I've never read one of these transcripts. What is, can you describe the transcript? What is it, what details are in there? Can you allude to like the thinking part of this? Oh, you mean the transcripts of like...
Starting point is 00:12:25 Right. You said you saw it do this, you're like, how did this happen? You look at the transcript, when that transcript was revealed to you, what did you see? I mean, the transcript is just a conversation. So every time you talk to an LLM, at the basic level, you send text in and you get a completion back. If you say, what colors are in the flag of the US, then you say blue, red, and then it will come back and complete with white. They're trained on completing conversations between a user and an assistant.
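Concretely, that conversation is just a list of role-tagged messages that grows with every turn and is re-sent in full on each request. The shape below is illustrative rather than any specific provider SDK's exact format:

```python
# The "transcript": every turn so far, as role-tagged messages.
transcript = [
    {"role": "user", "content": "Hello, my name is Bob."},
    {"role": "assistant", "content": "Hi, my name is Joe."},
    {"role": "user", "content": "What's my name again?"},
]

# A completion request sends the whole list; the model's reply is appended
# before the next turn, which is how it "remembers" your name.
transcript.append({"role": "assistant", "content": "Your name is Bob."})
print(len(transcript), transcript[-1]["content"])
```

There is no hidden state on the model's side between requests; the transcript itself is the memory.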
Starting point is 00:13:00 If the user says, hello, my name is Bob, and the assistant says, my name is Joe, and then you want it to complete, what's my name again? Then it comes back, based on this, with your name is Bob, or whatever I just said. And I mean, that's it. Like, that's the transcript for me. That's the conversation. And, you know, the funny thing is that with tool calling, you add another element to this. So I described this in the article, that tool calling sounds super fancy,
Starting point is 00:13:30 it sounds like there's a lot of stuff going on, but it's, in some sense, the way I describe it in a blog post is, you're having a conversation with a friend, and you say, hey Adam, I'm gonna talk to you, and in the following conversation, if you want me to raise my arm, I'm gonna talk to you and in the following conversation, if you want me to raise my arm, I just need you to wink, right? And then you wink and then I raise my arm.
Starting point is 00:13:51 And tool calling, yeah, it's, you know, a weird conversation starter, right? Like, you don't get people excited. But with tool calling, you basically start the conversation with the LLM and you say, in the following conversation, when you feel the need to, say, read a file or list files or, what else, run a terminal command, respond in this specific way. Respond with a message that starts with tool call, name: read file tool, you know, like
Starting point is 00:14:24 in a specific syntax. And they've trained on this. So when the model thinks in quotes, air quotes for everybody listening, if it thinks it needs to call a tool, it will respond in a specific way. And that's it, that's the whole magic trick. So what you say to the model is, you are a coding assistant.
Starting point is 00:14:47 You have access to the following three tools: read file, list directory, run terminal command. Here's the conversation with the user. And then the user says, what's in the readme file? And then the model thinks, I'm going to wink. You're still using air quotes. Yeah, sorry. I'm air quoting everything. But then the model comes back and says, let me read
Starting point is 00:15:08 that file, like that's the thing that I want to do. And then how it works on a practical level is that you send that up to the provider, to Anthropic, to, you know, Google, OpenAI, and the response comes back and it says, uh, the assistant or the model didn't complete the text, it wants to call a tool. And then you look at what specific tool it wants to call and then you, air quotes, execute the tool by just running that function with the given parameters and you send the result back up.
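That round trip can be shown end to end in a small, self-contained sketch. The model here is faked with a stub (a real implementation would call Anthropic, OpenAI, or Google at that point), and the message and tool-call shapes are illustrative rather than any provider's exact wire format:

```python
import pathlib

# The harness's side of the deal: actual Python functions the model can ask for.
TOOLS = {
    "list_directory": lambda path=".": "\n".join(
        sorted(p.name for p in pathlib.Path(path).iterdir())
    ),
}

def fake_model(messages):
    # First turn: the model "winks" by returning a tool call instead of text.
    if messages[-1]["role"] == "user":
        return {"type": "tool_call", "name": "list_directory", "input": {}}
    # Second turn: having seen the tool result, it answers in plain text.
    return {"type": "text", "text": "This looks like a Python project."}

messages = [{"role": "user", "content": "What kind of project is this?"}]
reply = fake_model(messages)
while reply["type"] == "tool_call":
    output = TOOLS[reply["name"]](**reply["input"])  # execute the tool locally
    messages.append({"role": "tool_result", "content": output})
    reply = fake_model(messages)
print(reply["text"])
```

The model never touches the machine; the harness executes the tool and the result goes back up as just another message.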
Starting point is 00:15:40 So it's pretty simple. If you draw it out on a UML diagram or something, it's pretty simple. And the magic is in how much is enabled through that. So if we go back to the first example: you have three tools, list files, read file, run terminal command. And you say, what kind of project is this?
Starting point is 00:16:03 That's what you ask. And then the model, just like, you know, that's what I keep saying, just like us, just like us. What it will do is, well, let me list the files. Let me see what's in this directory. And then you execute the list files thing and you send back up the list files results, which can just be, you know, a list of strings or a string with newlines in it, right? Or just ls -l or something. And it has this list of files and then on its own, it will say, oh, I see, you have a go.mod file, or I see you have a package.json, or I see you
Starting point is 00:16:35 have a pnpm lockfile. I'm assuming this is a web app because of blah, blah, blah. Let me check in this other file how you define, or what's in this file, how this is documented. And then it goes on its own and explores these other files. And again, I've said this 18 times now, I think, in the last 20 minutes: it's mind blowing. It's truly crazy. It's crazy to see how much these tiny, tiny tools enable. What's interesting about it is that
Starting point is 00:17:06 it's a very basic algorithm, right? It's like, loop until you have a solution. And really that's kind of what we do as human engineers, or we give up, and that's the difference. It's like, this thing's not gonna give up. So it very much is a brute force. But if you come to me with a problem and you say, Jared, I gotta solve this thing,
Starting point is 00:17:26 and it'll read from a file, I'm gonna like pick my most obvious known solution, I'm gonna try that. If that doesn't work, I'm gonna get another idea. I'm gonna try that. If that doesn't work, until I've exhausted all of my ideas, and then what? Then I go ask a friend, or I go out to Stack Overflow,
Starting point is 00:17:42 or now I go to an LLM, and get more ideas. Like, okay, I need more ways of doing it until I eventually get there. And one proxy for like good programmer in the past 50 years has been, how long will you persist through that process until you get to the solution? Like some people just give up.
Starting point is 00:17:57 I'm stuck, roadblock, whatever, I'm done. And there's other people who actually power through, and then they learn, because they've had the experience, to like jump straight to the right one sometimes. Like you can just, you know, early exit from your loop. And it's amazing how that simple algorithm, which is like try a thing, loop until it works, when brute-forced with something that is inexhaustible,
Starting point is 00:18:21 like I'm just gonna loop, I'm gonna try things really fast and just keep looping until my problem is solved. It approximates to like human intellect, doesn't it? Like that's what we're doing in different ways. And so it's very effective too. And that's why it's mind blowing because you're like, oh, try this.
Starting point is 00:18:40 And because it has a corpus of all these ideas because it's already ingested them, right? So it has all these different ways of doing it, ways that maybe I wouldn't have thought of. And it didn't think of them either, it just indexed them and has access to them. And the end result is very impressive and very productive. And yeah, I think mind blowing is fair.
Starting point is 00:19:00 On that, you know, the simplicity of this algorithm: what we've kept saying the last few months is, what we've seen over, say, the past year is that a lot of tooling or a lot of stuff that has been built around this model has collapsed into the model. Meaning, say a year ago, they weren't that good at tool calling. So what you would do is you would say, here's
Starting point is 00:19:25 the contents of this file. Can you edit this file? And it would come back and you would prod it and you would say, reply in this really specific way, reply in this diff format. And then you parse out that diff format and then you apply this or you use another model to apply this. And this has collapsed into the model because now you can give them tools and they do this on their own. And it's truly like just a for loop. And the funny thing is, like if, I don't know, if you ask hundred engineers, half of them would say it's just a for loop.
Starting point is 00:19:56 And the others would say with a smile on their face, it's just a for loop. Like this is crazy. Like you just, right. It's all in this model. You just give it output of five commands and then say, what should I do next? And it goes and tries 15 other things because it know, you know, like based on the previous conversation it then thinks the next best step is to do the following. And it's again, I'm not going to use the same word again.
Starting point is 00:20:21 I'm going to say it's nuts. It's bananas. My mental model for that, which is not a super complicated thing to think about, but I compare it, tool calling specifically, I compare it to like shelling out of a programming language. It's like, you know, Elixir has all these things you can do in it, but when it comes time to tag an MP3 with ID3 tags, well, Elixir can't go there
Starting point is 00:20:46 unless you build it. But you could also just call FFmpeg, right? Now you're just tool calling and wait for that to do its thing and then hand back to what you need. And we've been doing that forever in programming languages, right? You just shell out, wait for the response,
Starting point is 00:21:01 and then move on. And now you can do all kinds of things you couldn't do otherwise. And really that's what this tool calling is doing with these agents: like, yeah, it doesn't know how to do these things, but you tell it how it can do those things. You tell it to wink when it needs to.
Starting point is 00:21:14 And then it waits for something else to do it. And it can just tie, if you tie those things together in a tight loop, you know, magic happens. And I mean, that's how they've been trained, right? So a year ago, one of the big topics was hallucinations, right? And that's because, to use your analogy here,
Starting point is 00:21:35 the model was only inside the programming language. It couldn't shell out. It only knew what was in its standard library. It's gonna break down the analogy, but it, you know, it didn't have- All metaphors break down eventually. So really quick. It didn't know that there's a world outside in some sense. So it would, if you would ask it, like, what's in this directory? I mean, we've all tried this, you know, people tried this and without telling it what's in the directory, it will come up
Starting point is 00:22:05 with something. It will then say, in this directory there's probably a readme file, because in that whatever context, that's the most likely thing. But now they've trained them to use these tools and then they shell out. It's like, I don't know what's in the directory. Let me run list files or, you know, ls -l as a bash command or something. Yeah. It's nice. So this has unlocked a huge opportunity, which of course Sourcegraph is trying to jump on, and other people are. We were talking before we started recording,
Starting point is 00:22:33 Google just got into the game. We know OpenAI is in the game. We know that Anthropic is in the game. There are open source players of this game because there's huge value here. There's lots of opportunity. And so you are one of the creators, inside of Sourcegraph, of AMP.
Starting point is 00:22:52 We talked with Steve Yegge a few weeks back now and he's like saying, just try Codex, OpenAI Codex, try AMP, try Claude Code, and, you know, mix and match, and these have different things. And that's when Adam was like, we want to talk about AMP and learn more about AMP specifically. Of course, all of us probably want to learn more
Starting point is 00:23:13 about Gemini CLI, just announced today. I think you were playing with it before we hopped on. I know I downloaded it and it has some interesting stuff. I mean, Google's going to be a good player in any game. So Gemini CLI, free of charge, open source, unmatched usage limits, it looks pretty good. So curious eventually at your opinions on that, but let's talk about AMP.
Starting point is 00:23:35 What's Sourcegraph's angle, its view of the world? Steve kept saying it's for enterprises. But I wonder your thoughts. My thought is that AMP was built in February, which seems like an eternity ago, when basically this phase shift happened where suddenly with Cloud 3.7, people started to realize that these models are really good at tool calling and that you can quickly get something running, hence the blog post. quickly get something running, hence the blog post.
Starting point is 00:24:10 And AMP is built on the assumption that the results are amazing if you just get out of the way of the model. Give the model tokens, which is, you know, what we do. And that's why AMP is also more expensive than maybe other providers. But give the tool more tokens, and don't try to fit it into a $20-a-month subscription by restricting stuff and cutting output. Just let the model run and give it access to tools, a curated set of tools, a set of tools that you think is good for doing coding. And just get out of its way and give power to the model.
Starting point is 00:24:47 We started working on this and were amazed by how well it works. Quinn and I quickly started building AMP with AMP, and just all day long sending each other messages: this is amazing, it just did this, it just did that. Now, nobody's going to believe me, but we actually started working on this before Claude Code came out. And then Claude Code came out.
Starting point is 00:25:09 And I think it's still AMP and Claude Code that are the most agentic of these tools. I think Cursor and Windsurf, great products, but I think their agentic mode feels a little bit slower, feels a little bit more like there's some sort of abstraction between you and the model and there's other stuff going on. And we specifically made a distinction, or the decision, to say, no, no, forget about accepting each change, forget about not giving the model real access to the file system, forget about, you know, being able to modify the previous conversation of this. No, no. It's give the model the transcript, like the whole conversation, give it access to the tools, give it access to the file system, and let it go and let it run. And I think
Starting point is 00:25:58 that's what people are now discovering. This is powerful stuff. And is it for the enterprise? Is it for individual devs? I'm an individual dev. I love using it. I know a lot of other individual devs love using it. When it comes to enterprise, I think it's just our expertise at Sourcegraph of working with large scale customers and some of the best software companies in the world gives us customer trust.
Starting point is 00:26:23 It gives us the ability to build something for their needs. We know what their code bases look like. We've seen how many thousands of large repos they have. So that plays into it. But it's not, you know, I wouldn't market it as, well, Claude Code is for the individual dev and AMP is for the enterprise. To me, AMP is for everybody, everybody who wants a powerful tool. And of course, you know, it sounds ridiculous now, because we're in times where individual devs spend hundreds of dollars a month to use these tools.
Starting point is 00:26:57 And you two have also been around a while. You know how crazy that is, that even two years ago, if I would have said to you an individual dev, for their side project, will spend 50 bucks on a weekend just to, you know, blow some tokens and ship stuff, that sounded crazy. And I think we've accepted this change, and that this is now how you are productive and how this stuff works. And if you were to say, well, the individual cannot afford this, and it's costing maximum five bucks, and you get only that many tokens and whatnot, or requests, that's not what AMP is. If you want the best agent and you want to put some money in the agent and let it rip and let it go, that's what AMP is for. And the other thing is, coming down
Starting point is 00:27:47 from the level of product principles or product vision to a purely practical level: we are a CLI application, like AMP is in the CLI. We have a VS Code extension, which works in Cursor, Windsurf, and VS Code, obviously, and VSCodium. And it even works in the, what's it called, the Firebase one, the web-based VS Code version. So you can use it in all of those.
Starting point is 00:28:11 You don't have to use a different editor. You don't have to use a different IDE. And what's also different from the others is that we have a server component, so all of your conversations you can share with your team. They can see how you talk to the agent. You can share links to these conversations. You have a leaderboard.
Starting point is 00:28:28 You see how many tokens everybody burns and you see how many lines of code everybody generates. And that's been pretty nice. And that also resonates a lot with large enterprises where I'm sure you can imagine, well, maybe not. I was surprised when I heard this, but apparently in large enterprises, there's a big divide between people who've seen what these tools can do and want to encourage the rest of the engineering
Starting point is 00:28:54 org to use these tools and people who are really skeptical. There's a big divide. And there's a big divide in how successful each of them is with these tools. And when we show these customers or potential customers, we show them, look like with AMP, you can share the threads and you can share the prompts. You can see what the results are.
Starting point is 00:29:12 They go, perfect. Then I can send this around and can show others: this is how I would prompt it. This is the trick that I use. This is how I set up this feedback loop or something. So yeah, that's roughly the overview. And the other meta thing to mention here is that we specifically started on AMP with the assumption that
Starting point is 00:29:35 every week a model might get stronger and better and stuff might collapse into it again, and we need to be prepared for changing our product again. Like, if you get Beyang on, the CTO of Sourcegraph, he said something that stuck with me a couple months back. He said, in these times when you build with AI, the old startup playbook of try stuff out, find product market fit, scale it up,
Starting point is 00:30:04 that playbook has worked for the last 15, 20 years, but maybe that's over. Because now, as soon as you find product market fit, there's another huge technological change that might pull the rug out from under you. You need to be prepared, because you cannot say, we found this, let's scale this up. You need to be able to move with the technology, because we're in a phase of upheaval, a phase of change. And we try to embrace this from the start by saying our product gets out of the way of the model. The picture I use is: build light scaffolding, wooden scaffolding, around the model. So when the model gets better and bigger and stronger, the scaffolding falls away and you again get access to the raw power of this model.
Starting point is 00:30:50 And yeah, that's the meta thing. Keep it simple, be able to move fast, move as fast as you want, be able to, that's, I should have mentioned it at the start. We don't have a model selector. We pick the best model for the job that we think is the best model for the job right now, and we are prepared to change this. So if tomorrow a new model comes out, we want to be able to say this is now the best model for coding. But we want to provide the best experience without the user having to select
Starting point is 00:31:22 out of these premium models, low-cost models, fast models, select one of 18 for this given task. Now you need to activate ask mode and then go into planning mode and then go into some other mode. We want to say, no, this is the best way to do this and you don't have to worry about this. We pick the best model for you. And right now, under the hood, it's a mixture of different providers, models, and we just want to provide the best experience.
Starting point is 00:31:45 Yeah, when we talked to Steve, one thing that stuck out, I suppose, was the copy, the web copy, when I say this, the word copy, on ampcode.com. And I just can't believe some of the words that were written here, and this is Quinn, apparently, because I asked Beyang who wrote this, and he said it was Quinn.
Starting point is 00:32:05 So there you go. It was me. It was you. Was it you? Yeah, it was me. You wrote all this. Okay, so I'm gonna read your words then. Now you're talking to the right guy.
Starting point is 00:32:12 Well, Beyang credited Quinn and you're crediting yourself. Let me read the words, confirm this for you. Awkward, but it's true. Awkward, but true. I believe you. Okay. It says, and this is older
Starting point is 00:32:24 because the website has since changed. I had to go use the Wayback Machine. Thankfully that still exists. And I'm able to actually see back to like last month or earlier this month, something like that. So the heading says, everything is changing. Then it says, we believe programming with AI is going through massive changes. Dot, dot, dot again.
Starting point is 00:32:42 The models yearn for the tools and tokens. We hold them back if we make them, and this is kind of harkening to some of the things you just said, we hold them back if we make them ask before they can change a file. Give them tools and tokens and everything changes. What we use them for, how we use them, how many we run at the same time,
Starting point is 00:33:02 how they talk to each other, how they talk to you, what they even are. It's all going to change. And so I'm not going to read the whole page, but it's really good copy, for one. So very profound. And then you mentioned how you and Quinn had been working on this and kind of chatting back and forth. Like, this is what I did today. And he says that, and you say that. What exactly are you doing with this thing to like build this thing?
Starting point is 00:33:33 Like what is, give us a glimpse into what it's like to have this be true for you and put this to work. Yeah, so first of all, thank you for the compliment on the copy. This is an amazing copy. A dramatic reading of my copy that I made. Yeah, does that feel good? It was really good.
Starting point is 00:33:52 I mean, the reading as well as the copy. Yeah, the reading was excellent. Can I read the rest of it? Actually, I think there's only a few more lines. Go ahead. I'll finish it off, man. Yeah. It says, it's all going to change.
Starting point is 00:34:02 AMP is embracing it. So that's what AMP is. Our way of keeping up, question mark: shipping. We add and we remove every day. We're building for where these models are going. If that means Amp will look completely different in three months, so be it. It's like this, it's almost like a rap song. It's like this anthem, this rallying call, in a way. And it goes on to say, if you want long-term support and the same UI in 2032, if you want to spend a maximum of 20 bucks per month, AMP is not for you. If you want to find out where this is all going, come with us. And it says, read the manual. And I think that it's just, like, so cool. It's like, you're going
Starting point is 00:34:42 off into this, this sort of like Burning Man journey in a way. It's like, you know, I don't know where we're gonna be when we come back, but we're going on this journey and we're opening the floodgates. We're letting go of all the restrictions and what happens happens. Is that kind of what's going on? Yeah, I think so. I didn't have rap song in mind. It was more like the 60s, you know, like the Rolex copy or whatever, you know, like the magazine advertisements. But I think this come with us thing builds on this idea that we've had, that is, you know,
Starting point is 00:35:23 I said this to Quinn and Beyang, that I want to spread excitement and curiosity and the joy of discovering these new things. If people want to come along and they are open-eyed and are also excited by this, let's pull them along. Let's explain how this works. Don't act like this is something that nobody else can do and it's magic. You wouldn't even understand, just click run agent and accept what the agent is doing.
Starting point is 00:35:50 Like no, no, this is a tool by professionals for professionals, a power tool, and you can understand how it works, and I wanna show you how this works, and I wanna pull you along, which is also where the blog post comes from, right? This, the subtitle of the blog post is, The Emperor Has No Clothes, because a lot of the copy from other AI software is this AI magic.
Starting point is 00:36:15 It knows everything about you. It's going to replace you, and it's going to replace your job and whatever you're doing. For me, the fascinating bit is that these are incredibly powerful tools. Let's figure out how to wield them. Let's figure out how to make real use of them and just build, you know, come along and let's use this. Like everything is changing. These are incredible tools that will change software in the next years.
Starting point is 00:36:41 Tremendously. Let's come along. And to go back to your question, what does it look like in practice? When Quinn and I started building this, AMP started out as the VS Code extension and it's written in a standard web stack. In VS Code, the sidebar is usually a web view,
Starting point is 00:37:00 so in our case, we use Svelte for this. And what we would do is we would hack on this and we started with like the normal, you know, what everybody knows, like the user message, assistant message, tool calls, like the display of this. And then for example, we would add a new tool call, like say format file.
Starting point is 00:37:20 There's even a video recording of me and Beyang doing this. You add a new tool and say the agent can now format files, and then in the UI it would show up as just, you know, an unstyled JSON message, like tool: format file, arguments. And then you would go, let's see if the agent can build me a nice looking component for this. And then I would take a screenshot of this thing and I would open a new conversation
Starting point is 00:37:45 with the agent and say, can you make this look better? It's this tool call and here's all of the other components that we already have and also check your work by opening this URL. And then what the agent would do is it will go, oh, let me look at these other components. Oh, this is how tool calls are displayed by using these components. Let me look at this. Oh, so it's missing this. Let me add a new component. Let me add a storybook entry. Let me open the storybook in the browser. Oh, here's the screenshot. Now it looks good. And then you're sitting there and you send a message to Quinn and you go, you won't believe
Starting point is 00:38:20 what just happened. Like it used the browser and took screenshots, and then it ran into an error and it figured out how to fix this error and it got the diagnostics. It's just this excitement of seeing when you put it on the right tracks. An image I use is that you can't just scream at the agent and say, fix this issue that I have. What you have to do is you kind of have to set some rails and say, here's a file, here's an example file, here's how you get feedback about your work. You know, here's the command you have to run to get linter output or compiler errors or whatnot, go and do it. And then it goes off and it will run into obstacles and issues and it usually will jump over those hurdles.
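[Editor's note: the "rails" described here, an example file plus a command that produces linter or compiler feedback, which the agent uses to jump over its own hurdles, can be sketched as a small loop. This is a hypothetical illustration, not Amp's actual internals; all names are made up.]

```typescript
// Rough sketch of the feedback loop: the "model" proposes an edit, a
// feedback command (linter, compiler, tests) checks it, and the command's
// output is fed back in until the check passes. All names are hypothetical.

type ToolResult = { ok: boolean; output: string };

// The model is reduced to a function that proposes the next version of the
// code given the latest feedback, or null when it believes it is done.
type Model = (feedback: string) => string | null;

function runAgent(
  model: Model,
  runFeedback: (code: string) => ToolResult, // e.g. shells out to the linter
  maxTurns = 10,
): { code: string; turns: number } {
  let code = "";
  let feedback = "task: see the example file; check your work with the linter";
  for (let turn = 1; turn <= maxTurns; turn++) {
    const edit = model(feedback);
    if (edit === null) return { code, turns: turn - 1 };
    code = edit;
    const result = runFeedback(code);
    if (result.ok) return { code, turns: turn }; // hurdle cleared
    feedback = result.output; // the obstacle becomes the next input
  }
  return { code, turns: maxTurns };
}
```

The point of the sketch is the shape, not the details: no human sits in the loop, because the feedback command, not the user, tells the agent what went wrong on each turn.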
Starting point is 00:39:05 And it then comes back and says, here's a new component. And, you know, there was another thing: at the same time we started sending so many DMs back and forth. I was on a bike ride and I was listening to another podcast and they were also talking about agents, and I'm like, oh yeah, we should do this and this and this. I got so excited, I stopped the bike and I said, Quinn, we should record a podcast
Starting point is 00:39:32 just to share this excitement. So also on ampcode.com, we have like this five, six episodes, I think, podcast where we just talk about what we do. And in that first episode, I described something that I also couldn't shut up about. I was working with AMP on the right, as the assistant, on AMP itself, and I was refactoring some tests. And what I was doing was standard TypeScript tests with this describe and test, test, test,
Starting point is 00:40:03 test, like pretty repetitive stuff. And so I started refactoring these tests. And back then we didn't have amp-tab, which is completion. And I was like, I wish I could have recorded what I just did to five of these eight tests and then say to the agent, you now do the rest, right? So I started to type out a prompt to the agent. I said, I want you to build the feature that records the keystrokes. Like once you hit record, it should record the keystrokes that you make in the editor.
Starting point is 00:40:32 And then when I stop recording, it generates a prompt and sends it to the agent. It's like, here's all of the keystrokes I just did. Go and finish the work I was doing, right? And I sent this off to the agent, and the agent went off and, as they say, one-shotted it, right? It went off and it actually built something. And I was like, surely this isn't going to work. So I booted up the debug build. I start modifying... no, first I hit this start recording command and it shows like a little recording icon in VS Code. I was like, okay, cool.
Starting point is 00:41:00 But now it's going to fall apart. So I started modifying the tests again. I changed two out of eight tests. I hit recording, it says recording stored. Surely now it's gonna break. So then I hit this other command that it added to send, like, the keystrokes to the agent and say to the agent, these are the edits I just made.
Starting point is 00:41:20 And I hit that button, and at that moment I realized that what it did was a pretty naive version. It truly recorded every keystroke, like not as a diff, but truly like, you know, T-E-S-T, new line, return, all of this, and it sent like 60 lines or something, or 160 lines of just keystrokes. And I was like, yeah, what the hell? And I sent it off to the agent,
Starting point is 00:41:44 basically saying, you know what I just did? And then it's 160 lines of keystrokes. And it came back and said, like, I see you're trying to refactor these tests to switch from assert to expect. Let me finish up the rest of the tests. Get out of here. And it went and did it. And yeah, exactly. So then I sent Quinn a message like, dude, we got to record the podcast. Like this is crazy. You know, like, is this something that you would use every two minutes? No. You know, is it like a little feature that I built in 15 minutes? Yes. Is it amazing? 100%.
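[Editor's note: the naive recorder described here, storing every raw keystroke rather than a diff and then wrapping the log in a prompt, might look something like the following. This is a hypothetical sketch, not the actual feature the agent built.]

```typescript
// Hypothetical sketch of the keystroke-recording feature: store every raw
// keystroke (not a diff), then turn the log into a prompt that asks the
// agent to infer the edit and finish the remaining tests.

class KeystrokeRecorder {
  private keys: string[] = [];
  private recording = false;

  start(): void {
    this.keys = [];
    this.recording = true;
  }

  press(key: string): void {
    if (this.recording) this.keys.push(key);
  }

  // Stop recording and build the prompt that gets sent to the agent.
  stop(): string {
    this.recording = false;
    return [
      "Here are the keystrokes I just made while editing:",
      this.keys.join("\n"), // one key per line: "e", "x", "p", "Enter", ...
      "Figure out what I was doing and finish the rest of the tests.",
    ].join("\n\n");
  }
}
```

Sent verbatim, even a log like this, 160 lines of single characters, was apparently enough for the model to recognize an assert-to-expect refactor.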
Starting point is 00:42:19 Like the ability that you can just, like, build a tiny feature by just describing it roughly, and it comes back and it works. And then this mind-blowing thing of sending 160 lines of keystrokes and then you realize, yeah, they are somewhat like us, but they're also not. They can make sense of 160 lines of keystrokes and say, okay, this is what you're trying to do here. And it's just...
Starting point is 00:42:45 Couldn't you make the same sense of it as a human, though? I know this is amazing. I'm not arguing against this. But what are you arguing for by saying they can comprehend the 160 lines of code? Well, as a human, yeah, you could. If I give you 160 lines of one character, you could. For sure. It takes a little longer, maybe. It takes a little longer. That's what I'm saying. But that's the other thing, you know, where you can sometimes just paste some error messages
Starting point is 00:43:11 in that are not formatted. And if you do this with a human, they're like, what is this? And then you realize, oh, it's one file path broken up into four lines and not four separate ones, you know, whatever, stuff like this. And for these models, it's just, you know, not an issue. It's like their bread and butter. It's like, yeah, yeah. Making sense of text.
Starting point is 00:43:32 And then the other thing, right, is you can sometimes send them, like, ROT13-encoded text or something and talk to them in that. Just to troll it, or why? Yeah, but they get it. Like, sometimes they get it and they're like, oh, this is ROT13 encoded, and they reply also encoded back and stuff like that.
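[Editor's note: for anyone curious, ROT13 just rotates every letter 13 places through the alphabet, so encoding twice gets you back to the original, which is why a model that spots the pattern can reply in kind. A quick sketch:]

```typescript
// ROT13 rotates each letter 13 places; applying it twice is the identity,
// which is what makes "reply also encoded back" possible.
function rot13(text: string): string {
  return text.replace(/[a-zA-Z]/g, (ch) => {
    const base = ch <= "Z" ? 65 : 97; // char code of "A" or "a"
    return String.fromCharCode(((ch.charCodeAt(0) - base + 13) % 26) + base);
  });
}
```

For example, `rot13("Hello")` yields `"Uryyb"`, and `rot13(rot13("Hello"))` is `"Hello"` again.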
Starting point is 00:43:51 It's a troll back. They're really like us in some sense, but they're also strange. You're going to encode your question. I'm going to encode my answer. Yeah, exactly. Let's see you do what I just did. How do you deal with this? Yeah.
Starting point is 00:44:13 Well, friends, Retool Agents is here. Yes, Retool has launched Retool Agents. We all know LLMs, they're smart. They can chat, they can reason, they can help us code, they can even write the code for us. But here's the thing, LLMs, they can talk, but so far, they can't act. To actually execute real work in your business, they need tools. And that's exactly what Retool Agents delivers. Instead of building just one more chatbot out there, Retool rethought this. They give LLMs powerful, specific, and customized tools
Starting point is 00:44:41 to automate the repetitive tasks that we're all doing. Imagine this, you have to go into Stripe, you have to hunt down a chargeback. You gather the evidence from your Postgres database, you package it all up and you give it to your accountant. Now imagine an agent doing the same work, the same task in real time, and finding 50 chargebacks in those same five minutes.
Starting point is 00:45:03 This is not science fiction. This is real. This is now. That's Retool Agents working with pre-built integrations in your systems and workflows. Whether you need to build an agent to handle daily project management by listening to standups and updating Jira,
Starting point is 00:45:18 or one that researches sales prospects and generates personalized pitch decks, or even an executive assistant that coordinates calendars across time zones. Retool agents does all this. Here's what blows my mind. Retool customers have already automated over 100 million hours using AI. That's like having a 5,000 person company working for an entire decade.
Starting point is 00:45:41 And they're just getting started. Retool agents are available now. If you're ready to move beyond chat bots and start automating real work, check out Retool Agents today. Learn more at retool.com slash agents. Again, retool.com slash agents. Let me ask you this.
Starting point is 00:46:02 You said earlier, there's this divide in the enterprise. And for lack of a better term, let's just call it believers and unbelievers, right? So you have the bulls and the bears. And of late, you know, on this podcast, we've been just doused in believers. Yourself, Chris McCord was just on the show. He's into it, obviously Steve Yegge's into it.
Starting point is 00:46:26 Chris Anderson, who's got vibes DIY. They're building a vibe coding thing. And like all these stories are in alignment, but there is a lot of people that are super skeptical for various reasons. And I can list off what I think are some reasons that I've heard, but you said with your enterprise customers,
Starting point is 00:46:45 there is this inside enterprises like skeptics and believers. What's the skepticism's argument that you're hearing? Maybe you can steel man it for them and not just dismiss it. But like, what are the skeptics skeptical about inside the enterprise about these tools and their future? I don't want to distinguish between enterprise and whatever. Let's just say in general, I think I had a conversation
Starting point is 00:47:09 last year at a Rust conference in Italy with a senior engineer who has done some amazing stuff in the last 20 years, apparently a crazy good programmer. Back then I was working on Zed, and we were talking about the text editor, and he's like, so you have AI features. And I said, yeah. And I knew by the tone, you know, not a believer. And he's like, can I turn them off? And I was like, yeah, sure you can. And then I said, why? Have you not played around with
Starting point is 00:47:40 it? Just curious. I'm like, have you not played around with it? And he's like, ah, well, I played around with it a year ago, two years ago, and it just gave me back garbage. And I'm like, what did you use? ChatGPT? And he's like, some website. And it stuck with me. It stuck with me so much that I wrote a whole newsletter post about it, because it was this: there was no interest at all.
Starting point is 00:48:03 There was no curiosity at all. There was no curiosity at all. It was just, I tried it once, it doesn't work. So I think there's a lot of, I don't want to say willful ignorance, but I think there's a lot of ignorance where people have kind of tuned out all of this. Just like some of us have tuned out crypto or whatever it is, blockchain in general, you know. And I think some people have just tuned this out and whenever AI pops up, their eyes get blurry and they just ignore what's next.
Starting point is 00:48:34 And I think that's a big thing, that some people just, it's not like they looked at stuff and thought it's not for me, they just don't realize how much has changed in the last few years. And that's one part that I see. And then the other thing is that some people describe it as the bell curve meme, where the junior engineers get a lot out of it because they don't know that much. So the AI takes care of a lot of stuff for them.
Starting point is 00:49:05 They never have to learn how to center a div with CSS, you know, like, nice. And then on the other end of the bell curve, the senior engineers get a lot out of it because they know a lot and they basically know how to review what the AI is doing and they know where the pitfalls are and the trap doors and what tests to write and what not to do and how to architect the thing. So they're like, you know, hey, nice. I don't have to type all of this. Like, I know what goes in this file. I know what goes in that file.
Starting point is 00:49:35 I don't have to do this. And then there's like the middle part of the bell curve, where it's the engineers who are trying to get better at what they're doing and they want to learn all of this, and they are not, I think, comfortable with, you know, the agent-take-the-wheel kind of thing. They're like, I don't know what's going on here. I want to know what's going on. I don't understand this, and they get skeptical. And that's also what a lot of people describe in the enterprise. And then the third problem is that it's something you have to learn. Like, you have to get better at this. The first time you try an agent, you won't have amazing
Starting point is 00:50:18 results, possibly, you know, on a small scale, yes, but some people do crazy stuff, but others will fail. And I think this is a problem that all of the hype and marketing over the last few years hyped it up as, you just say, build me a website and it's gonna look amazing. And the expectations were made so high that now some engineers try this stuff out
Starting point is 00:50:41 and they go, you know, fix this distributed database. Oh, look, it doesn't do it. It doesn't know how to fix a distributed database. And so they bump up against these expectations. And then they're kind of let down when it doesn't work on first try and then they give up. And, you know, Mitchell Hashimoto recently said, I think last week, he's like, when have you ever become more productive with a tool after using it for just a single day? You have to put some effort in, you have to get better at this.
Starting point is 00:51:13 And I think that's a problem. These models are anthropomorphized, which is also what I've been doing in the last hour, right? I'm at fault, right? Talking about intelligence and the agent and whatnot. And there's this mode of thinking where, well, these things are smart and these are like humans and these are close to AGI and whatever else somebody somewhere says. But the reality is that these are large language models based on a transformer architecture. They have a certain way of taking the context that you put in and producing something, and
Starting point is 00:51:55 they cannot do everything. They don't know everything. You have to know about what goes into the context and what shouldn't go into the context to not derail them. So there is a learning curve, but it's not something that somebody tells you on the website, you know, like, hey, you gotta learn this, you know? Because nobody says we have a learning curve.
Starting point is 00:52:15 That's amazing, because the last 20 years of software has said learning curves are bad, you know? Same, as a Vim user, you know? Like, learning curves are... Do you even use Vim anymore? I hear the IDE is dying. I mean, don't you just let AMP take the wheel? Yeah, yeah, yeah.
Starting point is 00:52:30 Like, that's a hot topic. Like I said, this, I, yeah. I know it is. I've been bringing it up nonstop. Do you use Vim or not? Yes or no? Yeah, exactly. Not anymore.
Starting point is 00:52:43 I use Vim mode in VS Code. But honestly, I don't type that much code by hand anymore. Like, it's truly crazy. I mean... Say that again. Say that one more time, clear. Okay. So the journey, in some sense, of the last one and a half years was that I worked at Sourcegraph, I wanted to try something else. I wanted to go as hardcore programmer as I can.
Starting point is 00:53:13 And I went to Zed and we built a text editor in Rust from the ground up with our own, you know, GPU framework and whatnot. Truly some of the best programmers in the world work on that team. And it's truly an amazing product. It's an amazing code base. I feel like I've reached the core of what programming can be. But then over the year, with AI getting better, and then trying out stuff like, back then, Cursor Tab, where I was sitting down and I just went,
Starting point is 00:53:45 we were building the completion for Zed. So I was working on that, the tab completion, the fancy AI completion, and figuring out what the competitors are doing, you know. So I tried out Cursor Tab, and I was sitting there and I would change a switch statement or whatever it was, like some repetitive thing, where back in the day, I would have been so proud to pull out an amazing, impressively good Vim macro, and I would just start typing like a console log or something in one of the switch statement cases, and it would say, oh, you wanna add this down here too? Tab. You wanna add this too? Tab, tab, tab, tab, tab, tab, tab, tab.
Starting point is 00:54:21 And I hit tab 10 times and it had the whole thing. And I sat there thinking, damn, like, this is faster than I will ever be. Like, all of the Vim, you know, I'm going to use Colemak and I'm going to use quick-scope and Vim and blah, blah, blah. I'm like, you now have models that are faster than you at doing this. If you have like a CSV file or something, and you, I don't know, remove the last column, I would have done this, you know, selecting in normal mode, jump to the end, delete, blah, blah, blah,
Starting point is 00:54:54 or a macro and repeat it 990 times and whatnot. Now with these models, you could just remove the first column, second column, and it would go, you wanna remove all of the columns, like the last column? Tab, tab, tab, tab, tab. I was like, that's crazy. That changes a lot of things. And then I worked on the Zed completion and we built this ourselves and I realized I'm
Starting point is 00:55:14 not an ML guy. And Antonio, the guy I worked with, he's maybe the best programmer I ever worked with, but he's also not an ML guy. But we could build something of equal quality to what Cursor Tab was. And I was sitting there thinking, this truly is going to change a lot of stuff. Like, if an open source model, like you can use a Qwen or DeepSeek or whatever, if you can turn this into a completion model that edits code faster than somebody who's
Starting point is 00:55:43 really good at Vim, myself, right? If it's faster than I can be, like, that has to change stuff. That has to change stuff in dev tools. And then after that moment, I had this thought of, like, you know, I don't know how to phrase this in a way that doesn't offend anybody, but... Go ahead. This thought of,
Starting point is 00:56:07 offend away. Am I working on a horse carriage by working on a text editor? Oh, dang. I don't mean it, but it's just this, like... It's a really good text editor though, you know? It's really good.
Starting point is 00:56:21 It's an amazing product, but it's just this... I've worked on the Vim mode with Conrad at Zed, and I was like, all of that stuff, and it's amazing, but then you're like, I want to be efficient, you know, at the end of the day. I'm not somebody who loves programming and being fast at stuff because of macros and key bindings. I like doing stuff fast. I like being efficient. And now suddenly I realized that all of my Vim macro stuff was kind of invalid, because I could just tab, tab, tab in another editor to get rid of these brute force, whatever, chores. And that changed a lot of stuff. That
Starting point is 00:56:59 changed a lot of stuff with how I look at developer tooling. And then basically, you know, just to round this up, I looked at different other companies and talked to different other people, and then I ended up coming back to Sourcegraph, because I talked with Quinn and I told him everything I just told you, and I'm like, dude, everything is changing. He's like, you want to come build the future here?
Starting point is 00:57:22 You know, like, I agree with you. Like, a lot of stuff is changing. So yeah, and now I'm working in VS Code, which I never wanted to do. And I don't like VS Code, like, aesthetically, I don't like it, but I also realized I don't care that much anymore. And then I thought, am I the leper here? Like, what's going on? And then I talked with a bunch of other people, colleagues at Sourcegraph and people I met in San Francisco or at conferences, and they use Amp too, or they use Cursor, Windsurf.
Starting point is 00:57:55 And there were at least five of them that said, I was a hardcore Vim person, but I switched to using VS Code, Cursor, whatever, because I realized it's just a 10x multiplier and the other stuff doesn't matter that much anymore. And if you were to ask ThePrimeagen or TJ or whoever, they would give me sh** for saying this, of course, but I had this feeling, and a lot of other people have this feeling too, that the age of fast mechanical movement in an editor is kind of over when you have these models that you know are much faster than you. And when you project out the future, that these things are getting faster and cheaper and
Starting point is 00:58:42 that maybe surely you will have this running on your laptop, right? And then it's like, instead of having your Vim key bindings and Colemak and a split keyboard that you can put up vertically or whatever it is, you just talk to the computer, maybe, I don't know, but sure. Yeah. It goes back to your bell curve. Well, I was going to say, you said horse carriage.
Starting point is 00:59:04 I kind of want to come back there not to like slap the offense on there, but to really think about that though. Like you have to think about, I'm trying to do this in the moment. I'm trying to like listen as a podcaster would and I'm trying to think about where we can go and I'm trying to like, you know, think about this idea too. But when I'm, what you had me thinking about really is like, what do I do today? You know, automatically go to the easy choice because that has become the 10 X multiplier in my life.
Starting point is 00:59:35 And I think the easiest response would be vehicular movement. How do I get from A to B? You know, for example, I took my kid, he's in golf now, and he went to a camp today. I didn't walk him there. You know, we didn't get onto our horse carriage with horses and go there, although distant family members may have done it that way. No, we got into our new version of the carriage, which is called an F-250. It's still, it's a diesel truck, but you can't modernize that quickly and that fast. But we got into a vehicle and we went back and forth
Starting point is 01:00:10 from here to there. I didn't think about or cry about the fact that, you know, my grandfather, my great grandfather may have, you know, traveled via horse at one point. And they loved it because of the nature of the care of the horse and all the things that go into the pasture and all the pure things that are beautiful, right, doesn't take away the beauty, but it takes away,
Starting point is 01:00:31 it changes the utility of day to day life. The utility of me maintaining a pasture with space for my horse so I can go from A to B is over. That way is over, right? There's no one doing that. I now take a vehicle and that's just how it is. And so that's what you had me thinking, like, this most obvious thing of, like, I would never go back to the way,
Starting point is 01:00:55 or maybe I would if I could retire, if I had FU money. And it's like, yeah, now I can afford the pasture and I can take horses everywhere because time doesn't matter, you know? Maybe that's a different kind of thing. But yeah, the obvious answer is a vehicle that moves faster, because that's the way of the world.
Starting point is 01:01:11 I think that's a good point. There's a generational divide too. And I forgot to mention this, but also last year I gave a talk in Munich at something from the university. Basically there were a lot of people much younger than me hanging out at this meetup. And we started talking about AI, and you know, back then, I'm sure you can confirm this,
Starting point is 01:01:38 a lot of hand wringing about, is it pure programming? Is this still programming when you use Cursor and these other tools and Cody and Windsurf? Is this real programming? Do you unlearn stuff? Do you, I don't know, do you get dumber by doing that? Do you really learn it the right way? All of this, and is it not artisanal code if it's not written by hand? And I talked to these young people at this meetup and I realized they don't care. Like, that's over for them.
Starting point is 01:02:16 They never think, my code is not real code because I didn't use Emacs to write it. They just use the tools available, and they would tell me, oh yeah, I organized my docs in this way so I can use, like, Cursor to pull this in whenever it needs this. And I also have, like, this library of these rules, and I organize my files like this because then it's easier to do this. And they didn't even spend two seconds thinking about, is this the right way to do this? It's just the way they now program. They grew up with AI, and if they cannot get this out of the editor, they ask
Starting point is 01:02:50 ChatGPT or Claude or whatever and they ask stuff there. And it just opened my mind, you know, made me realize that maybe a lot of this hand-wringing is just like old men yelling at clouds, you know? Like, is this still true programming? Turns out younger people don't care, you know? And it felt like somebody in 1965, like, oh, the electric guitar, is this real music, you know, if it's not a violin or something. And it turns out the young people don't care.
Starting point is 01:03:27 They don't wanna hear it. They moved on. And that's how I felt a lot about a lot of stuff there. And yeah, like you said, like it's a generational thing, right, and if you're the young generation, I mean, go to any 20 year old at university right now, studying computer science or programming in their spare time. I will guarantee you they use AI and they don't bat an eye about it.
Starting point is 01:03:53 Like they don't worry about is this true programming. Okay. Here's something visual for, you know, those tuning in via video who can see me. What am I doing here? What is this? Calling somebody. Taking a phone call. What is this? Or maybe this, since it's like a real object. This is taking a phone call to folks these days.
Starting point is 01:04:19 Right. This is not. Right? And so there's a generational divide. It's like, that is not the way the phone operates anymore. The phone has moved to this other thing that is not connected to a line to the house. It is now free range and roaming to wherever you go. And it's personalized.
Starting point is 01:04:38 The phone has changed even, you know, so you can't, as much as we want to stay in the past, the future comes no matter what. Time is linear, we cannot stop time. We literally only have this present moment. The past is gone, we can't change it; all we can do is this moment,
Starting point is 01:04:56 and the future is coming no matter what we do. That's how time works, by the way. Is this analogy like, well, okay, so what I mean by doing that gesture of the phone and not the phone is: what if writing code or being a software developer is not in Vim anymore? What if it's not even in VS Code really that much longer? What if it's in a version of what these tools are evolving into? And that's. Yeah. You know what I mean?
Starting point is 01:05:31 Yeah, 100 percent. It hurts if you have a Vim tattoo though, you know? Yeah. I mean. I mean, I kid, but there is an identification factor; what is actually baggage to change is identification. Just like we identify perhaps with the music of our youth or our formative years and not the current music.
Starting point is 01:05:54 There's an aspect of that because so much of what we do with our computers is who we are and what we care about. And we can express that through our tool choice, our editor choice. And that's why there are flame wars because like who really cares? Well we do. Why? Because we're the kind of people who care.
Starting point is 01:06:10 And so of course we're going to care when you call Vim or Zed a horse carriage for instance. Like yeah you didn't want to offend anybody but it might be the case. Now there's also rational skepticism that I think comes from being around enough to see a lot of fads come and go. And I think if it weren't for my particular perspective through this podcast, I would have written these things off pretty early myself because I, and you could probably go back if you had copious free time and like track my change of mind throughout episodes because the results I was getting early on were really bad.
Starting point is 01:06:50 And I was like, this is not useful. This is a distraction. I'll just keep coding. And it came through like prolonged exposure and progress to actually get to the point where I think it's just recently. I'm like, yeah, this is amazing. You know, the mind blow is happening. But I could have easily just been like heads down, going back to my work, because I've seen a lot of things that are quote unquote promising or
Starting point is 01:07:15 game changing and they weren't, you know. And so that's part of it. I'm just going back to some of your thoughts around resistance, Thorsten, and people who are not on board yet. And then the other one is the bell curve, and you know, I think they call it the midwit meme, where like the one in the middle thinks they're like the smartest and has the worst take, right? It's like the junior gets it for a different reason. The senior gets it and the middle ones don't get it. And I think in this particular case, it's because of the skills.
Starting point is 01:07:47 Like Vim is a skill. And you put a lot of effort into learning that. Maybe it was easy for you, but for most people it's not easy. And so there's some sunk cost fallacy there. Like, well, I've worked really hard at these skills. And if you look at that bell curve, like, well, the junior doesn't really have any skills,
Starting point is 01:08:02 so they don't care. They're like, cool, this helps me do stuff I couldn't do. And the senior, I think, if they are continually curious and self-aware, they realize that the skills are a means to an end and really what they're about is the end, and they can also get to the end better, faster, stronger with this tool versus the ones that, I mean, I know them, I've spent a lot of time learning them
Starting point is 01:08:28 and yet I don't identify myself with them and so I can just set them aside when I think that it's no longer the best thing for me to do the thing. But when you're at the peak of that, the peak of that bell curve, you've spent a lot of time, a lot of money, a lot of effort maximizing your engineering skills.
Starting point is 01:08:47 And so it's the hardest for you to say these skills aren't actually all that useful anymore, all that valuable. And I think that for a lot of us just hurts. I think there's a lot of identity wrapped up in this. Like you said, like people identify with, I'm the guy who never has to look up a method in Rust. I'm the guy who knows all of the syntax. I'm the guy who's really good at Vim and to use that example that you brought up with the senior engineer, I think, I mean, I guess you can affirm this, as a senior engineer, there's like these moments
Starting point is 01:09:26 where you realize that the code does not matter that much. Like what matters is also the marketing and the business and the team and how you ship stuff, how often, you know. Like you're not this line. There's so much more to it. Like it matters, but there's so many other things. Yes. You realize that code might be a liability even and whatnot.
Starting point is 01:09:44 And that same curve, I think you can go through over time when it comes to tools, where you're like, well, I mean, I had this experience early on. I was pretty good and fast and then super proud of it. And I thought everybody who is a really good programmer has to use Vim and, you know, whatever. And then I had a senior colleague and he used Sublime, and I don't think the guy has ever configured any keyboard shortcut ever in his life. And he was still incredibly fast and he did a lot of amazing stuff and made a lot of smart decisions and he was a senior engineer for a reason.
Starting point is 01:10:20 And I realized maybe that's not the differentiator. And I think there's a lot of this going on where right now people are running around with AI as the sledgehammer to hit people over the head and say, you know, this is over. Like what you've put a lot of effort in is not worth a lot anymore. And to some extent that's true. To some extent that's harsh and you have to empathize with people. But I also went through it.
Starting point is 01:10:44 I struggled with it for a long, long time. I get it. Yeah. I think what we have to recognize in order to accept that harsh reality, but also overcome it and really leverage it, is that it's not that our skills are useless, it's that they've lost value,
Starting point is 01:11:04 but because of where we've been as engineers, we are well-positioned to leverage the new tools better than other people, and to adapt quicker, and understand when things do go wrong, what went wrong, and help the agent better than a neophyte could help the agent do its thing, even though it's pretty good. You just tell it what you want and it's getting better, you know. But I feel like skilled people can adopt new tools
Starting point is 01:11:36 as long as they don't have the baggage that we're talking about, probably more effectively than people who don't know how to use tools at all. Yeah, there was this amazing Kent Beck quote that popped up, I think yesterday or a couple of days ago, in the Pragmatic Engineer newsletter. And I think Kent Beck said this even two years ago about ChatGPT, and I'm getting the numbers wrong, but I think he was saying that, oh, I just realized that, you know,
Starting point is 01:12:03 90% of what I can do as a programmer is now worthless and the other 10% has just gone up in value by 100x. Exactly. Yeah, he nails it. Yeah. And it's this, like, a lot of the mechanical stuff, like which framework do you configure how and what goes into which config file and how do you type this and how do you do that? And how do you construct an ffmpeg command?
Starting point is 01:12:31 What command line arguments, all of that stuff, right? Right. Pretty worthless right now, but what to build and when and how to, say, organize it, how to architect it, what dependencies to pull in, what pitfalls to avoid, how to build this for future use, all of those, let's say, meta engineering skills, right? It's about trade-offs, it's about making decisions
Starting point is 01:13:00 of how to build something under a set of constraints. That's now super valuable. Like that's the multiplier now, not how fast you can type. 100%. There's an analog to this. I just told this to my kids this morning, because of course, as parents and teachers, we're trying to figure out how to approach these new things as well.
Starting point is 01:13:20 There's a lot of upheaval in school systems right now. I mean, it's a mess out there. There's a lot of cheating that's just way too good for the cheating detection tools to keep up with. And what I said to my kids, which I think applies specifically to our work as well, is there's a big difference. It's a small delineation, but there's a huge difference between using AI to help you think
Starting point is 01:13:47 and using AI to think for you. And if you're using it to think for you, then we're headed towards idiocracy and you're not gonna make it. You know, like you're gonna be one of those. But if you're using it to help you think, now you're basically just a superhuman. And I think when it comes to coding, it's very similar.
Starting point is 01:14:04 Like we do have to be engaged and be making those decisions and judging the results and, like, doing all the things that are unique to us and our contexts and our business goals and, like, the things that we know, because the coding agent just knows what you tell it to do and it's gonna do its best to get it done and probably do it better than you can do it. But it can't decide what to do.
Starting point is 01:14:26 Not yet, we're not there yet. And so, use these things that help us build way better than we could before, versus just building for us and just being along for the ride. Even though it does feel like a ride along the way, which is kind of why it's fun, right? Like you're like, wow, it's just happening.
Starting point is 01:14:43 Yeah, I mean, that window has kind of passed. And I was just arguing back, really, just to this idea, maybe to the Vim folks who have the tattoo and all that, the Vim folks: it's like, you know, you can still SSH into a machine today. It's not common to do it. Usually you use a CLI to do it. And at some point you'll have an agent use a CLI to do it.
Starting point is 01:15:08 Or maybe today you should be doing that. But SSH still exists. You still have a username and login and you can still control how you access a Linux machine. It's just not common. Like it doesn't mean that you can't use Vim anymore. It doesn't mean that those skills go by the wayside either, or even nurturing and curating your Vim file.
Starting point is 01:15:34 Those are still truths. They just live in a different world now, where that used to be a productivity tool for the ultra 10x programmer, and now that version of a programmer is sort of flatlined in a way, because the agent can go faster than it, right? Or the person controlling, babysitting, as Steve said, babysitting the agents. And Thorsten, you didn't describe it as that; you didn't seem like it was tedious toil. Maybe Steve is in a different realm than you are, but I think he put it like that. Like it doesn't mean that Vim doesn't exist
Starting point is 01:16:09 or that SSH doesn't exist. You still SSH into a machine. It's just Kubernetes orchestrates your sea of machines versus you individually doing that, SSH-ing into each one of them and provisioning them. It's just like not how you do it now. You know, it's just not the way. So SSH still exists. Vim still exists. It's just used differently.
Starting point is 01:16:28 I think what you're saying also touches on something else, which is that in these discussions, a lot of stuff gets thrown into one bucket that is programming. Will AI be able to replace programmers? And I think people need to understand there's a thousand different types of programmers out there. There's like a programmer who works at Big Tech on distributed systems. There's a programmer who works at a hardware company on embedded systems. There's a programmer like me who works on dev tools, and there's programmers with web stuff and people that work in agencies. I live in a small town in Germany. If I would try and go meet the hundred programmers closest to me, most of them work in companies that are not software companies and they modify
Starting point is 01:17:22 old Java programs and whatnot. And some of them do WordPress websites. And I think when we say it's going to change a lot of programming and then people push back with, oh, it cannot modify the storage layer of Postgres, it's like, that's not what I mean. What I mean is that every day, ten thousand times, somebody picks up a phone to call somebody else and say, can you change this on our WordPress website?
Starting point is 01:17:48 You know, and that person who makes that change, maybe it's called a programmer, but it takes some skills to do this. And I'm thinking that a lot of this will change in the future. Maybe not in the next four, six, eight, whatever months, but a lot of the stuff on the fringes will change heavily. And to come back to your point of, like, SSH into a machine: you know, when the cloud got big, a lot of people were saying, the cloud is just another computer, you know, it's not different. And we will always have sysadmins, you know, sysadmins that administer those machines. And yes,
Starting point is 01:18:28 we do have sysadmins still, right? 2025, we still have sysadmins. Yeah, they're just like graduated. Exactly. And if you look at the number of job postings that hire for sysadmins, I'm pretty sure that's changed in the last 15 years. And I think some of those changes will happen to programming. I don't think you will find the same amount of, you know, in Germany, call it web designer or front end developer, like people who build websites for a living for like companies and whatnot. I think a lot of that might change in the next couple of years. But if you work on a distributed storage layer at Google, an agent is not going to take your job in the next two years.
Starting point is 01:19:07 You know, probably not. Yeah. If you're working on, like, really edge, fringe R&D, those are areas that are more protected. I would say if you're in the 80% realm, where 80% of the development is done by a common, hireable, typical developer these days, that job is probably more in jeopardy of either getting compressed or agentic, and you become a babysitter, and you have taste and curation and humanistic tendencies, which is like care and humanity, you know, the things that thus far machines can probably reason about, but not really feel the way we feel them.
Starting point is 01:19:48 You know, those are still traits and qualities that remain. You know, it's such a wild thing to think about how this is changing though. Like even in the moment, like this podcast is for software developers. Okay. Just in case you didn't know that, listener, and you're a human, this is not a podcast for robots yet. Jinx. So that's a profound thing to even think about too. I've been thinking too about the kind of code you're writing.
Starting point is 01:20:27 Have you been amping with AMP? Can you kind of like give us maybe a more clear picture of the behind the scenes that I asked you for that and you kind of alluded to it and you basically described what you did versus literally how it looks, what it looks like to sort of like write this level of code. It seems like I would describe your job today, maybe not the R&D version of AMP and where you're taking that platform, but your job as a trained professional software developer, you're not writing code these days,
Starting point is 01:21:03 you're trying to generate as much code as possible and solve as many big problems as possible. Is that pretty accurate to what you're doing? Well, I guess my title is still software engineer, right? And I do, I wrote this other blog post about how I use AMP. That's also on ampcode.com. It kind of goes into this. And I think the bigger picture idea is that
Starting point is 01:21:26 I call it paint by numbers programming. So that means my job as a senior engineer is to think of how will I implement this? And then what I have constantly running in my head in the last few months is like, will the agent be able to write this code for me? Like, can it do this? And if the problem is too large or there's a lot of implicit knowledge in my head that I couldn't write down or it's too cumbersome to write down, I go in and
Starting point is 01:21:57 paint by numbers programming. I put in the lines. Like I say, I want this file and this file and this new service, and it should have these methods and it should do this and these arguments, please write this code for me. And on a practical level, I think 50% of our code, or more, say 60, 70% of the code in our code base, which is standard TypeScript, Svelte, SvelteKit, you know, a standard looking code base.
Starting point is 01:22:25 A lot of that is generated, and a lot of the, say, load-bearing parts, the stakes-in-the-ground parts, where you say, like, this is the architecture, these are the central pieces, that has been kind of written by hand or revised by hand. But I'm going to guess, I would say the test suite, 90% of that is generated, right? Like, oh, cover all of these cases. Then we have a storybook that is pretty long by now, which is just a website
Starting point is 01:22:58 that shows all of our UI components, right? Like the Svelte components, and it shows them in all of those different configurations. Show the error state, what's the active state, is it active, deactivated, like all of that stuff, so you have one page and you can see one component in all of the different states. And imagine an activity indicator,
Starting point is 01:23:18 it can be blue, red, green, or idle, or whatever it is, and it has a tooltip. So when you develop this, you want to see what it looks like in all of those different states. What I previously would have done is create the storybook page, create one version of this, create some mock data, do some Vim stuff, duplicate the mock data in, like, different states and different configurations, and then, you know, render it. Or, put in other terms:
Starting point is 01:23:47 You have a test suite and you have some mock data like five different users. One user is deactivated, one user is activated, one user is an admin, one user is a group admin or whatever it is. And previously you would have used your editor to duplicate that information multiple times or you write other helper scripts to remove the code duplication. But what I'm doing now with the agent, when I want to do like tests or storybook or whatever, I'm like, here's the component, here's where I want to render it, or here's the test that I want to write.
Starting point is 01:24:18 Go and type out this code for me. Like, most of it is not super smart. It's not coming up with amazingly new algorithms or whatever; it's just the chores, like, this just typing, you know? And one other thing people kept saying is, I didn't realize how much dumb typing I did, you know, while programming. You're like, you know, we've all heard this, oh, thinking is the bottleneck, you know, like typing speed doesn't matter, but then you're doing it.
Starting point is 01:24:49 And it's like, okay, yeah, fix the import statement. No, add that missing import. No, no, no, align this, auto-complete this, blah, blah, blah. There's a lot of typing still involved. And, you know, that's what I try to get rid of. I don't try to get rid of, you know, the thinking part, like you put it. It's more that I know what the structure is. Please write out the rest for me.
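The mock-data chore described above, one fixture per state duplicated across configurations, is easy to picture as a table. A minimal sketch in Go; the user type, its fields, and its states are invented for illustration, not taken from AMP's actual code base:

```go
package main

import "fmt"

// User is a hypothetical fixture type; the fields and role names are
// assumptions for illustration only.
type User struct {
	Name   string
	Active bool
	Role   string
}

// fixtures enumerates one user per combination of states. This is the
// kind of mechanical duplication an agent can type out from a one-line
// prompt ("give me a user in every combination of active and role"),
// instead of hand-copying blocks in an editor.
func fixtures() []User {
	var users []User
	i := 0
	for _, active := range []bool{true, false} {
		for _, role := range []string{"member", "admin", "group-admin"} {
			users = append(users, User{
				Name:   fmt.Sprintf("user-%d", i),
				Active: active,
				Role:   role,
			})
			i++
		}
	}
	return users
}

func main() {
	for _, u := range fixtures() {
		fmt.Printf("%s active=%v role=%s\n", u.Name, u.Active, u.Role)
	}
}
```

Whether the table is hand-enumerated by an agent or generated by a loop like this is a matter of taste; the point is that nobody duplicates the blocks by hand anymore.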
Starting point is 01:25:12 I'll give you a concrete example from two hours ago. I have two components in the UI and they should do the same thing. Two buttons, you know, one of them has two buttons and the other one has one of the two buttons. And I'm like, why are they inconsistent? Like they both should have these, let's call them sign in and sign out buttons, right? One only had sign in and the other had sign in and sign out. Dumb example. Doesn't make sense. Should be the same buttons. You get the point. Two buttons. And I was like, okay, if I do this, then I have to duplicate the code from here, update the import statements, adjust this,
Starting point is 01:25:49 make sure that this is flexbox-aligned and it renders correctly and adjust the height of this, so this button, blah. So I say, agent, this component has one button, this component has two buttons. The first one should also have two buttons. Please make it happen. And it looks at both components. It figures it out.
Starting point is 01:26:05 It's not a hard task; it copies it over and, you know, 20 seconds later it's done. And I didn't have to type this out. And that's obviously one of the smallest examples. And what I did earlier today was I built, like, I wanted to have some, I don't know, let's call it a testing script, where I have a bunch of data and I wanna go through the data, and for each piece of data, I wanna run it, send it against the API
Starting point is 01:26:37 and see what comes back, right? And because I noticed that my feedback loop is: start up the dev build, try this out manually, hit the button, do this. I'm like, I could build a tool where I can do this 50 times in a row and see all of the results on the same page. And half a year ago, I would have never attempted to do this, because I don't want to build another thing and type out 300 lines of code, and I would need to figure out how to do a three-pane layout. But then I was like, I can generate this, you know? If I say, here's the data in this folder, here's the API, give me an ability to put an API key in, and then render me a website with the three-pane
Starting point is 01:27:21 layout, where on the left side you list the data, when you click on one you show the data, and then you have a button to send the request, and then you see the results in the third pane. It goes and does it. Like, there's no issue. And I don't care about how it looks, like, is it styled or not? It's just, yeah, it's useful. It's like a little, what do you call this? When you're woodworking, you build, like, tools to get a tool for the tool, right? Yeah.
Starting point is 01:27:50 Is it a shim? I don't know. But it's basically, to give you a really concrete example, another one from a few months ago. This was early days of AMP. And yes, for everybody listening, this was not in production. This was early, early days. But just... Like two months ago.
Starting point is 01:28:11 Yeah, so... This is two months ago. So, look, we had the agent fail sometimes to edit a file, right? So users, early alpha testers, okay, they would say it sometimes fails. And then I would go, that doesn't help me. I need the data. Like, what was the input?
Starting point is 01:28:32 What was the actual file on this guy? What's the thing that happened? And in order to figure out what went wrong, I guess in the past what I would have done was figure out some logging, like some error reporting, use Sentry or logging or something, and then put the data in some form that makes it readable for me to figure out what the problem was. But then I thought, I now have at hand a code-spewing machine, a machine that can spit out code really fast. And if that code is a standalone project
Starting point is 01:29:06 under a thousand lines, it will 99% of the time get it. So what I did was I put in the code, don't do this at home, but I put in the code something like, if you are on Thorsten's machine and you run into this error, put the raw data dump, like a JSON that was this big, for everybody listening, I made a huge gesture. Like, let's say 600, 700, whatever lines big. Yes.
Starting point is 01:29:31 Take the raw data, put it in this folder on Thorsten's computer. And I just ran this for two days and I collected a thousand files. And then I said, AMP, let's build something. Here's a folder full of data. What I want you to build is a data viewer. I want you to build a little web app in Go, and I've never seen the insides of this code. Build me a Go web app that lists all of these files.
Starting point is 01:29:54 It takes out these two fields, it syntax highlights them, and then it shows me a diff between these two fields. And then give me keyboard control so I can go through the data. And it did this in 45 seconds. I open up the website, I go click, click, click, go through, and I go through like 50 examples. And just by being able to look at the data, I spotted a bug.
Starting point is 01:30:16 I realized, oh, it's a white space issue. And that's only because I had syntax highlighting and a diff, and I could easily go through the data. And I never would have built this on my own, because syntax highlighting is a pain in the butt, like the diff in JavaScript, the three-pane layout, I would have given up. But the barrier to entry with this, you know, the agents or the AI in general, is so low that you can build stuff that you never would have attempted before.
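For a sense of scale, the core of a throwaway viewer like that is small. Here's a sketch in Go of just the diff part, the piece that made the whitespace bug visible; the JSON field names ("expected"/"actual") are assumptions for illustration, not the real AMP data format, and the web UI, keyboard navigation, and syntax highlighting are omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// dump mirrors the kind of raw JSON dump the agent collected; the field
// names here are invented for this sketch.
type dump struct {
	Expected string `json:"expected"`
	Actual   string `json:"actual"`
}

// diffLines returns the differing lines, quoted with %q so whitespace
// differences (tabs vs. spaces) are visible at a glance.
func diffLines(a, b string) []string {
	al := strings.Split(a, "\n")
	bl := strings.Split(b, "\n")
	n := len(al)
	if len(bl) > n {
		n = len(bl)
	}
	var out []string
	for i := 0; i < n; i++ {
		var x, y string
		if i < len(al) {
			x = al[i]
		}
		if i < len(bl) {
			y = bl[i]
		}
		if x != y {
			out = append(out, fmt.Sprintf("- %q", x), fmt.Sprintf("+ %q", y))
		}
	}
	return out
}

func main() {
	// One inlined dump instead of a folder of files, to keep the sketch
	// self-contained.
	raw := []byte(`{"expected": "if x {\n\treturn\n}", "actual": "if x {\n    return\n}"}`)
	var d dump
	if err := json.Unmarshal(raw, &d); err != nil {
		panic(err)
	}
	for _, line := range diffLines(d.Expected, d.Actual) {
		fmt.Println(line)
	}
}
```

Quoting each line is the trick: a tab renders as `\t` instead of invisible whitespace, which is exactly the class of bug plain logs would have hidden.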
Starting point is 01:30:43 Right. And to come back to what I would originally have tried, like logging and whatnot: imagine you get logs of, like, input this, output this, and it's just a line of logs. Would you have spotted a white space issue in those logs, you know, like, oh, it used two spaces instead of a tab or whatnot? I don't think I would have. But just being able to say, with the push of a button I can generate 500 lines of code,
Starting point is 01:31:16 it changed how I approached this problem from an engineering perspective. Right. And I think a lot of people are now realizing this, where they say, oh, somebody tweeted, they said, Geoffrey Litt, I think. He's like, oh, I kind of want to figure out, what was it? How many words are in each section
Starting point is 01:31:38 of this markdown document or something? What would you have done in the past? Would you have sat down and written a tool to do this by hand? Probably not. Like, it would have been an idea that you brush off immediately. And he's like, Claude built me, in like one file, whatever, a script to do this. And it did, just because it's affordable now. So now, you know, come back to real software engineering: how many tests and debug tools and test suites and, like, introspection tools or analysis tools have we not written because it would have been too much effort? And that's now
Starting point is 01:32:16 affordable, and now we're starting to realize this. And the question is, when will we really leverage this, you know, when will we make use of this? And to go one even further, you know, back, I've worked at Sourcegraph since 2019, with a one year break. And a lot of large scale customers, they say, can you make this work for our code base? Can you make the tool that you have work for our large code base? What we're seeing now is that people change the code base to leverage AI more.
Starting point is 01:32:52 They're like, oh, these files are too long for an agent. It blows up the context window. You know what? Let's split it up into five files. And I'm telling you, two years ago, nobody would have split up their files for any tool. They would have said, this is our file. This is, like, our 20,000 lines. You're not going to touch this. But now the levers change and the amount of leverage you get out of these tools changes. And now suddenly the code base will adapt. That's my bet. The code base will adapt to these tools. And the really interesting bit for me is how will our engineering practices change?
Starting point is 01:33:26 What code will we write by hand? What code will we generate? Thinking even further, will there be code that we won't check in, but instead we just check in the prompt or whatever it is and just generate it on the fly? Or will all code still be checked in? You know? Yeah.
Starting point is 01:33:44 This actually opens up a whole new line of thinking for me, which I haven't thought before. How does this impact open source? Because I was thinking through your situation, I was like, well, in the past, that one-off tool to help me do something else, right? Like, it's a side quest, basically. Like, I either would have forgotten about
Starting point is 01:34:05 it, like you said, like, nah, too much work, it'd be nice to have but I don't need it. Or I'd say, screw it, I'm gonna write it, I'm gonna open source it, and then other people can use it, and now it's worth it for me, right? Or I'd say, well, let's go see if someone else has done it and see if I can just use somebody else's. Maybe not in that order; I probably would go look first and then decide to build it or not. But in a world where we can just ad hoc generate one-off tools and check them into the code base or not, keep the prompt or throw the prompt away, does the amount of open source diminish? Does my use of open source not matter as much, because I can just generate anything I need?
Starting point is 01:34:42 Yeah. Have you thought through this? Cause I haven't even thought about the impact on open source. Quinn and I talked about this on our podcast: let's be honest, the GitHub contribution graph is not worth as much as it was 10 years ago, five years ago, two years ago. And it had a sharp drop, I think, in the last whatever, year or so. Yeah. And you also know Go, and you know that, say, on one end of the spectrum there's the JavaScript community
Starting point is 01:35:15 where it's like, here's one function, I published this as a package. And on the other end, there's like the Go community, where a little copying is not bad. Like, I don't have to pull in this dependency. So now I'm thinking, with AI, why would I pull in like a tiny, you know, package, or why would I write it by hand? Why would I even go somewhere and look up a function that formats a timestamp in whatever format I want?
Starting point is 01:35:46 Like, I can literally ask the LLM: here's the timestamp, here's all of the five formats. If you don't have all of the five formats, here's the command, write me a program that generates all of the possible formats so you see all of the possible formats, and then write me a function to parse them. Like, you know, even the act of code as a way to reduce duplication
Starting point is 01:36:09 is now up for grabs. It's kind of, it's kind of changing, because- It's a question at least. Dumb example, and somebody listening will say Thorsten is an idiot, but just to illustrate the point: say you have a function that validates something, and you want to make sure that it validates these 50 cases of whatever, or say 150. It never was best practice to type out these 150 cases.
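The timestamp example from a moment ago is easy to make concrete. Here's a minimal sketch of the kind of parser you might have a model write for you instead of pulling in a package; the five formats below are invented placeholders for whatever a real codebase actually sees:

```python
from datetime import datetime

# Hypothetical list of the five formats this codebase actually encounters,
# the sort of exhaustive enumeration an LLM can generate on demand.
FORMATS = [
    "%Y-%m-%d",
    "%Y-%m-%dT%H:%M:%S",
    "%d.%m.%Y",
    "%b %d, %Y",
    "%Y/%m/%d %H:%M",
]

def parse_timestamp(value: str) -> datetime:
    """Try each known format in turn; raise if none of them match."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {value!r}")
```

Fifteen lines, no dependency, and covering a sixth format is one more prompt away.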
Starting point is 01:36:32 You would write a regex or something, right? And you're like, ah, let's not, you know, regex, blah, blah, blah, because we cannot maintain this list. Now with LLMs, you can generate 150 examples. Like, you can literally, at code-write time, generate all of the variations. You don't have to let the CPU go through all of the variations. And just stuff like this. Where even frameworks, the goal of a web framework is to help you reduce the amount of code you have to write.
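And the validation example could take a shape like this, a sketch where the case table is spelled out in full rather than compressed into a regex; the accepted spellings here are made up for illustration:

```python
# Instead of a regex nobody wants to maintain, spell out every accepted
# variant outright: tedious to type by hand, trivial to have a model
# generate once at code-write time.
ACCEPTED_TRUE = frozenset(
    variant
    for word in ("y", "yes", "true", "on", "1", "enabled")
    for variant in (word, word.upper(), word.capitalize())
)

def is_truthy(value: str) -> bool:
    """Validate by membership in the pre-generated table of spellings."""
    return value.strip() in ACCEPTED_TRUE
```

The list is boring to write but easy to read, easy to diff, and easy to regenerate when a new case shows up.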
Starting point is 01:37:03 But if the amount of code that you can generate is suddenly large and it's fast and it's getting cheaper, do I need like fancy templating helpers when I can just say, change all of the user avatar components and make them all green? You know, like something like this. And I don't know. The general principle becomes do repeat yourself. Maybe, I don't know. But it's this.
Starting point is 01:37:28 It's certainly the case that a lot of code, and a lot of the way we write code, is based on the assumption that writing code takes time, is hard, and we want to avoid it as much as possible. You know, but now will that change? Right. Because I can generate your website with one, you know, push of the button now, in like 50 different variations. And then coming back to the original question of open source: well, does it make sense to store pre-generated pieces of code and build libraries that are available and configurable for 15 other use cases when you could just say, well, here's like one version of this, and then
Starting point is 01:38:18 maybe you feed it into an LLM and you generate your own versions of this. This is a bit sci-fi, and obviously it's not performant, blah, blah, blah, but it's just, you know, this stuff is changing. And one thing, to go one level higher even, and it's also why I think the business I'm in is so interesting: Erik Meijer, ex-director of engineering at Facebook, he's a Haskell guy, like a really smart functional programmer, has been programming for 40 years. And he had a presentation two years ago, I think, pretty early, where he's like,
Starting point is 01:38:57 why search for code when you can have an LLM generate it for you? And what he means is that when you go and you search Stack Overflow, and you search your own code base, or you search the code bases of your company, what you want to answer is: I want to, you know, build a user avatar component or something. How do we usually do this? Because you don't know how, or you don't want to do it, right? But if you have a model that knows how and can do it in less than a second, why go and find those examples? You know, why not generate it right here? It's the same as
Starting point is 01:39:34 you're trying on new clothes and it's like, oh, I'm going to try on the red shirt and the blue shirt and whatnot. But instead you could have a photo taken of you and then say, give me 15 variations where I wear the same shirt in 15 different colors. You know, like, stuff changes when you don't have to go look for it but can generate it on the fly. Well, what you're saying is that the efficiencies, or the perceived efficiencies, we've had in the past have been based upon human efficiency. Exactly. Right? Human. Like, we thought it was efficient
Starting point is 01:40:06 to write shared code so that it was more efficient to stand up a new project, so that it was more efficient for teams to unify around codified ways, standards, et cetera. Right? Those are all the efficiencies, but those efficiencies are rendered mute, or moot, sorry, as in they're no longer,
Starting point is 01:40:27 they're no longer important. So those efficiencies to an LLM or the thing that begins to generate this code is like, well, you know what? I don't need to worry about this one unified away for 25 applications to connect. Cause I can just write it on the fly. Right. For the bespoke need it has, very specific in that efficiency versus the efficiency of
Starting point is 01:40:49 some other ephemeral efficiency that doesn't really matter anymore. The analogy I used in the past was: at the end of a book, you have an index with different words that you can look up quickly. And you have that index because it takes a long time to find that specific thing in the book. If you're able to read a thousand pages per second, do you need an index still? The thing itself is optimized for how it's consumed right now. But if the way you consume it changes? Suddenly, why would I need an index at the end of the book if I can just have photographic memory and can read a thousand pages per second?
Starting point is 01:41:29 And I think a lot of code is still like this. A lot of... It has to be, right? It's based on how we write and consume code and how hard it is to write code. But when the capabilities change, the tools, or what we produce with those tools, will also change. The kids are going to totally get this. The kids, you know, the next generation, the AI natives, they're not going to ask these questions because they're going to grow up in a world
Starting point is 01:41:58 without that constraint, you know? Like they'd say like, why, why would I share? Like, well, you have to have a shared library. Like, why would I share, well you have to have a shared library. Why would I share my library when I could just tell my thing to make a new library? Why would I refactor when I could just rewrite? Or why would I maintain when I can replace? Like when the cost of replacing, maintain your car because it's expensive to replace it,
Starting point is 01:42:23 but if the cost of replacing is approaching zero, why maintain? I don't know, you start to ask a lot of questions that we've assumed were like fundamentally unaskable, right? Because all the calculus changes. I'll give you one funny example. I've started to build this because I posted it and people got riled up about it, but it's a little bit philosophical. But basically a lot of stuff we do when we work with computers is about putting things
Starting point is 01:42:56 in a certain form, in a certain structure, so the computer can work with it. Example: static site generator blog posts. Right now the format is you have to have a YAML front matter, whatever it's called, right? The slug, publish date, and whatnot, and then the text of this. Now with LLMs, you could technically write a blog post with anything. Like, you could have a folder called my posts and it could be a screenshot of a text message. That could be one blog post. You could have a screenshot of a Post-it note, a photo of a Post-it note, a markdown file, a text file. And then you ask the LLM,
Starting point is 01:43:33 here's my five blog posts. Here's the basic template I want for these blog posts. Generate me my blog. And you don't have to put any structure in it because these LLMs are now these, I call them the fuzzy to non-fuzzy adapters. You can throw pictures, screenshots, audio messages, you know, videos, anything at them and they can spit out text. And when you think about it, it's sci-fi and philosophical, right? But when you think about it, it's how many beautiful things can we build when we don't have to think in strict database column schemas, you know? Where we can say, well, a blockboss could be anything.
Starting point is 01:44:15 It could be a picture, a video, an audio recording, you know? And we now have a tool that lets us transform this into another form. We don't have to put it in a specific thing. That is interesting. So what exactly are you building then? I started to build a static site generator that at build time will just look through a folder called posts and generate, out of images and videos
Starting point is 01:44:39 and audio files and screenshots, an index of blog posts and put them in a format. And I had the prompt is for each blog post, modify the layout so it matches the content of the blog post, you know, like make it look serious or make it look fun or whatever it is. And I think that's just something we've never had in computing or software where you could say, make this one look like the handwriting, you know, make or, you know, it's just orderly handwriting.
Starting point is 01:45:09 Is it fun handwriting? Is it a little throwaway note? Make the page look like this. And it comes up and probably does something, you know, and that makes it look like this. Yeah. So is it non-deterministic then? Are you going to have? Yeah, it's on deterministic.
Starting point is 01:45:22 It's how are you going gonna have a reverse chronological listing of posts? Isn't that what a blog is? I mean, does that also have to be non-deterministic? Okay, then the- What is a blog? I mean, a blog is a dumb example, but it's like, then the filenames have timestamp in them,
Starting point is 01:45:37 or whatever it is. But, I mean, still, right? Like it's a large step up. Yeah, I'm with you. And it's for sure. Because I don't like YAML front matter. I only do it because the computer likes it, you know? Yeah, exactly.
Starting point is 01:45:48 That's what I mean. That's what I mean. And what I did was, this was two months ago or something. Somebody sent me an email and they were like, hey, on your personal website, it still says you work at Zed, but I heard you back at Salesforce Graph. And I was like, oh, you're right.
Starting point is 01:46:02 And I opened my website with AMP and I took a screenshot of that email, paste it into the agent and said, fix this. And it went and it found that bit on my website where it says where I work right now. And it updated it based on that screenshot of that email. And I sat there thinking, isn't that amazing that I can take a screenshot of an email and something changes based on it. And I send it back that person who sent me the email and their personal was like, I'm sure you could have done it faster.
Starting point is 01:46:31 And I'm like, don't you see, man? Like, this is crazy. I could build you something where you forward an email and it opens a pull request on your website. You know, that's not hard to build anymore. Yeah. Back in the day, it was a startup, right? Exactly. That was somebody with a pitch and seeking funding.
Starting point is 01:46:49 Yes. Now it's, uh, yeah, whatever. We got here though, specifically by Jared, you asking about open source, like this entire last 35 minutes ish has been about the question of open source. Yeah. More or less. And at first, you know, I almost said everything by default is open source now then, because like if you can generate every line of code, then the,
Starting point is 01:47:13 the critical factor is not what the code that gets produced is the thought and the intellectual property potentially that makes it proprietary or not around that idea. You know, if by default then everything is just open source. But then now as the conversation goes on, it's like, well, what if open source doesn't matter anymore because when we need something, we just make it. There's gotta be some living standards though. The value in the source. I mean, what's where's value in the source anymore?
Starting point is 01:47:44 Right. Like that's what I'm traveling some source out there because the robots need to learn more. That's yeah, well that's a second-order effect right? Like if we all say there's no value in sharing stuff anymore then the dwell dries up, you know, to make these models better. I think we'll find value in sharing things though still yet. I mean, I think there will still be libraries and frameworks that will get made and maybe at some point the source will be just to codify way of the LLMs using this stuff and there'll be a user like we're a user and We're only user by proxy in the fact that we care about
Starting point is 01:48:25 The name that gets associated to a source They don't want source necessarily, they want tools. Like for training them they need source, but for their actual building they need tools more than source. So I don't think we can answer this question in the next three to five minutes, let alone the next three to five years. I feel like this is a generational question. Like if you go out now 20 years and say,
Starting point is 01:48:44 what is the impact of open source on the world in 2045? Will it be dramatically different now? I think it might be. I'm not sure. I'm not sure. Can I make an optimistic prediction? Sure, please do. I think the value of what's creative and truly human and tasteful and based on experiences, unique experiences, I think the value of that will rise. I think if there's one thing that only you, only you in that moment with that combination of this model and that model in this scenario can produce, I think that's still valuable. But like a really piece, a really creative piece of code, a really insightful
Starting point is 01:49:31 algorithm, really efficient, good data structure, you know, but yet another two-way framework or I don't know, you know, like a date parsing library or something, or a one-off function to check temp.de existence or something like this, right? I think the value of that will diminish, but the value of uniqueness and taste and the creativity will stand out. I think that's a good note to end on.
Starting point is 01:50:04 Don't you think, Adam? I think so. I think the only thing good note to end on. I think Adam, I mean. I think so. I think the only thing I would add to really this conversation is just that it seems like perspective is in order because when you're closer to the problem, the specifics matter more. Like, for example, future humans may say, you know, when human validation was based upon lines of code or characters written in their life or whatever? Like, and now it just like, it doesn't matter because that doesn't, it's not a metric that
Starting point is 01:50:33 matters to track when you zoom out, right? When you zoom in, that matters. When you zoom out, it's like, well, you, you measure things based upon the broad strokes versus the specific definitive on the zoom in. Well said. Dorsen thanks so much for coming over to our podcast and sharing. I know you've been on go time a few times.
Starting point is 01:50:55 We've known you and known of you especially back when you're writing those books about compilers and stuff but I haven't had you on the changelog so this was a joy. I'm fascinated. I'm inspired. I'm excited. More than scared. Sometimes I'm scared but today I'm more excited about the future with these agents helping us do better, cooler stuff faster. I mean mostly good, right?
Starting point is 01:51:18 Thank you for having me. And small anecdote is I told my wife before we started recording, I'm going to go record this podcast. And she's like, what podcast is it? And I said, it's the first podcast I've ever been on in 2016. You know, back then Go Time. Yeah, totally. Was Adam. Yeah. Yep. That's awesome.
Starting point is 01:51:38 Always happy to hear origin stories that include us. You know, we've been around a while, so we have a few of those. I remember that podcast, man. That was a long time ago. It was a long time ago. Yeah, it's nice. It's almost a decade.
Starting point is 01:51:51 It's good to be friends till yet all these years, you know? Yep, it's pretty cool. That's what it's all about, man. Right there, not the last time. We'll have you back. Appreciate the conversations, really enjoyable. Very much, very much. Thank you, Thorsen.
Starting point is 01:52:04 Thank you. Breaking news about our Denver live show. Not only will Breakmaster Cylinder be in attendance, BMC is now officially performing some fresh and some classic change log beats, live on stage, 30 minutes prior to our 10 a.m. start. So, if you were planning on arriving just before the show starts, maybe get yourself to the Oriental Theater a little earlier.
Starting point is 01:52:31 And if you haven't bought your ticket yet, you now have one more reason to get in on it. Fifteen bucks cheap and free for Change Log Plus Plus members. Find a way to get to Denver on July 25th and 26th. The FOMO is very real. Learn more at changelog.com slash live. Thanks again to our partners at fly.io and to our sponsors of this episode. Retool agents are waiting to work for you. Go to retool.com slash agents and depo 10x faster build times at depo.dev.
Starting point is 01:53:03 That's it. This show's done. But we'll talk to you again on Friday, and we do hope to see you in Denver. Chainsaw.com slash live. So Thanks for watching!
