The Changelog: Software Development, Open Source - We're all Builders now (Interview)

Episode Date: June 5, 2025

We're on location at Microsoft Build 2025 with Amanda Silver, Corporate Vice President of Microsoft's Developer Division. Amanda leads product, design, user research, and engineering systems for some of the tools you use every day. We discuss the latest AI announcements from Microsoft at Build 2025, how AI is reshaping development tools, what's next for VS Code, TypeScript, GitHub's evolution, and even emerging editors like Windsurf that are forking the VS Code ecosystem.

Transcript
Starting point is 00:00:00 What up nerds, it's your favorite podcast, The Changelog. Yes, this is Adam Stacoviak, Editor-in-Chief here at Changelog, and today Jared and I are on location at Microsoft Build 2025. We're talking to Amanda Silver, Corporate Vice President for Microsoft's Developer Division. She leads product, design, user research, and engineering systems for some of the most awesome dev tools we use every single day. We discussed the latest AI announcements from Microsoft at Build 2025, how AI is reshaping development tools, what's next for VS Code, TypeScript, GitHub's evolution, and even emerging editors like Windsurf that are forking the VS Code base. A massive thank you to our friends
Starting point is 00:00:45 and our partners at fly.io. That is the home of changelog.com, and us and robots alike love the platform and we think you will too. Learn more at fly.io. Okay, let's build. Let's build.
Starting point is 00:01:01 Let's build. Well, friends, Retool Agents is here. Yes, Retool has launched Retool Agents. We all know LLMs. They're smart.
Starting point is 00:01:17 They can chat. They can reason. They can help us code. They can even write the code for us. But here's the thing. LLMs, they can talk, but so far, they can't act. To actually execute real work in your business, they need tools. And that's exactly what Retool Agents delivers. Instead of building just one more chat bot out there, Retool rethought this. They
Starting point is 00:01:38 give LLMs powerful, specific, and customized tools to automate the repetitive tasks that we're all doing. Imagine this: you have to go into Stripe, you have to hunt down a chargeback. You gather the evidence from your Postgres database, you package it all up and you give it to your accountant. Now imagine an agent doing the same work, the same task, in real time, and finding 50 chargebacks in those same five minutes. This is not science fiction. This is real. This is now. That's Retool Agents working with pre-built integrations
Starting point is 00:02:12 in your systems and workflows. Whether you need to build an agent to handle daily project management by listening to standups and updating JIRA or one that researches sales prospects and generates personalized pitch decks. Or even an executive assistant that coordinates calendars across time zones. Retool agents does all this.
Starting point is 00:02:32 Here's what blows my mind. Retool customers have already automated over 100 million hours using AI. That's like having a 5,000 person company working for an entire decade. And they're just getting started. Retool Agents are available now if you're ready to move beyond chatbots and start automating. I want to read your title because, I mean, you just can't memorize that thing. I know. Sorry. I apologize.
Starting point is 00:03:33 It's a spectacular title. I love it. Let it loose. Let it loose. Well, today we're honored to be joined by Amanda Silver, CVP. That's Corporate Vice President for Microsoft's Developer Division. You're the head of product, design, user research,
Starting point is 00:03:49 general manager of engineering systems. That's a lot. It is, it's incredible. What does it all mean? What does it all mean? I mean, I think at the end of the day, there's a group inside of Microsoft that's focused primarily on developers
Starting point is 00:04:04 as our primary customers. And so when you think about what do we actually, what does Microsoft actually deliver to customers? Visual Studio Code, Visual Studio, you know, .NET, TypeScript, you know, our Azure application platform, you know, our DevOps solutions, right? We work very, very closely with the GitHub team, do a lot of product integration across our products. So that's kind of what the gig is. Mm-hmm. How do you feel about leading people?
Starting point is 00:04:33 Is it fun for you? I love leading people. I mean, I've actually done it since fairly early in my career. I think maybe two or three years in, I started to be a manager of people. When I first started at Microsoft, I was working on the interop layer between .NET and unmanaged
Starting point is 00:04:53 code. I think about it as I started at the systems level, and then I started working more and more on programming language design and API design. Then from there, I got more involved in the editor experience, the debugging experience, and that's about when I started to become a manager. And you know for me I think initially it was a way for me to have more control and more you know more influence over the product so that was exciting. But I think over the years I think I've you know I think anybody who's been in the industry long enough recognizes
Starting point is 00:05:27 that software is 95% about people. And how do you construct the team and how do you motivate them day to day? How do you have the right balance in terms of trying to push them to do what they may not be ready to do on their own versus when you take the temperature off and let them recoup from an intense period. So I find a lot of joy in the act of management. And I also will say, I've come across a lot of managers in my history that are much more self-serving. That's their primary objective,
Starting point is 00:06:07 right? And empire building, a lot of people call them. And for me, like, I just, I think it's really important that I maintain my personal integrity and I don't, I have to check my ego a lot to make sure that, you know, I'm not putting myself before my people. That's tough sometimes, right? I don't know, I've never led quite as many teams as she's led, I'm sure. So you said empire building, and I date back to, all the way back to when I, as a young college student,
Starting point is 00:06:38 used to refer to Microsoft as the evil empire. Yeah. So just full confession, you know, the M dollar sign people? Yeah. And over the years I've changed, Microsoft has changed it seems, and the embracing of open source, which has been like kind of a decade long story
Starting point is 00:06:52 and perhaps more has been an amazing thing to watch from the outside and like see my relationship to Microsoft change over the years. And I think you've been along for the entire ride. Yeah, I mean, that's been kind of core for my career at Microsoft in a lot of senses. And even when I started at Microsoft in 2001,
Starting point is 00:07:13 let's back up for a second. When I started- Okay, that's my apologies right there. When I started at Microsoft, I had two older brothers who were both in the tech industry and were part of the dot-com bubble bust, right? They were each on their third job or something like that by the time I graduated from college. And so I thought when I was graduating, I thought I was going to be a scientist, because my dad was a scientist,
Starting point is 00:07:40 but I thought if I'm going to go get a PhD or whatever, maybe I should try industry a little bit. And so I just on a long shot just went to the tech career fair and like handed out my resume to different companies that I thought would like pay me decently in cash because I wasn't interested in stock at the time. And Microsoft seemed like a relatively stable company and Google at the time was a startup. Right. And I was like, not going to apply. Too risky. Too risky.
Starting point is 00:08:10 So I ended up at Microsoft and, you know, I think my first decade or so, I was really focused on enterprise software, .NET primarily. Right. And it was like the Java versus .NET, you know, tension. And that was kind of the main primary competition that we were thinking about at the time. But open source wasn't in the vernacular, it wasn't a thing at the time. At the same time, I think it was .NET, ASP.NET that was the first to actually include open source in the product, jQuery, right? Everybody had to use jQuery to be able to manage the different browser experiences. And so we first started to ship jQuery as part of ASP.NET, I guess in the
Starting point is 00:08:53 2008, 2009 kind of era. At the time, I kind of moved into the JavaScript space. Around 2009, 2010, I started to work on the Chakra JavaScript engine that was inside Internet Explorer at the time. And that was like a really big change in terms of the day-to-day competitive atmosphere that we were working in, right? Enterprise software moves much more slowly
Starting point is 00:09:29 than the pace of the web at that time. And so it really changed the cadence for what we had to think about. And that's actually when we started to work on TypeScript. TypeScript, originally, when we first started, was really trying to answer the challenge that we had inside Microsoft, which was that we were building what we now call M365,
Starting point is 00:09:49 which is like the web experience for Excel and PowerPoint and SharePoint and everything. But we had this challenge that we had a lot of developers inside Microsoft that had deep, deep, deep familiarity and decades of experience with C++ and C sharp, but they really didn't know how to build for the browser. They didn't know how to build complex applications in JavaScript. And actually at the time, the industry didn't really either.
Starting point is 00:10:16 There wasn't really- Do we now? We're a lot better. We're getting there. But a lot of the challenges were really about encapsulation and modularity and how do you create modules. And so that's where TypeScript came from. And that's when we, TypeScript was really our first open source project that we did
Starting point is 00:10:36 fully open source from the get go. And I remember in that era, it took me like six months to convince the muckety mucks that we should. How did you finally convince them? What was the winning argument? I think the argument was, so our objective at the time was to make sure that our internal developers didn't end up on a different path than the broad open source ecosystem
Starting point is 00:11:05 that was benefiting from the evolution of JavaScript. Right? We had decades of experience at that point of building our own C++ compiler that kind of became really more the internal Microsoft C++ compiler and was divorced from other paths of C++ compilers that were being used more broadly in the industry.
Starting point is 00:11:29 And there was a challenge in terms of trying to keep them aligned. And so over time, the internal Microsoft developers didn't get to benefit from what was happening in the broad industry on the C++ compiler. So with that experience, looking at this problem of how do we address this large-scale JavaScript solution challenge, we decided, first of all, we have to kind of stay in line with what the broad web industry is going to end up using. But then secondly, we also thought
Starting point is 00:12:02 that if that was going to be the case, if we wanted to create something where we call it first party equals third party, meaning that our developers internal to Microsoft use the same tools that our third party developers use, our external developers use. If we wanted to accomplish that, then we had to build something that the JavaScript community would actually use and like. And at that time, there was still a fair amount of hostility towards Microsoft in that community. And so we absolutely had to launch it as open source to be able to introduce TypeScript to the world
Starting point is 00:12:44 and start to get traction. From the inside, describe the hostility. So what do you say? The hostility from the community? Like, enumerate over how hostility shows up and manifests. How do you see it? You know, from the developer community at that time.
Starting point is 00:12:58 Well, I mean, you know, there's many different forms, right? There's some folks who would never even consider anything from Microsoft because there's just some kind of halo effect of history or something like that that they would refuse to use anything from Microsoft in their stack period and that they won't even look at how good the technology is. Then there's kind of like the indifference or treating Microsoft products as though they are irrelevant and they just again wouldn't use it or consider it because Microsoft couldn't come up with it.
Starting point is 00:13:37 It's kind of a disbelief, right? Microsoft couldn't come up with anything useful. And then there's the more kind of common conversations that we have inside the industry. I think everybody has them, which is more of the debates, right? Where it's like you end up with one developer who likes the technology and can speak about the advantages of the technology and another developer who has another argument and they dislike the technology and they will enumerate all of the ways that they dislike the technology, but really at the core of it, it's really some other kind of emotional
Starting point is 00:14:15 thing that it's not actually on the technical merits, right? I think in the past there were a lot clearer lines to draw, because you kind of could like live entirely in Microsoft's world or live entirely in the open source or Unix-y world. And now it's just much more of one world where it's like, you know, even if you are skeptical of Microsoft and try not to use some Microsoft open source,
Starting point is 00:14:40 it's gonna be used around you and probably forced upon you perhaps by your teammates or something. And at the same time, I mean, I think Azure made that change to a large degree between open source and Azure in the cloud. It's like, yeah, it's kind of ubiquitous at this point. Well, that was the next thing that happened after TypeScript is, you know, we started to get a little bit of traction with TypeScript. And actually there was a fantastic partnership that we had built with the Google Angular team at the time
Starting point is 00:15:05 that actually kind of got TypeScript, you know, in some senses, no programming language starts to get traction until it has frameworks, right? And so it was the Angular team at Google that we had a really close relationship with in building TypeScript in that era. And this was, again, 2011, 2012. There was a lot of fickleness in the web community
Starting point is 00:15:27 in terms of different front-end stacks, right? It was Angular, then React, and then Vue, and so on and so forth. But, like, you know, six or seven different front-end frameworks kind of made it through in those four or five years that were very, very popular. And what became fairly obvious to us was we needed to create something that was a little bit more durable that would be able to survive those different epochs of front-end frameworks. And I think over time TypeScript kind of became that thing, which was great, and started to get more of the front end community to use something in our stack, almost to their chagrin or reluctance or whatever, but I think that started to open
Starting point is 00:16:13 the door. And then- And then VS Code kicked the door open. And then- Am I right? And then we introduced VS Code. And I think that, you know, in a lot of senses, TypeScript and VS Code actually went really hand in hand,
Starting point is 00:16:29 because part of what TypeScript was doing was creating static types over JavaScript. And the tooling for JavaScript wasn't particularly good at the time, because it was very hard to build great tooling for a dynamic programming language. And what TypeScript did is it created a way that we could create fantastic tooling whether you were writing code in TypeScript or in JavaScript, it didn't matter.
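To make that concrete, here is a tiny, hypothetical TypeScript sketch (not from the episode) of what static types buy the editor: the annotations alone give a language service enough information to offer completions and catch mistakes before the code ever runs.

```typescript
// Toy example (not from the episode) of static types informing tooling.
interface User {
  id: number;
  name: string;
  email?: string; // optional: tooling knows this may be undefined
}

function greet(user: User): string {
  // Typing "user." in the editor yields id / name / email as completions,
  // driven by the same TypeScript language service VS Code uses.
  return `Hello, ${user.name}!`;
}

greet({ id: 1, name: "Ada" });      // OK
// greet({ id: "1", name: "Ada" }); // flagged before runtime: id must be a number
```

The same language service can also infer much of this for plain JavaScript files, which is why the editor experience improved for both languages at once.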
Starting point is 00:16:58 We could create great tooling based on the TypeScript language service in VS Code. And so the hypothesis was that if we created a great developer experience for TypeScript and JavaScript in VS Code, that every developer that's a web developer, doesn't matter what you do on the back end, everyone has to do a little bit of JavaScript. And so if we created a great developer experience for JavaScript, that would open doors, that would ultimately allow us to kind of
Starting point is 00:17:29 pitch a larger tent that brought more developers into the fold and help them to consider Azure or any of our other services. In retrospect, it's kind of a master stroke, you know. I'm not sure if you masterminded this or Satya did or somebody else, or if it just happened kind of organically over time as things tend to do, where it's like,
Starting point is 00:17:50 these dominoes just lined up and really did, I mean, change the brand and the developer relationship to Microsoft over time. Well, I mean, I will say, like, you know, certainly while I was present in helping and involved in shaping the strategy, I definitely cannot take credit for the overall direction.
Starting point is 00:18:19 And whether it was Anders Hejlsberg helping shepherd TypeScript to come to the fore or Erich Gamma and team kind of building VS Code, you know, just incredible people that I've gotten to work with over the years. Well, friends, you know, I'm excited about the next generation of Heroku. Who isn't? Well, I'm here with Chris Peterson, Senior Director of Product Management for Heroku at Salesforce.
Starting point is 00:18:53 Chris, tell me, why should developers be excited about the Fir platform? What does that mean to you as a Heroku developer? It means a few things. One, it means that we're going to be working on investing in our ecosystem. One of the standards we're adopting, OpenTelemetry, is a big step up over the way Heroku's done metrics traditionally. We had a piece of technology called L2Met that converted logs into something that kind of approximated OpenTelemetry metrics. But now there's like a real standard, there's like a real toolkit, and there's a whole ecosystem around OTel.
Starting point is 00:19:23 And so being able to have OpenTelemetry dashboards out of the box and our partners that tap into all of your Heroku telemetry, so that you don't have to go build a dashboard and you're not actually constrained to what we provide on our dashboard, is exactly the type of value we're seeing out of this. So it's tapping into the ecosystem effect. Similarly, Cloud Native Buildpacks. One of the features that I'm excited about is supply chain security
Starting point is 00:19:44 that we're going to be working on later this year, but that was an open source contribution to the CNB project itself. Bloomberg actually contributed support for software bill of materials generation. And so the things that I'm excited about are the things that developers are excited about, which is we're not going it alone. We're not building a proprietary solution. We're using the same tools and technologies as other superstars in the industry are, and we get to play into that ecosystem effect. A huge part of Heroku's value has always been the Elements marketplace, being able to bring in databases and key value stores and telemetry and observability
Starting point is 00:20:16 tools. And so renewing our investment in open standards lets us renew our investment in our ecosystem, in our marketplace. Very cool. So how is this next generation and what is coming changing the game for you and the product team? To me, on the product team, it lets me put out a roadmap that's way more ambitious than what I could do if we were trying to build some of the primitives ourselves. Kubernetes has really established networking technology. That means our roadmap has a lot of networking features
Starting point is 00:20:41 that our customers have been asking for for a while, that we're going to be a lot slower to build on the Cedar stack than they are on the Fir stack. And so you should be excited about the open standards and the modernization there on day one. But the thing that I'm excited about is what we can do by the end of the year in terms of roadmap and features, not just getting to parity on some of the more nuanced
Starting point is 00:21:01 features that we have on Cedar, on Fir, but also the new things that we can build, taking advantage of AWS VPC endpoints, which is something that Salesforce customers have wanted for a while. There's a huge number of these features that just wouldn't be possible to get done this year otherwise, and that's where I'm excited.
Starting point is 00:21:18 Very cool, I love that. Well, friends, the next generation of Heroku, I'm excited about it. I hope you're excited about it. I know a lot of people who have been really, really looking forward to the next thing from Heroku. To learn more, go to heroku.com slash changelog podcast and get excited about what's to come for Heroku. Once again, heroku.com slash changelog podcast.
Starting point is 00:21:46 So they have VS Code and TypeScript. Yeah. And GitHub. Yeah. Well that was a little bit later. Yeah, a little bit later. Maybe I'm jumping ahead. But I'm trying to get to present day, because here we are at Build 2025.
Starting point is 00:21:58 That's right. You know, I was counting the mentions of agentic in the keynote this morning, because I'm a nerd like that. Yeah. And I got to 187. Okay, yeah. I even left off things like model or MCP, and I left Copilot off, which is like, wow,
Starting point is 00:22:12 because I feel like you're just going to get to a thousand. You know, I can't count that fast. Well, you could always take the transcript, give it to an AI and it can count it. That's true. I wanted to do it live. I thought about that too. It's going to be easier later. But you know, you've got to keep the mind going as well. VS Code, I want to call it a Trojan horse, although it has negative connotations,
Starting point is 00:22:34 but it's kind of this thing where, that you've gotten out there now as an open source project and as a product that, I mean, how many millions of people use it, right? You probably know rough numbers. We have 50 million monthly active users, 50 million across VS Code and Visual Studio together. Okay, close to two of them.
Starting point is 00:22:54 And which one's bigger? VS Code. VS Code's bigger. Significantly. So you have both arms of that, you have like the ID people in Visual Studio and then you have the text editor people and VS Code. And through those platforms, you can launch all this other stuff, right?
Starting point is 00:23:10 Like all this copilot stuff. Is that how you look at it, or is that just how I look at it? I think that we can kind of bootstrap a lot of developers to get more familiar with copilot for sure. And I think that in a lot of senses, the code editing experience, it really, the table stakes have changed, right?
Starting point is 00:23:29 You have to have AI as part of the code editing experience. So I don't know if it's as much as us going in and forcing it on everyone as much as it is. This is what is now expected of a modern code editor. Right. And I don't mean forcing it, I just think that you have this platform in which you can launch other stuff. And Copilot really has had a great opportunity there to just be like, bam, right there. You're already using VS Code. It works great in VS Code. Click the button. Bam.
Starting point is 00:24:00 I think that in a lot of senses, Microsoft has always been a developer-first company since the MS-DOS and BASIC days. That's been where Bill Gates' heart was at. And I think that as it moved through Ballmer to Satya, we've consistently had great sponsorship for our developer tools and platforms throughout history, right? And I think that the reason for that is because at the core, the CEOs at Microsoft have always thought about its reason to exist as a platform company, right? We are building a platform that other people build
Starting point is 00:24:42 incredible things on top of, and to do that, you have to have developers as your focus. And so I think of the work that we do in the developer division as it is a platform to bootstrap new things, new platforms, new adoption of new tools, new workflows, no question. But at the same time, there's lots of things that we've tried to launch in that way, and it didn't take.
Starting point is 00:25:08 So it helps, but it's not like a cure-all. Yeah, exactly. Still gotta be good. Yeah. Well, I think it's telling, though, that it happened with VS Code. It's 50 million developers across two different editors, combining them, I guess, that way.
Starting point is 00:25:23 And that's a lot of developers you have, a captive audience. I think that's what he's alluding to, is like you have a captive audience to say, okay, as you launch, do things, or even breakthrough, like Copilot, for example, that you have a lot of developers that already have attention, and it's not that much harder to launch. It's distribution for an idea.
Starting point is 00:25:41 I would say, actually, that developers are one of the most empowered audiences across all of the audiences that we target, right? That they vote with their feet more than any other audience that exists, whether it's consumer, enterprise, et cetera. Enterprise, totally different way that they drive decision making.
Starting point is 00:26:04 Developers, it's an end user consumer audience that basically chooses things based on what's working for them, right? And we see all the time developers picking up new coding editors, picking up new frameworks, they, you know, they're technology enthusiasts, right? That's actually one of the things that makes the job so rewarding is, I can launch something at the beginning of the day and by the end of the day, I know if it was a hit or a dud because I have so many early adopters
Starting point is 00:26:32 and the way that we think about it is we have to win their loyalty every day with actual great product experiences. So this reminds me of a post I actually put in Changelog News today. I think Avdi Grimm wrote this, called Developer Tooling is a Lousy Business.
Starting point is 00:27:01 And he actually enumerated some of the points that you're making. You're saying it's great for us as developers, right? Because we have the agency we speak of and we're really, I guess, adept at handling our tools and changing our tools. And that makes us somewhat of a fickle audience because what have you done for me lately? That's right.
Starting point is 00:27:18 And so in that sense, I guess, how is Copilot changing? Because I remember when it first came out and it was auto-complete, and it kind of was game changing in that way, but it was really a non-deterministic autocomplete that was good, had its problems, and has grown since then. You guys continue to iterate, make it more and more awesome, and this year you're announcing a lot, with it being a coding agent. Can you tell us all about it? Yeah, well, so you know, in a lot of senses Copilot has gone through the same epochs that AI itself has gone through.
Starting point is 00:27:47 Over the past couple of years, the AI basically got to the point where it was good at doing token prediction and things like that. But then over the last couple of years, we introduced chat, and that allowed you to have a conversation with a knowledge base based on retrieval-augmented generation. And then just over this past year, it started to get to the phase where it could actually start to take actions, because the models themselves got to the state where they could actually reason over what they were
Starting point is 00:28:22 working on. And so for Copilot, we started with completions, and the completions at first were just a single line of code, and then they got to full function bodies, and then they got longer, maybe a whole file, and then last year we introduced multi-file edits, right? So that you could actually make multiple changes to your code base at once based on a basic prompt.
Starting point is 00:28:46 We introduced chat capabilities, which could be based in the context of the project or the source code that you're already working in, in the repo. And that kind of started to change the game, but I would say nothing has accelerated the capabilities as quickly as what we introduced with Agent Mode this past February, and then it's kind of rolled out over the past few months. But Agent Mode, it is shocking what you can accomplish in just a really short amount of
Starting point is 00:29:21 time just letting it go. Basically what you do with that, and it's also had its own acceleration, but what you do with Agent Mode is you go into the prompt, into the Copilot Chat experience, and you switch it to Agent Mode, pick a model, whether you want, you know, Sonnet 3.7, or you want, you know, GPT-4o, or whatever you want to use,
Starting point is 00:29:43 and then you give it a prompt, like, add tests for this particular project, and it will then iterate and self-evaluate as it's iterating. So for example, in my demo earlier today, all I did was I gave it source code repo for a website, and I said, test this solution, and write some tests, write some integration tests for me.
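The kind of integration test that sort of prompt tends to produce might look roughly like this, a hypothetical sketch using Playwright's TypeScript API; the URL, link names, and headings are placeholders, not anything from her actual demo.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical integration tests of the sort an agent scaffolds with Playwright.
// The site, link names, and headings below are made up for illustration.
test('home page renders and has the right title', async ({ page }) => {
  await page.goto('https://example.com/');
  await expect(page).toHaveTitle(/Example/);
});

test('navigation to the docs page works', async ({ page }) => {
  await page.goto('https://example.com/');
  await page.getByRole('link', { name: 'Docs' }).click();
  await expect(page.getByRole('heading', { name: 'Documentation' })).toBeVisible();
});
```

That lines up with what she describes next: the agent also brings up the test framework and the site itself before walking the different user journeys.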
Starting point is 00:30:09 And it basically started to do all of the automation to bring up the Playwright automation framework, bring up the website, start to traverse all of the different paths that it could go through in terms of the various different customer journeys on the website. And then it started to generate the tests themselves, right? And so what used to take me hours, maybe a half day, I can now get done in just a couple of minutes using Agent Mode. And so I think that's really like changing the equation in terms of what these tools are capable of. Yeah, it's really ramping up the potential gains. One thing that I fear with that is the more you change,
Starting point is 00:30:52 the harder it is to find the thing that went wrong. You know, it's like the needle in the haystack problem, where if it's an autocomplete of a single line, cool, I can just see what it says. If it's a function, I can kind of read through that real quick or maybe go in and make changes. If it's a file, okay, now it's getting big, but as it's like multi-file changes with all of this stuff going on, and like it's been thinking for four minutes and I'm not sure what it was thinking. They'll tell
Starting point is 00:31:14 me if I want to look at it, but you know, here's this big change we're gonna make. Changelog. Exactly. And now I got my code review step basically, and there's so many things there that I just get a little bit apprehensive of, like am I gonna be able to find potentially where that one needle is in the haystack that might make the whole thing go haywire? Are there guards, are there helps, are there concerns that that might be an issue? I think that's something, first of all, that we need to think about in the design of the tool itself, right? We need to make sure that we are not introducing something that creates so
Starting point is 00:31:46 much cognitive load for you to ingest in terms of change management that, like, it's beyond your ability to reason over, right? So we are very intentional in terms of like how we guide the prompts to kind of control just how much code is actually going to be generated. There are other reasons that we do that as well. But I think the other thing is, you know, we really want to make sure that we are working with the workflow that you're used to, right? So another thing that we introduced today is the coding agent,
Starting point is 00:32:18 where, you know, if you think about agent mode in VS Code as the experience that allows you to give it a prompt and then you can do synchronous supervision over how it's completing all of the tasks in that prompt, the coding agent is asynchronous. And so you can almost think about it as: agent mode is your pair programmer. It's looking over your shoulder, it's accelerating your capabilities. But the coding agent is your peer programmer, where you can assign tasks to it
Starting point is 00:32:49 as though it was another member of the team. And so it's just like in GitHub, in GitHub, if you're assigning an issue to your colleague, you would instead assign it to Copilot and it can asynchronously go and execute that task and figure out what, come up with a plan and go and create a pull request and complete that task for you, right?
Starting point is 00:33:10 And so what that I think enables is for you to then still think about the tasks that you're assigning to the agent in the same granularity that you would assign to another developer on your team. Right. So you could theoretically do a multi-file edit, like build something, prompt it for
Starting point is 00:33:30 that, and then run an agent against that and say, debug this, make sure it's sound code. Correct. So versus, you know, like your question was the fear of the code sucking basically. Right. I got my code review agent, I got my coding agent, my code review agent. Well, don't worry, because in the end you can just unleash your agent and just say, agent, check that. That's right.
Starting point is 00:33:48 I mean, you could certainly have parallel adversarial agents kind of working against each other. You could set up that kind of system where you basically start to build your code. You then want to say, okay, I'm going to have one agent that's going to be focused on the readability and maintainability of this code, and another agent that's going to be focused on maybe the performance and optimization of the code. And you could have both of those things kind of going at the same time. Really, at the end of the day, in a sense, you kind of can create these things by virtue of just
Starting point is 00:34:24 providing the prompt. You just say, oh, here, Copilot, I have a new issue, I wanna optimize the performance for this particular page. Right. It's actually gonna do it. It's gonna do it. I just think of these little functions you can call.
Starting point is 00:34:43 Check this, check that, write tests here, convert, that's really wild, and then do it. Yeah, and we also have, obviously, pull request reviews as a part of Copilot as well. So we also apply it at that large scale for basically every code change that we make inside of Microsoft, or in our open source repos as well, we do kind of AI-based code reviews.
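A sketch of the parallel-reviewers idea from a moment ago: two passes over the same diff with different charters. Everything here is hypothetical (the chatCompletion helper stands in for whichever model client you use); the point is just that the two "agents" largely come down to the prompt you hand them.

```typescript
// Hypothetical sketch: two review "agents" are the same model call
// with different charters (system prompts) over the same diff.
type ReviewFocus = 'readability and maintainability' | 'performance and optimization';

// Stand-in for a real model client (OpenAI, Anthropic, GitHub Models, etc.).
declare function chatCompletion(systemPrompt: string, userPrompt: string): Promise<string>;

async function reviewDiff(diff: string, focus: ReviewFocus): Promise<string> {
  const system =
    `You are a code reviewer. Focus only on ${focus}. ` +
    `List concrete issues with file and line references.`;
  return chatCompletion(system, diff);
}

async function parallelReview(diff: string): Promise<string[]> {
  // Both reviews run concurrently; their findings get merged for the human reviewer.
  return Promise.all([
    reviewDiff(diff, 'readability and maintainability'),
    reviewDiff(diff, 'performance and optimization'),
  ]);
}
```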
Starting point is 00:35:10 Right. I know you're using the classical data science form of adversarial there, but I kind of had a moment where I was like, it would be fun to just pit these agents against each other. I was too. Listen, okay, this agent's not very good at their job. And they always get things wrong.
Starting point is 00:35:23 Now your job is to watch them. Keep an eye on them. He's known to hallucinate. You don't like them very much, you know? I was thinking that too, like, could they either, when will they get upset and will they compete? You know, can you compete them? You both have the same job,
Starting point is 00:35:38 the one who does it better gets the task. You know, their job is to do the task and be excited about getting it right. And then you'll get along. And so they compete with one another. Well, I mean, I think that actually, there's something to that. And I think one of the things that, for folks who are working on different machine learning models
Starting point is 00:35:58 for the application to coding, we have benchmarks, right? Just like in the old days when I was working on the JavaScript engine, we had performance benchmarks that we had to work on. Nowadays, we have benchmarks for these SWE agent kinds of models that are coming up. And so part of what we, there's different kinds of techniques that you go through in terms of kind of
Starting point is 00:36:21 getting the performance of the benchmark to be better because you can optimize for different things. You can optimize for token consumption. So what's the cheapest way to accomplish it? You could optimize for performance. How can I complete the job more quickly? You could optimize for accuracy. And so in a lot of senses, what you're saying is actually not wrong.
Starting point is 00:36:44 when we think about different competing agents that could actually go and fulfill your job, you could have ones that are experts in different types of tasks. Fascinating potential world. You mentioned every line of code, or a lot of the code that you do here at Microsoft. That got me thinking about your own products.
Starting point is 00:37:09 Yeah. And Satya mentioned code modernization. Of course, a huge opportunity, right, for these agents to do. I was just thinking about Anders and the team's port over to Go. Yeah. And how tedious that could be unless you have some agents doing the work for you. I'm curious about your old software projects. I'm thinking Visual Studio is pretty old at this point. Yeah, we have 20 year code bases, 25 year code bases. Are you harnessing these things to modernize things like Visual Studio?
Starting point is 00:37:39 Can I say I'm glad you asked? You can say that, sure. Well, actually one of the things that we introduced today is allowing GitHub Copilot to help you modernize your .NET and Java code. So if you have a dependency on .NET 6 and you need to move to .NET 9, or you have a dependency on Java 8 and you need to move to Java 21, then you can actually use GitHub Copilot to help you do that. And this is like a big deal because, you know,
Starting point is 00:38:08 it used to be that those kinds of jobs, like that's the thankless job that as a developer, you hate that. Didn't somebody spend like 10 years doing that stuff? Yeah, that was hard. We just heard the story. That moment when like your boss comes in and it's like, we need to modernize the code base
Starting point is 00:38:22 and I need you to work for six months on doing this port. Like, oh my God, that's crushing, right? Because you don't get to do anything that's like exciting. Just tedious. You know, it's kind of tedious. And you know, from our customers who are using it, they're telling us it takes care of 70% of the code migration, so at least.
Starting point is 00:38:41 So that's pretty incredible. We have like success rates of like in the 90s of just upgrading your code base. So I think that's one dimension of technical debt that this kind of stuff can take care of. Another is security vulnerabilities, right? And that we're applying this internally at Microsoft very, very broadly. You can basically go look for CVEs, and once the CVEs are found, then we can actually go and file a PR to get it automatically fixed, and then all you have to do is review the code change.
Starting point is 00:39:13 So, like, my hope, I think that the industry is still saddled with, like, incredible amounts of technical debt. And if we can actually go and erase a bunch of that technical debt, like, just imagine how much more innovation could happen. I got to thinking about this idea of the state of human velocity.
Starting point is 00:39:33 We are now doing things that we wouldn't normally do, not because we can't do them, but because they're just so time consuming, like porting a code base to something else. We were spending a decade to do something like that. And now I think we have access to a tool that lets us think big, not because it does the work, but because we can get past the hard things easier and sooner.
Starting point is 00:39:54 So the speed of humanity essentially is, it's kind of like maybe the inflection point of speed for humanity, because we've been going pretty slow. And since the 1900s, you got cars and industry change. And at the end of the 1900s, you have the explosion of computers and the internet. And then in the 2000s, it's social media and all the things. And now it's like AI is here to help us all go
Starting point is 00:40:16 to a new plateau faster. Yeah, I mean, I think in a lot of senses, like we've been struggling, I wouldn't say struggling, I would say limited by the available developer talent in the world. We still have like a shortage of developers,
Starting point is 00:40:34 in my opinion, right? A significant shortage of developers across the world, and especially developers who have higher level systems thinking and reasoning capabilities, right? And so I think that in a lot of senses, like what the opportunity, what this all represents is an opportunity to kind of spread that, that knowledge a bit more broadly and like think about all of the apps that, you know, your organizations wanted you to write, but you never got to because your backlog was so long, right?
Starting point is 00:41:03 I think there's huge amounts of demand and need in the industry overall that is just not getting addressed today, because developers are saddled with technical debt and what they already have on their plate is significant enough to occupy them, right? And so if we can actually erase a bunch of that, I think that's going to make a huge difference overall. But I think the other thing that's super important in all of this is there are aspects of the
Starting point is 00:41:31 developer job that are not awesome, right? Technical debt being one of them. Another thankless task is being on call and responding to live-site incidents, right, in the middle of the night. I know that a lot of our developers do not relish that aspect of their job, right? And I think that a lot of the new capabilities in AI allow us to offload at least the less complex cases
Starting point is 00:41:59 of site reliability engineering response to agents. So one of the things that we talked about today is we introduced a new site reliability engineer, SRE agent, that can actually deal with, like, if I have, if my app suddenly starts to be unhealthy, it can actually go and do a profile and understand if it's, you know, a memory issue, and even start to auto-scale your infrastructure to be able to respond and mitigate the issue.
Starting point is 00:42:29 It may not be the permanent right repair item or fix. That might come a little bit later, but it can do that first line of response. And for us inside Microsoft, we've actually been applying it internally, significantly, and have dealt with a ton of like hundreds, I think we're probably at thousands now, of incidents that have been managed in this way, such that developers never needed to get involved and then they just review their repair items from the recommendations from this autonomous SRE agent. So I think that's pretty cool.
Starting point is 00:43:05 Because that kind of, you know, there's aspects of the developer job that's all about creation. You love that moment when you get to write new code, get to scaffold out new, you know, I always loved scaffolding out new class libraries and frameworks and just doing the first rough-in of the application. That was always my favorite part.
Starting point is 00:43:30 But this means that developers, they don't have to get woken up in the middle of the night and their job doesn't have to be awful. It's like self-healing. And I imagine that, like you said, the fix may not be a permanent one. It's more focused on uptime. The goal of this autonomous agent is not the best long-term fix, it's keeping the application up. So yeah, I mean SRE agents, SREs in general, are always focused on application uptime and like mean time to mitigation, right? So like if there is an incident, how long does it take for them to actually respond to it?
Starting point is 00:44:06 And then they also are concerned about things like costs and operations over time. And I think they can much more meaningfully contribute to how to build healthy, large-scale systems that operate well. So are these agents actually applying the fixes and the person doesn't have to come in and hit that button that says, yeah, yeah, let's go ahead and do this? Did I hear that right? Correct, yeah, I mean, it's all within policy, right?
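A toy sketch of what "within policy" could look like for an auto-remediation agent; every name and threshold here is invented for illustration, and this is not how the SRE agent is actually configured.

```typescript
// Hypothetical policy gate for an auto-remediation agent.
interface ScalingPolicy {
  maxMemoryGb: number;               // hard ceiling the agent may scale to
  maxStepGb: number;                 // largest single increase allowed without a human
  allowedHoursUtc: [number, number]; // window in which autonomous action is permitted
}

interface ScaleRequest {
  currentMemoryGb: number;
  requestedMemoryGb: number;
  nowUtcHour: number;
}

function withinPolicy(req: ScaleRequest, policy: ScalingPolicy): boolean {
  const step = req.requestedMemoryGb - req.currentMemoryGb;
  const [startHour, endHour] = policy.allowedHoursUtc;
  const insideWindow = req.nowUtcHour >= startHour && req.nowUtcHour < endHour;
  // Inside the bounds, the agent may act on its own; outside them,
  // it should escalate to a human instead of acting.
  return (
    step > 0 &&
    step <= policy.maxStepGb &&
    req.requestedMemoryGb <= policy.maxMemoryGb &&
    insideWindow
  );
}
```

Inside those bounds the agent acts autonomously; outside them it is a no, which is the behavior described next.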
Starting point is 00:44:37 So you can decide what limits you want it to have, but if you need to scale your infrastructure up to be able to handle more memory, for example, it could deal with that automatically in the middle of the night without having to ping you and wake you up, and then you come in in the morning and it says, hey, we had this incident, we had this out of memory exception that happened here,
Starting point is 00:45:00 and then you can go investigate it in the morning and go figure out what's the long-term fix. You could almost have it do the fix and another agent check the fix. That's right. You know what I mean? Like, your policy, you alluded to the policy. The policy essentially is an augmentation
Starting point is 00:45:13 of potentially an agent that has different parameters. You're trying to take us out of the job, aren't you? I mean, I'm just saying, that's just where it's going. I don't think any of this is trying... Like I said, like I... Necessarily, but like that's what you can do though. So you can have somebody have, yeah, oversight of the thing all the time.
Starting point is 00:45:32 You can have an agent that just checks it to confirm based on policy. With these bounds, you have agency to do this thing. Correct, yeah. Outside of those bounds, it is a no. Yeah, I think that's a lot. When we talk about what are the skill sets that developers are going to need to have for the future,
Starting point is 00:45:52 it's still the same complex systems reasoning, right? When you think about, if I have multiple policies that I am applying to how to manage infrastructure during a live site incident, right? That in and of itself, that set of policy rules, that is a complex system, and you have to think through, well, what happens when this rule conflicts with that rule? How is the system going to respond? So I think that's where a lot of our brain time is going to start going: thinking about how these different kinds of
Starting point is 00:46:24 systems and agents that are somewhat autonomous are going to interact. Well, friends, it's time to build the future of multi-agent software. You can do so with agency. That's AGNTCY. The agency is an open source collective building the internet of agents. It's a collaboration layer where AI agents can discover, connect, and work across frameworks.
Starting point is 00:47:00 For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows. You can join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The agency is dropping code, specs, and services with no strings attached. Build with other engineers who care about high-quality multi-agent software, visit agency.org. That's agntcy.org. And add your support once again, agency.org, agntcy.org.
Starting point is 00:47:50 Are you thinking about how these agents manifest as a visible layer to this agentic internet and web that was alluded to? Because I'm thinking like there's this idea of like agents available, and so why recreate the wheel? I was really just thinking about the idea of a secret agent. Why were you doing that, Adam? I was just like, that's a really cool name for an agent that like doles out secrets maybe, or just deals with authentication and authorization kind of thing. Like if there was an agent, it was a secret agent. That's cute. And I want to discover that secret agent.
Starting point is 00:48:17 Okay. Well, a couple things I would say about that. First of all, yes, I think in terms of thinking about the common way that you can go interact with all of these different agents, yes, I do think that ultimately our goal is that GitHub Copilot can become that common substrate. It's almost like the omnipresent agentic command center that allows you to interface no matter if you're talking about your infrastructure or your code or your test
Starting point is 00:48:49 or even tasks that I have to do to go work through the bureaucratic layers of our ops team or whatever it is, I think a lot of that can start to become interfaced through working with GitHub Copilot. So yes on that point. I think what you're bringing up around secrets is a great question, right? Sure, I was just wanting to call it a secret agent.
Starting point is 00:49:10 No, no, but here's the thing though. Here's the thing. Inside Microsoft, I kind of do two jobs at Microsoft. One is I'm head of product for our developer division, and the other job is I'm basically the GM for our platform engineering team. And so that means that basically we build all of the tools and all of the policies for all of the
Starting point is 00:49:33 internal engineering teams at Microsoft. So we kind of take all of our third party products and we host them and administer them and extend them and incubate in them for our first party engineers. One of the big things that we've been trying to focus on is expunging secrets from our code bases. Because secrets are dangerous, right? And if you have them in your code bases, then if you have malicious actors, they can go
Starting point is 00:50:00 in and try to exfiltrate your code and get your secrets and then get access to your infrastructure. So generally, we are trying to move towards a system where we do not have secrets checked into our code bases. That said, like that's a great application of the kind of policy that can be applied at your organizational level to say, look if I have any code that looks like a secret, I want you to flag it, I want you to file an issue, because
Starting point is 00:50:26 I want that to be manually checked. And we also have in GitHub Advanced Security detection of those kinds of secrets as well, so that we can actually make sure that you never push it into your code base. Well, then I propose that you make that a product. Done, sir. And you call it secret agent, and you credit Adam Stacoviak as the idea for the name. Just in the fine print at the bottom.
Starting point is 00:50:50 I just like that, it's so cool to have that. Make that a thing. Make that a thing. Now you have three jobs at Microsoft. The third one is to get that big name, secret agent. I'm working on secret agent. It's catchy. I think it's called GitHub Advanced Security.
Starting point is 00:51:04 Okay. Oh, there you go. I mean, it's a cool name, but you know, it's not a brand. Secret agent. Yeah, if you whisper it, it sounds even cooler. That's right. Oh my.
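Back to the secret-flagging policy she described a minute ago: a toy illustration of that kind of check. This is purely a sketch, not GitHub Advanced Security's actual scanner, and the patterns are far cruder than the real thing.

```typescript
// Toy secret scanner: flag lines that look like credentials so a human can review.
// Real scanners use provider-specific patterns and validation; these are illustrative.
const SUSPECT_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                             // shaped like an AWS access key ID
  /-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----/, // private key material
  /(api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{16,}['"]/i, // generic "key = '...'"
];

function findLikelySecrets(source: string): { line: number; text: string }[] {
  return source
    .split('\n')
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => SUSPECT_PATTERNS.some((pattern) => pattern.test(text)));
}

// Usage: findLikelySecrets(fileContents) returns candidates to flag in an issue or block in CI.
```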
Starting point is 00:51:19 So do you think about cascading effects, especially I'm just back on the SRE side of things, and thinking about turning over so much to... Control. Yeah. Yeah. To software agents, which again are, what did Nathan Subba call it? A...
Starting point is 00:51:39 Genius golden retriever on acid. Yes. Which maybe those are the models he's working with. But you know, a thing that you don't ultimately know what it's going to do. Now you can train and fine tune and guard and watch agents watching agents all you want. However, we've seen at scale the internet operating,
Starting point is 00:52:01 distributed systems at scale, things go wrong in ways that are sometimes very quick, very catastrophic, and compounding and cascading. One thing that I think about is, you know, some of these stock market trades, when you have quick corrections or crashes, is because you have software making margin calls and trading with software programs, and they're just,
Starting point is 00:52:22 and eventually what happens is, you know, the New York Stock Exchange actually just stops everything and is like, let's chill out here. And I think there's a potential of that kind of thing with agentic SREs, changing the memory on your VM and then this happens. Like a race condition among agents or something like that. I'm wondering like what kind of,
Starting point is 00:52:41 I know you all think about security a lot and what kind of stuff is out there for just making sure that maybe there's a pull the plug moment or a way that you can get back in the loop and say, okay, let's just chill out here, guys. That's a fantastic question. Well, just the team. I think generally what our approach is, is that we want to make sure that every agentic workflow is completely auditable
Starting point is 00:53:05 so that we can see what the agent is actually executing, look over it in history, that we do have controls over it to be able to say these are the resources that you have access to. And, you know, we also have to think about things like ways of testing the models themselves, whether it's a model that we build or it's a model that you build and for your software. And that's part of what we build with the AI Foundry that allows you to actually evaluate the models against all different kinds of checks, whether that's safety and security checks, whether that's responsible AI, kinds of harassment, kinds of scenarios or language that's inappropriate.
Starting point is 00:53:47 A lot of that is really what has to get built as a part of kind of building and evaluating models themselves. And then we also want to have this common agentic control layer across all of the software that uses agents so that we can see what those agents are actually actively doing, what resources they have access to, and restrict what they have access to if they start to go astray.
Starting point is 00:54:13 And we'll be showing a demo of that tomorrow. Cool. Yeah. Yeah, it's really just a big orchestration problem at the end of the day. And then underneath it, you have your, what do you guys call them? Not the frontier, but the foundation. The foundation. The models themselves. And I think it's really cool how much choice is available
Starting point is 00:54:31 at the model level, because I've even in my personal use and in my coding use have appreciated the ability just to swap these different ones, especially as they leapfrog each other in capabilities. And I think it's really cool how many different models are there. Yeah, I mean, that's one of the things that I think has been really great
Starting point is 00:54:48 about kind of bringing GitHub models into, or bringing the AI Foundry model catalog into GitHub models is it allows developers to be able to go kick the tires on all the different models that are out there. And they all have different kinds of strengths and weaknesses. And in some senses I think about it as a search space of different model characteristics that have a certain price and a certain performance.
Starting point is 00:55:15 And you just need to kind of go find what's the right one for your particular use case. And I think that the fact that we've integrated GitHub models, sorry, AI Foundry model catalog into GitHub models really does allow developers right in there in GitHub to go test these different models in a playground. And then further, because we have in VS Code the ability to like select models as a part of your chat experience, we also find a lot of developers using that
Starting point is 00:55:44 chat experience as a way to go test which model is meeting their particular needs for their use case that they then go right into the application that they're building. That's awesome, no further questions. I don't have anything to question about that, I just think it's cool.
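If you want a feel for what kicking the tires on a few models looks like in practice, here's a minimal sketch that sends the same prompt to several candidates and prints the answers side by side. The endpoint URL, model names, and environment variables are placeholders, and the response parsing assumes an OpenAI-style chat completions shape; check the docs for whichever catalog you're actually calling.

```typescript
// Minimal sketch: send one prompt to several candidate models and compare.
// ENDPOINT, MODELS_TOKEN, and the model IDs below are placeholders, and the
// response parsing assumes an OpenAI-style chat completions payload --
// adjust both for the model catalog you're actually using.

const ENDPOINT = process.env.MODELS_ENDPOINT ?? "https://example.invalid/chat/completions";
const TOKEN = process.env.MODELS_TOKEN ?? "";

const candidates = ["model-a", "model-b", "model-c"]; // hypothetical model IDs
const prompt = "Summarize this stack trace and suggest a likely root cause: ...";

async function ask(model: string): Promise<string> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${TOKEN}` },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      temperature: 0.2,
    }),
  });
  if (!res.ok) throw new Error(`${model}: HTTP ${res.status}`);
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? "(no content)";
}

async function main() {
  // Run the candidates one at a time so the timing comparison is fair-ish.
  for (const model of candidates) {
    const started = Date.now();
    const answer = await ask(model).catch((err: Error) => `error: ${err.message}`);
    console.log(`\n=== ${model} (${Date.now() - started} ms) ===\n${answer}`);
  }
}

main();
```

The price-versus-quality "search space" described here mostly comes down to runs like this: same prompt, a handful of models, and a quick look at latency and output quality before you commit one to your application.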
Starting point is 00:56:00 And I like, and you're not the only one who's doing that, a lot of people are and I think it's great. I love just giving developers choice. Yeah. Versus saying, no, you're going to use this. It's the best one. Trust us. Always.
Starting point is 00:56:11 Always. No, don't do that to me. That being said, which one's the best one? Well, I'm waiting for a .agent to become a TLD. Oh. Oh yeah. That'd be a good one. I mean AI is cool, but dot agent. Secret. Yeah, that's all for this one thing. Domain registered. Yeah. Yeah,
Starting point is 00:56:34 that would be a good TLD because I think there's going to be, you know, there'll be selling agents like you're selling apps at this point. Don't you think? Yeah. If there's a company well positioned to sponsor in some way, shape or form... We just learned about DNS with Anthony Eden and how TLDs come into play. Right. You got a lot of money to host a TLD. Yeah, there is a whole namespace question, right? How do you find all these different agents? How do you figure out which one you want to deploy for this particular problem?
Starting point is 00:56:59 And I think that's one of the big next things. Both agents and tools, there's going to be a catalog. Just like there's a catalog for models, there's going to be catalogs for agents and for tools. And the best way to catalog them is .agent. It really is. I mean, if you're building the agentic web, this open agent web that's happening, .agent. Okay, I'll take it to the top.
Starting point is 00:57:22 Two more questions, if you don't mind. All right. And just come back to me for any final sign off on these ideas and stuff. Love it. I can give you more. Love it. So the advancements every year are interesting.
Starting point is 00:57:35 It's moving very fast. Here we are in '25. If it were up to you, in '26, the three of us sit down. You don't have to unveil any secret roadmaps or anything, but like what would be going on next year this time that you'd be excited about? What would be the next step? We're at agents. Where are we next year? Well, first of all, with agents, we've seen just how powerful they are. So we've seen a lot of promise. I think there's also a lot of peril in there as well.
Starting point is 00:58:08 And what's going to start happening is that, basically a lot of folks are using them without real security controls. And so I think we're going to need to see more ways, to your point earlier, around how do I audit and control all of these agents working throughout my enterprise or my team. So I think that's one thing.
Starting point is 00:58:29 We're going to start seeing a lot more controls in that sense. The other thing is, part of what we're seeing, and this has really only accelerated over the last three, four months as we've started to have capabilities like agent mode in VS Code, is that the capabilities of the software development team are also changing, right? Developers now can do better designs.
Starting point is 00:58:53 They can make things not just prettier but more easily usable without having to have designers involved. Designers can code. Product managers can now code. So what does that mean for how we think about the evolution of the software development team, and what canvases do we think we need for collaboration? I think of it like this: the whole team, the whole group of people you just mentioned, they all speak a language.
Starting point is 00:59:21 Let's just say it's English, for lack of a better term for this analogy. I think of it like these folks have a limited vocabulary, and they're all specialized. And now they have a more shared language spectrum. Because you have the words, we have the words, we could share the words. That's called speaking, of course, as you know.
Starting point is 00:59:42 But I feel like that's what it does. Is like you now give them, they all spoke English already, because they all understood some of the code, some of the design, some of these different features, but now they can all speak a certain language. You know what I mean? But I think that part of what's going to happen is that everyone is going to start contributing
Starting point is 00:59:59 to the code base more easily, right? Which is better for the product. I mean, and the user, obviously. I think so too. But I think we're already starting to see designers contribute code, right, in terms of like, rather than handing a design over the wall and saying, engineer, go implement it, now the designer can actually go.
Starting point is 01:00:18 And if they want rounded corners, they can get rounded corners to happen, right? Or if a PM has an idea around a new feature that they want to experiment with, maybe they can go build an initial prototype with that without having to go bother the engineers to go get that done, right? And I think that starts to kind of raise
Starting point is 01:00:39 interesting questions around how do you think about architecting different kinds of systems. I think, you know, maybe one of the things that might happen is that there's a difference between solutions that are more sitting at that SaaS level versus services that are more at the systems level. I think that that SaaS level is going to start to be something that is more easily extended.
Starting point is 01:01:08 Today, when we think about something like a Microsoft Office or M365 or we think about something like GitHub itself, it's essentially an end user experience, a SaaS that is being provided and it has extensibility points. And those extensibility points are painstakingly crafted, because every time that you expose a new API that allows you to customize the environment, it both empowers your users to go build other things,
Starting point is 01:01:35 but it also represents kind of a boat anchor that you're stuck with the back compatibility for that contract, right? But I think that what this is now kind of enabling is for you to start to build other systems, other automation on top of other systems, much more easily, even without the API to be frank, because now it can just like traverse the DOM
Starting point is 01:01:56 or whatever it needs to do. And I think that means that like a lot more software becomes a lot more extensible. And I think that that means also that the way that you build software itself is going to end up changing over time. The other thing that I think is super interesting is like, we also spend a huge amount of time right now
Starting point is 01:02:13 on the view, creating the view. Sure. Right? Like in an MVC kind of model, right? I think in the future, like, it might just be that you focus on the model, and the view you actually go and specify with a design system. Well, I saw a t-shirt downstairs. It had a bunch of things crossed off. I think developer was one of them and something else, and it just said builder. Yeah. And so I feel like, you know, you've been ahead of the times basically
with people becoming builders rather than just a developer or just a designer, and the word just not pejoratively, if that's even a way to say it, but more so builder. Everybody's a builder and everybody can contribute. Yeah, love it. I have a question that I think is maybe, would have been way better back in the VS Code part,
Starting point is 01:03:01 but as we wrap up. Spicy. Yeah, I don't know if it's spicy or not, but I'm curious, we were kind of debating, you know, VS Code's foothold and is it a strategic advantage and this and that, and the open sourcing of that and whether it's been a good thing for Microsoft. What we've seen recently in the vibe coding space is VS Code forks.
Starting point is 01:03:21 And these forks come up fast and furious and they're getting huge valuations. Self-rebellions. Right, they're self-rebellions. They're getting large user bases very quickly, at least it seems like they are. And in a sense, that's a little bit of a backfire, right? Because now you're providing VS Code, this platform, which is just forkable. And now you're giving competitors an opportunity to just catch up real quick and compete with Copilot in weeks. I mean, they could vibe code their way to a three billion dollar valuation. Does that bother you? Do you even see it? Are these just like the cockroaches that you just get out of your...
Or how do you guys view it? I mean, how do you view it? Kelsey, how do you tell her that one? I love you, Kelsey. I think that there's real innovation that's happening in the industry and a level of competition that we haven't seen for a long time. I have a tremendous amount of respect for the competition that we're seeing right now in the code editing space. I will say, yes, I think that there is a lot that has been built on top of the open source code base of VS Code, and I think that is creating a foothold that allows others
Starting point is 01:04:31 to kind of go and create additional features or differentiation on top of it. But that's, I think, one of the reasons why we've decided to actually open source the GitHub Copilot extension for VS Code and build it directly into the VS Code code base is, you know, we really do think that the AI experience is now table stakes for any code editor. And in the same way that VS Code has been open from the get-go, we think that now the AI capabilities in VS Code also need to be part of the open source code base. And I think that we certainly believe in, whether it's VS Code or TypeScript or anything
Starting point is 01:05:11 that we do in the Azure SDKs or .NET, all of that's open source. So half of what my team works on is done in the open and open source. We certainly see that, especially when you're trying to build a community around a technology, an ecosystem around the technology, building in the open actually creates better products. You know, it allows more people to contribute, whether they're contributing pull requests
Starting point is 01:05:34 and code contributions, or if they're just logging issues and just, you know, they care enough to actually follow through with really great issue descriptions. That's an important way to contribute to the code base and we see that across all of our open source code bases. And I think what we hope to see now is there's a tremendous amount of innovation that's been happening in AI-based coding. What we hope to see is more of the community
Starting point is 01:06:02 contributing back to the VS Code code base to really advance the state of the art for everybody. Yeah, not sure we call them competitors though. You don't think Windsurf is a competitor of Copilot? I would say, when you use one... I'm gonna use one or the other, aren't I? I guess so. But is that your customer, for lack of a better term? Is it that they want all the customers? Don't you think? I don't think so, I think- Let her answer.
Starting point is 01:06:31 Well, this is what I assumed you were thinking. And you said a lot of things, but you didn't say this. But if I read between the lines, and I think if I were you, I wouldn't see them as competitors, because you are focused on developers using VS Code. And I'm not saying they're not developers, but if you're vibe coding, it's not a developer action, it's a different way to get the end result. Yeah.
Starting point is 01:06:53 A developer writes code and cares about the code, whereas the other way is not so much about the code, it's a different way to get there. That's how I look at it. So I think there's kind of two pivots to that. First of all, I would say, like, I think that code editors, generally if your primary task is to write code,
Starting point is 01:07:14 then I would say any of the popular code editors are competitors in a sense. You know, but to the point earlier that we were talking about, like we believe that we live or die by day-to-day product truth. And we have to basically win every developer based on their usage and experience with our products. And we strive to make the best products we possibly can.
Starting point is 01:07:36 I think to your point around vibe coding and how does that relate, I see vibe coding as a really interesting evolution, right? It's not quite... I'm throwing zero stones at that. I'm an agnostic when it comes to all innovation. Whatever gets us to the next place, we all love it. That's what I'm for, right? And if vibe coding is one of the ways we get there, or it invites more people in to build software, cool.
Starting point is 01:07:59 I think vibe coding invites a lot of people in to go build more software. I think that it's not your typical pro dev software developer, but I think that what vibe coding has kind of started to create is more of this pattern of what I would call natural language driven development, which starts with maybe a vibe initial prompt, but then it evolves into a full spec. And the spec is still written in natural language. It's not written in C# or TypeScript or JavaScript or anything like that.
Starting point is 01:08:34 It's written in a natural language, English in my case, but could be written in whatever. But then you take that spec and then you use that spec as the prompt. Right. Right. And that allows you to then iterate to this level that's like you can get to a much more sophisticated first implementation of the code that you're aiming to implement and then you continue to iterate and you may even modify the spec as opposed to modifying the code. Right. And so I think
Starting point is 01:09:03 that's starting to change things. And so from there, then it's like, okay, well, now the PM can contribute even a spec for a feature, or they can contribute a spec for the initial product. The act of testing could also be, in some sense, a large prompt. The designer could contribute a design system into the code base.
Starting point is 01:09:26 And so I think what we start to see is that over time, the code base is not just the code that builds everything. It's all of the prompts for all of the systems and all of the different phases that you go through of the software development life cycle. I think that's really insightful. And one of the things that I've noticed as an experienced developer trying to adapt and adopt the tools is that software isn't built in like a chat scenario.
Starting point is 01:09:55 Like you don't chat your way to a software system because there's just like so much chatting that goes up. You know, you design specifications and yeah, you may have conversations that lead to that design decision that you make. Once you make that, you don't want that to be like one moment in a conversation that was way up here. You got to tell your coding agent to scroll back and remember what I said back then. Like you want to actually have tangible outputs.
Starting point is 01:10:19 I expect that gets created through, whether it's a vibe session or just a pair programming session. Now I have this written document that evolves and it'll be so cool to be able to take that spec and be like, all right, here's a different model. You know, start fresh. We don't need to use the code. We have the spec.
Starting point is 01:10:37 And like you write the same, you can take the same spec, six different models, write the program, you know? Maybe take the best one of each or whatever it is, have something that you could like start from. So you're actually building out an architecture. Yeah, but I think- Is that formalizable? Okay, go ahead.
Starting point is 01:10:53 Yeah, it is. I mean, we now are starting to do more spec driven development. We have .prd files that, you know, would be the description of the spec. And those kinds of things are starting to get checked into the code bases. But I think also there was a nugget in what you were talking
Starting point is 01:11:09 about in terms of like design decisions that are made throughout the process. Sometimes it's not just the spec that you actually use. Why? Well, it's the conversation. Like if you think about you were having a conversation with your designer or another engineer on the team in terms of how something should work, maybe over time, the history
Starting point is 01:11:30 of that conversation should actually be something that's persisted into the code base in some senses. Yeah. And hopefully in some summarized fashion. Yeah. You know, so it's actually grokkable. Yeah, exactly. I know there's some people that keep actual, I can't remember what it's called, a why document,
Starting point is 01:11:45 but it's basically around their decisions. Not the decision we made, but why we made the decision. And so you can go back to that and be like, no, here's why there's this fence here. You see a fence, you're like, why is the fence there? It doesn't need to be there. It's like, well, there was a reason, and none of us know why it's lost to history.
Starting point is 01:12:01 But now you can go back to that chat or that context and at least link it somehow, whether it's summarized or linkable or whatever it is, to be like, here's our spec and then this part of the spec here, why is it like that? Well, here's the why. Yeah. Maybe in that world, developers don't have to write documentation. Now you're selling me. Oh my gosh. Or they write one. They write one, the initial spec. That's right. Yeah, I was thinking about that too. All right.
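To make that workflow concrete, here's a hedged sketch of what spec-driven development plus a "why" log might look like in a repo. The .prd path, the decisions file, and the generateCode stub are all made up for illustration; the real mechanics would be whatever agent or model tooling your team already uses.

```typescript
// Hypothetical spec-as-prompt workflow. The file names and the generateCode
// stub are illustrative only -- swap in whatever agent or model call you use.
import { readFile, appendFile } from "node:fs/promises";

// Stub standing in for a real model or agent call.
async function generateCode(req: { model: string; system: string; prompt: string }): Promise<string> {
  return `// code generated by ${req.model} from a ${req.prompt.length}-character spec`;
}

// The spec is the prompt: regenerate, or re-target a different model,
// without starting from the previous implementation.
async function regenerateFromSpec(specPath: string, model: string): Promise<string> {
  const spec = await readFile(specPath, "utf8"); // e.g. specs/checkout.prd
  return generateCode({
    model,
    system: "Implement the following product spec. Prefer TypeScript.",
    prompt: spec,
  });
}

// Keep the "why" next to the spec, so the reason for the fence isn't lost to history.
async function recordDecision(specPath: string, decision: string, why: string): Promise<void> {
  const entry = `\n## Decision (${new Date().toISOString().slice(0, 10)})\n${decision}\n\nWhy: ${why}\n`;
  await appendFile(specPath.replace(/\.prd$/, ".decisions.md"), entry);
}

// Usage sketch: same spec, two different models, then log the reasoning.
const models = ["model-a", "model-b"]; // hypothetical model IDs
for (const model of models) {
  const code = await regenerateFromSpec("specs/checkout.prd", model);
  console.log(`${model}: ${code.length} characters generated`);
}
await recordDecision(
  "specs/checkout.prd",
  "Keep the retry guard around payment capture.",
  "Removing it caused duplicate charges in an earlier incident."
);
```

Nothing here is specific to any one tool; the point is that the natural-language spec and the decisions behind it live in version control right next to the code they produced.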
Starting point is 01:12:26 Awesome. This has been a blast. Yeah. Thanks so much for having me guys. Thanks for sitting down with us, Amanda. Very fun. Thank you. Okay.
Starting point is 01:12:38 On location at Microsoft Build is always a treat. Big thank you to our friends at Microsoft for making sure we're there. Richard, you're awesome, and the rest of the team, man, so cool. Always good to be in Seattle. Always good to have that awesome Pacific Northwest weather. And to our friends over at Five Iron,
Starting point is 01:12:58 it was fun swinging with you. Okay, so what's next? Big things happening around AI. It's always moving. It's not hype. It is real. This is not science fiction. This is science reality. And it's here to stay. And the adventure has just begun. A big thank you to our friends over at Retool. Retool Agents looks so cool.
Starting point is 01:13:17 I'm using it in a couple of different scenarios and I just can't believe it was possible. Seriously, I just can't even believe it was possible. Our friends over at Heroku are launching the next gen of Heroku. Big things coming. If you love Heroku, you're gonna love what's next. And to our friends and our partners at Fly, those robots, those humans, everyone loves
Starting point is 01:13:37 the Fly.io platform. It is the best. That is the home of changelog.com. Learn more at fly.io. And the Beat Freak in Residence, Breakmaster Cylinder. He'll be mixing some beats live. We are going live, by the way. If you didn't know this, changelog.com slash live. We're in Denver launching Pipely, launching our next gen CDN, what's happening around our platform. Gerhard, me, Jared, BMC, Jason, others. I mean, it's not to be missed.
Starting point is 01:14:11 You are invited. Learn more at changelog.com slash live. We want you there. Check it out. Tell a friend. Okay, that's it. This show's done. We'll see you on Friday. Thanks for watching!
