No Priors: Artificial Intelligence | Technology | Startups - Copilot, Agent Mode, and the New World of Dev Tools with GitHub’s CEO Thomas Dohmke

Episode Date: March 13, 2025

This week on No Priors, Sarah and Elad talk with GitHub CEO Thomas Dohmke about the rise of AI-powered software development and the success of Copilot. They discuss how Copilot is reshaping the developer workflow, GitHub’s new Agent Mode, and competition in the developer tooling market. They also explore how AI-driven coding impacts software pricing, the future of open source vs. proprietary APIs, and what Copilot’s success means for Microsoft. Plus, Thomas shares insights from his journey growing up in East Berlin and navigating rapidly changing worlds.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ThomasDohmke

Show Notes:
0:00 Introduction
0:37 GitHub Copilot’s capabilities
4:12 Will agents replace developers?
6:04 Copilot’s development cycle
8:34 Winning the developer market
10:40 Agent mode
13:25 Where GitHub is headed
16:45 Building for the new challenges of AI
21:50 Dev tools market formation
29:56 Copilot’s broader impact
32:17 How AI changes software pricing
39:16 Open source vs. proprietary APIs
48:01 Growing up in East Berlin

Transcript
Starting point is 00:00:00 Hi, listeners, and welcome back to No Priors. Today, we're joined by Thomas Dohmke, the CEO of GitHub, a platform used by over 150 million developers worldwide to collaborate and build software. As CEO, Thomas has overseen the development of tools like GitHub Copilot. Before becoming CEO, he helped shape GitHub's product strategy and powered its global expansion, and previously worked at Microsoft. In this episode, we'll talk about the future of software development, the role of AI in coding, open source, and product plans for Copilot. Thomas, welcome to No Priors. Maybe we can start with the meat of it.
Starting point is 00:00:37 What is happening with Copilot and the new releases at GitHub recently? You're heading straight into it. We're really excited about making Copilot more agentic. A few days ago, we announced Agent Mode in Copilot in VS Code. So instead of just chatting with Copilot, getting responses, and then copying and pasting the code into the editor or using auto-completion, the original Copilot feature, you can now work with an agent and it helps you, you know, to implement a feature. And when it needs to install, like, a package, it shows you the command line terminal command
Starting point is 00:01:10 and you can just say, okay, run this. You're still in charge, right? So that's the crucial part of these agents that we have available today. That as the human developer, you still need to be in the loop. But we also showed, you know, a teaser of what's about to come in 2025. We call this Project Padawan, you know, because it's like a Jedi and a Padawan. You've got to have patience and you've got to, you know, learn how to use the force. But we think, you know, in 2025 we get into a place where you can assign a GitHub issue,
Starting point is 00:01:39 a well-defined GitHub issue, to Copilot. And then it starts creating a draft pull request and it outlines the plan. And then it works through its plan. And, similar to how you observe a coworker, you can see how it commits changes into the pull request. And you can review this and provide feedback to Copilot. And so Copilot basically graduates from a pair programmer to a peer programmer that becomes a member of your team. The obstacles to that right now, are they some new model advancements? Is it just building out some other sort of technology? Is it just the UI? Like, what is keeping that
Starting point is 00:02:14 from happening right now? Yeah, I think the first thing is the model, the full o3 model that's not available yet, but that OpenAI showed as part of shipmas right before the holidays. We're going to see, you know, improved reasoning. And I think as the models get better in reasoning, we're going to get closer to 100% on SWE-bench, which is that benchmark out of 12 open-source Python repos where a team at Princeton identified 2,200 or so
Starting point is 00:02:42 issue and pull-request pairs that effectively all the models and agents are measured against. So that's number one, you know, the model and the agent combination. I think the second piece is just figuring out what's the right user interface flow. So if you think about the workflow of a developer, right, you have an issue that somebody else filed for you, you know, a user or product manager, or something that you filed yourself. Now, how do you know whether you should assign Copilot, the agent, to it, or whether you need to refine the issue to be more specific, right? It's crucial that the agent is predictable, that you know that this is a task the agent can solve. If not, then you need to steer it. So steerability is the next thing,
Starting point is 00:03:21 and to either, you know, extend the definition, or the agent needs to come back to you and ask you additional questions. And then at the end of the process, you want to verify the outcome. And so in our demo, that's where we're thinking the right flow here is actually that the agent works in a pull request, like similar to a human developer, with lots of commits, and then you can roll back those commits or check them out in your VS Code. What we saw with some of the agents that are available is, do I, as a developer, actually tolerate the agent?
Starting point is 00:03:49 And like, is it actually saving my time or is it wasting my time? And the more often you see it wasting your time and just burning compute cycles, the less likely you're going to use it again. And so if it's predictable, steerable, you know, verifiable and tolerable, if you get to a certain level on all four criteria, I think we're going to see a wide adoption of agents. How far away do you think these agents are from being sort of the median programmer equivalent? And then how much longer do you think it takes to get to sort of superhuman? You know, I thought about this this morning, right? Like, regardless of what agent, whether you're thinking of a travel agent or a coding agent or maybe it's an agent that designs your house, the fundamental challenge is actually the same
Starting point is 00:04:34 as you have as a human developer, right? Like you have this big idea in your head and you can sketch it on a whiteboard, but then you want to start coding and you have to take this big idea and break it down into small chunks of work. I think that's the part where we're far away from agents actually being good enough to take a very rough idea and break it down to small pieces without you as developer or as architect, or even when planning your travel, constantly getting questions back on what decisions you want to make, you know, which database, which cloud.
Starting point is 00:05:06 Like imagine you give the agent a task saying, you know, build GitHub or build a mobile app of something, it will just be not specific enough, right? So that's the systems thinking where I think the median developer will not be replaced by an agent. And the flip side of that is a lot of what developers do is just picking up issues and fixing bugs and finding where to fix the bugs, adding a feature that comes from a customer. And then you have to navigate the code base and figure out what files you have to modify. I think there we are going to see dramatic progress over the year. We actually, you know, when we recorded the demo for the Padawan project,
Starting point is 00:05:39 we actually had one of our product managers use an issue and the agent to create the pull request themselves, right? And so a PM that usually doesn't code and doesn't write code in the code base was able to use the agent to create a real pull request that was then reviewed by a developer and merged into the code base. So in some ways, we're already there. In other ways, we need to get to the point where you trust it enough that you're using it day in, day out. I'm sure you guys were doing a bunch of dogfooding before releasing Agent Mode and Padawan as well. Maybe if we just sort of zoom out from that, from the eval phase, like, can you describe what the overall, like, development cycle is for Copilot today, like how you do planning and make
Starting point is 00:06:19 decisions about what to try and how you improve it? The industry calls this now AI engineering, which is, you know, we've extended the full stack of backend and frontend development with AI development. And so how do we use a new version of a model or a new model, as we have now the model picker in Copilot? We are constantly dealing with multiple models from multiple vendors; how do we integrate that into our stack? We have, you know, an applied science team that runs evaluations. We have a team that builds out these benchmarks that the applied science team uses to compare models with each other, but also that the teams that build, you know, features like code review agents or the SWE agents or agent mode use to validate their work as part of their
Starting point is 00:07:04 test suite. So it's no longer just the data scientists and the engineers; those roles have, you know, more and more overlap and they're collaborating day in and day out. We do a lot of experimentation with A/B testing, where we flight new versions or newly fine-tuned versions of a model after the offline test, you know, in an online test, first with GitHub and Microsoft employees and then with sets of the population. And then overall, you know, obviously we have a roadmap of features,
Starting point is 00:07:33 you know, that we want to build, and a long backlog, not just for Copilot, but all up for GitHub, right? Like GitHub is, you know, turning 18 this year. I think it's 18 years since the founders, in late 2007, started working on it, and then it launched in early 2008. And Microsoft turns 50, actually, April 4th. And so we have a long backlog of customer feedback, and we're using Copilot to build those features, you know, in agent mode now, you know, to accelerate our feature delivery.
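For readers who want a concrete picture of the offline evaluation step described above (an applied-science team scoring candidate models against a benchmark before any online A/B flight), here is a minimal, hypothetical sketch in Python. The file name, the JSON fields, and the generate_with stub are assumptions for illustration only; this is not GitHub's actual harness.

```python
# Hypothetical sketch of an offline model evaluation, loosely in the spirit of
# the benchmark-driven comparisons described above. Not GitHub's real tooling:
# the JSONL file, its fields, and generate_with() are placeholders.
import json
from typing import Callable

def load_cases(path: str) -> list[dict]:
    """Load SWE-bench-style cases: one JSON object per line, each with a
    'problem_statement' and a 'reference_patch' field (assumed schema)."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def resolves_issue(case: dict, candidate_patch: str) -> bool:
    """Toy check. A real harness would apply the patch to the repo at a pinned
    commit and run that case's tests; here we just compare against a reference."""
    return candidate_patch.strip() == case["reference_patch"].strip()

def pass_rate(generate_patch: Callable[[str], str], cases: list[dict]) -> float:
    """Fraction of cases the model resolves on its first attempt (pass@1)."""
    solved = sum(resolves_issue(c, generate_patch(c["problem_statement"])) for c in cases)
    return solved / len(cases) if cases else 0.0

def generate_with(model: str, prompt: str) -> str:
    """Stub: swap in a real model call; it should return a unified diff."""
    return ""

if __name__ == "__main__":
    cases = load_cases("swebench_style_cases.jsonl")  # hypothetical local file
    for model in ("candidate-model-a", "candidate-model-b"):
        score = pass_rate(lambda prompt: generate_with(model, prompt), cases)
        print(f"{model}: {score:.1%} of issues resolved offline")
```

Only models that clear a bar like this would then move on to the online stage Thomas mentions, first flighted with internal users and then with subsets of the wider population.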
Starting point is 00:08:03 But at the same time, you know, the market is moving so fast. And whether we're meeting with OpenAI or with Anthropic or with Google, we learn about new model versions and then our roadmap changes from one day to another. You know, I'm sure you guys are seeing that as well. Like, the market is moving so fast. We're literally sitting on an exponential curve of innovation that is hard to keep up with. And you can't really plan more than a month or two ahead of time. How do you think about competition, being on that exponential curve?
Starting point is 00:08:31 I think, like, it is wild to think that, you know, SWE agents, as you described them, like, didn't exist as an idea a year ago; we now have a market full of folks experimenting with these products. Like, GitHub is obviously a very dominant force overall, as is Copilot, but how do you think about, you know, winning the developer over and what they care about in that, you know, changing and competitive market? The way we think about winning is that, you know, we care deeply about developers. And that's always been, you know, the heart of GitHub, that we put developers first and that we are developers that are building products for developers. You know, we have a saying at GitHub that we're building GitHub with GitHub, on GitHub, using GitHub, right?
Starting point is 00:09:16 And so everything that we do in the company, including, you know, our legal terms and our HR policies and product management, sales, sales enablement, all these functions are in GitHub issues and GitHub discussions and GitHub repos. I think that's number one, that we deeply care about our own product and we're using it for everything day in and day out. You know, the first thing I do in the morning is open the GitHub app on my mobile phone, and then Slack, as a lot of our, you know, operations and company chat runs through Slack. Number two is, you know, you mentioned competition. I mean, I've obviously never seen anything like that in the developer space. It's the most exciting time, I think, for developer
Starting point is 00:09:50 tools. And, you know, I've been a developer for over 30 years. It's amazing to see the innovation, you know, the news that is coming out every day. And I think that energy, you know, that is in the market is innovation-driven, both on the open source side and on the closed source side, right? Let's not forget, you know, that it's not one-sided. As much as there's innovation on proprietary models and software, there is an equal amount of innovation in open source and on GitHub. And that energy obviously gravitates into us. I'm a big Formula One fan. It's good when there's competition, because the races are so much more fun to watch if there's multiple teams that can win the championship. And I think we feel the same about the competition. It gives us
Starting point is 00:10:33 motivation every single day when we wake up to do better, to move faster, and to ultimately win with the best product in the market. You have such, like, rich data about how people are actually using Copilot. What is surprising you, even from the last week or so since Agent Mode was released? The thing that always surprised us from the early days was how much code Copilot is writing. You know, you've had some of the folks from Microsoft and GitHub on your podcast in the past, and in the early days, you know, soon after we launched the Copilot preview, it already wrote like 25% of the code. And I remember that meeting where we looked at this in the product review, I said, that must
Starting point is 00:11:11 be a mistake in the telemetry, go back and validate that. It can't be true that it's writing 25% of the code, because it was just auto-completion. And, you know, as cool as that was, at the same time, you know, it still made a lot of mistakes in the early days. But it quickly dawned on us that, A, the number is true, and B, that's just the learned behavior of software developers, right? Like you're typing something, you're always reaching the point
Starting point is 00:11:35 where you need to look something up, so you go to your browser and you find code on Stack Overflow, on Reddit or blogs or on GitHub, and then you copy and paste that and you're modifying it anyway afterwards, right? Like, the inner loop is always this kind of like, you write something, you try it out
Starting point is 00:11:50 with the compiler and the debugger and then you keep modifying until you make it work. And that number, you know, then quickly rose to around 50%, depending on the programming language. If you look now, you know, with these agents, it's hard to measure that, because, you know, you can literally go into agent mode and say, I want to, you know, build a snake game in Python, and it writes all the code for you, right? Like it writes multiple files, so the denominator becomes zero, right? Like it's like an infinite percentage, because the only thing you wrote was a prompt, and the 15-minute demo from two years ago is a one-minute demo now.
Starting point is 00:12:28 And I think that's, you know, it's still surprising in many ways that we're already so far ahead on that curve. And then the opposite is also true, right? You can get it into a place where, you know, it just keeps rewriting the same file or deletes the whole file because it gets stuck somehow in the logic. And so it grounds us also in the reality that we're not close to an agent just autonomously plowing through all my GitHub issues and then fixing all my backlog for me, where the only thing I'm really doing is validating and becoming the code-reviewing human for the software development agent, right? So we are in this, you know, we're swinging between the excitement of how much it can already do and the reality where it gets stuck in very simple scenarios, where it's like you're trying to kind of like figure out the prompt of telling it, just do the one
Starting point is 00:13:18 thing, and then you just go into the file and change the whatever, the background color, yourself. That makes sense. Outside of a lot of the agentic efforts you all are doing, and obviously, I think that's amongst the most interesting stuff that's happening right now, what are other big areas you want to evolve over the coming few quarters? Like, are there other big thrusts, or is it all kind of all in on AI and that should be the focus of the company? Oh, so far we only talked about, you know, the generic SWE agent, where you can assign an issue and it generates the pull request. But if you actually look at the developer life, the day to day, you know, in most companies, that's maybe two, three hours
Starting point is 00:13:56 of your day that you're actually writing code, and then you're spending an equal amount of time reviewing code of your coworkers. And while we don't believe that goes away, from a pure security and trust perspective you always want to have, you know, another human in the loop before you merge code into production, at the same time
Starting point is 00:14:12 we believe code review agents, and code review in general, is a big topic where AI can help you, especially when you work, you know, with a distributed team in different time zones, where you don't want to wait for, you know, the folks on the West Coast to wake up to get an initial loop of feedback.
Starting point is 00:14:28 So I think code review is a big topic for us. And again, you know, the AI part is one piece of that, but the user interface is equally important. Like, ideally you get feedback and then you can work with the code review agent on that feedback in a loop, because, you know, you won't always get exactly the right feedback where you can just click accept, accept, accept. You have to have a user interface, you know, a cloud environment, where you can just open this. If you always have to, you know, clone the repo on your local machine
Starting point is 00:14:54 and then install the dependencies, switch to a different branch, you're still having way too much boilerplate work, right? So moving to a cloud environment where you can just, you know, try out the changes that came from code review and can modify them to make them work, and have that, you know, fast outer loop. In that same realm are security vulnerabilities, which is, A, you know, you want
Starting point is 00:15:15 your code scanning to not only find vulnerabilities, but also fix them. You know, an even simpler version of that is linter errors, you know, like code formatting and those kinds of things; hopefully all of that goes away and the AI just fixes it, instead of you going through 100 linter warnings telling you where to put the spaces in the parentheses. But also, if you look at any decent-size software project, it has outdated dependencies, it has lots of, you know, known software vulnerabilities, hopefully not high-risk and a lot of them low-risk, or where somebody
Starting point is 00:15:44 decided that's not actually crucial to fix right now because the code is not reachable or, you know, we have other priorities. Having AI burn down that security backlog will make, you know, both the open source ecosystem and a lot of commercial software projects so much better, because it brings down that, you know, effort that every engineering manager swings back and forth on between, you know, the tech debt, the legacy code, you know, the security, accessibility, European regulation, whatever, right, and the innovation backlog. And there isn't really, like, a balance between the two; it's just, like, what is the most urgent issue, the biggest fire drill?
Starting point is 00:16:21 Is it your sales team telling you, if we don't get that one feature, we can't sell the product, or is it the security team telling you you've got to fix that one issue, otherwise we're going to flag you up the management chain? Right. And so that, I think, is the AI side of things. But similarly, GitHub as a platform needs to evolve to support, or have, all the primitives for these agents and the AI to work in tandem with the human. Do you think there are problems that people are not addressing yet that emerge from this transition in how software development is done, right? Like, so, for example, you know, you feel like we're somewhere between crossing the tipping
Starting point is 00:16:57 point of the majority of code being generated this year to maybe, like, all of the code in some cases or some tasks. How does that change, like, testing or, you know, the way we should look at technical debt or any of that? To me, I don't think all of the code is written by AI. I think the way this will work is that we have two layers. We have the machine language layer, you know, which is Python or Ruby or Rust, right? Those are effectively abstractions of the chipset, the machine instruction set.
Starting point is 00:17:27 And that's the last layer that's deterministic, right? Like a programming language inherently does exactly what I want it to do. And then human language is inherently non-deterministic, right? The three of us can say the same sentence and mean a different thing. And so while we will use human language to describe a lot of the features and behaviors that we're going to build, we will still have the programming language layer below that, where we are going back and forth as engineers to figure out, is the code that was written by AI actually the correct one?
Starting point is 00:17:57 Is it the one that, you know, aligns with my cost profile, if you will, as an example, right? Like at the end of the day, we're still running businesses that have to have positive profit margins. I think we're going to, as engineers, have both of these layers. And we're heading into a world of more human language and less programming language. But at the same time, you know, we are in a world where lots of financial services institutions still run COBOL code on mainframes, and we are very far away from just taking, you know, code that's 30, 40 years old and just having an agent that transforms that magically into a cloud application, right?
Starting point is 00:18:32 Like, I think that's coming, but it's like self-driving cars are coming as well, but we don't know when that cutover point actually happens where you can have a car without a steering wheel and it drives you everywhere, you know, within the country you live in, right? Like, it works for Waymo in San Francisco and it doesn't work for Waymo all the way down from SFO to San Jose yet, right? And so the scope will increase, but we are far away from, I think, solving all the tech debt and all the legacy code that exists. And so we are still, I think, for like a decade or so at least, going to have software
Starting point is 00:19:08 developers that work in lots of old school, you know, PHP code and COBOL code and all that stuff. While at the extreme other end of the spectrum, with web development and AI, you're going to be able to, and you're already there. Like, you know, just look at a 10-year-old, give them, you know, a tool like Copilot or, you know, Replit, Bolt, you name it, and have them type a couple of prompts and have them explore how that works and how they can, similar to Stable Diffusion or Midjourney, render software themselves and iterate on that. You yourself lead, you know, a large team of software engineers. As you said, you know, as we have more human language and instruction versus machine language, does it change what you look for or what you want to, like, develop in your own team? What we are looking at right now is, I think, this: how do you actually describe a problem specifically enough that an agent can pick it up, right?
Starting point is 00:19:59 Like basically the planning and tracking side of software development, the issue, right? That's often the biggest challenge that you have as soon as you have a decent team size. A 10-person startup has no problem, and most 10-person startups don't have a product manager. The founder is the product manager and the rest is just building the stuff. And if you have a problem to solve, you have very short communication paths. If you have a thousand engineers, their biggest problem is what do you want to build? How do you build it? What did you actually mean when you wrote up this thing? And if you look into that space, there isn't much AI helping you yet. We have been in the early phases ourselves with that with Copilot Workspace, where we have like a
Starting point is 00:20:38 spec and a brainstorming agent that basically looks at what you wrote in a GitHub issue, compares that with the code base, and describes the before and after to you in human language. And then you can, similar to a Notion doc, just modify that and basically add stuff to the specification. So I think that's going to be a whole set of agentic behavior that we're going to bring into the product management space. Similar for designers, right? Like today a lot of designs are hand-drawn in Figma. I think tomorrow you're going to, as a designer, type effectively the same specification as a product manager, and you have an AI tool render the code for the wireframes and then apply, you know, grounding out of your design system to make it look like your product, right?
Starting point is 00:21:23 And so those disciplines get closer to each other. And a product manager will be able to, if they're good at writing a specification, create the whole change set. And the designer will be able to take over part of the product management role, and the engineer gets closer to these other roles as well, you know, if they're good at describing the feature, they can kind of take over that part as well. I think that's where a lot of innovation is going to happen, in rethinking how the, you know, traditional disciplines in the software engineering team are evolving in the coming years as we have
Starting point is 00:21:52 more and more of these agents available and they're actually good at what they do. As you think about these different agents and these different use cases, do you think it's going to be the same company or product that provides all three? Do you think it's going to be one interface? Is it going to be a different interface? I'm sort of curious how you think about the actual flow in terms of very different users in some sense, that have some overlapping, either responsibilities or goals. And what are the set of tools that they interact with? And is it a singular tool?
Starting point is 00:22:21 Is it many? Is it one company? Is it many? Where does it launch out of? Like, how do you think about all that stuff? One of our strongest beliefs at GitHub is developer choice. And, you know, imagine GitHub as a platform where you had only JavaScript libraries available, or only, you know, React available to you, and we would tell you that's the only open
Starting point is 00:22:37 And we would tell you that's the only open. source library you need to build an application, right? Like, there would be a set of users using React, using GitHub because they love React, and the rest would go somewhere else because some other platform would offer them all these other open source components. Right. In AI, I think we're going to see the same thing. We're going to see a stack or universe of companies that offer different parts of the software
Starting point is 00:23:02 development lifecycle. And developers pick the one that, you know, they like the most, that they have experience with, you know, that they're convinced is the future. You know, a lot of that is part of a belief system. You know, programming languages in many ways are very similar, and then if you look at the discussion between developers, you get the feeling they're very different from each other, right? Like, at the end of the day, they're all compiling down to an instruction set that runs on your Apple M4 chip or your Intel CPU or AMD or whatever, right? So I think we are going to have a stack of different tools, and there's going to be companies that offer, you know, all the tools,
Starting point is 00:23:42 well, not all of them, because you're never going to have all of the developer tools out of one hand anyway. Like, think about GitHub. We are a big platform, but then you still have an editor and an operating system and a container solution and a cloud that doesn't come from GitHub, right? Like, you know, HashiCorp Terraform or Vault as an example, or Vercel and Next.js as another example, right? Like, go into any random company in the Bay Area, and they're all going to have
Starting point is 00:24:06 a different stack of tools that they have combined because they believe that's the best stack for them at this point. So I think in this AI world, we're going to see the same thing. You're going to have a choice of different agents. We're already there where you have a choice of different models, and some believe the Claude model is better, others believe OpenAI's model is better. The reality is somewhere in the middle, and different scenarios are better with different models. I think the same will be true in this agentic future that we're heading into. Is that true given the generalizability
Starting point is 00:24:42 got stuck with one of the ones you mentioned, up to a point, you'd still be extremely happy given the relative capabilities we had four or five years ago. In other words, it's a little bit of like, we have so many great options and some things are better than others. But fundamentally, any one of these things would be spectacular by any sort of baseline metric. It depends on what end state we're talking about, right? Like if the singularity is coming, then none of that matters. Five years from now, five years. You know, we started copied almost five years ago, June 2020. And that was what, GPT-3 at that point? CP3 was really the early experiments, and then we got this model that then eventually became codex, which was this
Starting point is 00:25:20 code-specific version of the model. And today that, you know, no longer really exists, right? Like today, everybody sits on top of one of these more powerful base models. Yeah, yeah, and that's kind of my point, is to some extent, the generalizability started to take over. And so I'm just a little bit curious how you think about generalizability versus specialization on a five-year time horizon for agents. I can see that happening at the model layer, but it's again like predicting a little bit of, you know, when do we truly have self-driving cars, and, you know, I had a Tesla for 10 years with self-driving and Autopilot in one form or another and it still cannot make the left turn into my neighborhood. I can see that future happening,
Starting point is 00:26:03 but I don't know when that is and when the models are basically just all, you know, about equal. But I think for software developers, the lowest level only matters until there's differentiation at the higher layer of the stack, right? Like, I think programming languages or open source libraries are great examples of that, because, you know, if you zoom out enough, they're all the same, right? Like at the end of the day, you know, whether you're building an app with, you know, Swift or Kotlin or React Native, what does it matter? And like, that's just like the intricacies of software development and the belief system that we have. And so I think the differentiation is going to come from, you know, where the developer gets the best experience doing the day-to-day, right? Like, where can I, you know, start my morning, pick up, you know, something I want to work on, explore my creativity, and get the job done with the least amount of frustration and the highest amount of ROI in terms of what can I ship? And like, software development, you know, over the last 30 years, or actually the last 50 years, if you go back, you know, all the way to the 1970s when, you know, microcomputers came and all of a sudden you no longer had to share a mainframe with others, was always about how can I take all my grand ideas that are way bigger than what I can actually achieve as an individual, how can I, you know, get that done faster? I don't think we are at the top of that exponential curve. I think there's still a lot to come. The other question you could ask is,
Starting point is 00:27:32 when does the CEO of GitHub get to the point where my backlog is empty? And I just don't believe that that point is ever coming. Yeah, there's a super related, interesting question to what you're saying, which is, for how long are humans making decisions about which agents to use? Because if you look at it, there are certain roles, a lot of the ones that you mentioned, developers, designers, etc., that have traditionally tended to be a little bit trend-based. You know, it's almost memetic what certain developers use sometimes.
Starting point is 00:27:58 And obviously there's, like, the dramatically superior products, and there's sort of clear choices around certain tooling. And sometimes it just feels like it's kind of cool, and so people are using it. Same with programming languages, right? So it's almost an interesting question, when the human component of decision-making goes out the window, are the decisions that are made radically different, because you're getting rid of trendiness? You know, you're not going to use Go, you're just going to use Python or whatever.
Starting point is 00:28:15 because you're getting rid of trendiness. You know, you're not going to use Go. You're just going to use Python or whatever. If I look at, you know, my team, how often as a CEO, do I have to check in with them to see what their building is actually what I thought, I want them to build when it gave them a task, right? So it's number one that the human that takes over a task and feature an epic, whatever,
Starting point is 00:28:40 still has a loop with other team members to kind of like ensure that what they're building is actually the right thing. I don't see a world where we can be specific enough when we give the agent work that it can just do it all by itself, unless, you know, the unit size is very, very small. The other side of that question, I think, is like, when do we get to the point where all software is personal software? And in fact, I no longer install an app from the app store, I just use a natural language interface to build all the apps myself. And so I have completely personal software on my personal computer, my smartphone, instead of off-the-shelf software that is the same for all of us, where the user interface effectively is completely personalized. And, you know, we have science fiction movies or action movies like Iron Man, right? Like, where Jarvis is completely personalized to Tony Stark.
Starting point is 00:29:33 And so I think that future, that will happen in the next five years for sure. The question is just how good Jarvis is going to be, and can I just tell it, spring break is coming up, same hotel, same family, you know, and it books me the trip. And the only question I have to confirm is, do I do the $5,000 trip. One other thing that's been striking about GitHub and Copilot and everything else is the actual business success of all of it, right?
Starting point is 00:29:59 And I think it's been quite striking on the earnings calls more recently that have been done. What can you share in terms of business and financial metrics and the impact that Copilot and GitHub more generally are having for Microsoft? Not a lot beyond what's in the earnings call. Sure. I'm trying to remember, I think the last number we shared was a few quarters ago, 77,000 organizations using Copilot. And back then, the number of paid users was 1.8 million paid users.
Starting point is 00:30:28 We haven't shared an updated number since, so I can't share the latest number. But I think what's really interesting from these earnings calls is, if you look at the number of logos that Satya has called out, it's across the whole spectrum of industries. It's not just, you know, cool startups, it's not just financial services institutions. It's really every industry that has adopted Copilot. I don't think there has been a developer tool that has been adopted with such velocity across the whole spectrum of software development, in any company size, in any industry. You know, if you think about it, $20 a month compared to the salary of an average software developer
Starting point is 00:31:10 in the United States, it's like, what, 0.1%, if at all. And then we're talking about, you know, 25, 28% productivity gains end-to-end, 55% or higher on the coding tasks. But as we said earlier, right, developers do more than just coding. That's an incredible ROI on the dollar spent, and I think that's what is driving this adoption curve. And then any company is now a software company, and they all have the same problem I described earlier.
Starting point is 00:31:39 They have long backlogs and way too much work. And every time, you know, one of the managers goes to their team and asks them, how long does it take to implement a feature, it becomes the Jim Kirk and Scotty joke of, you know, how long does it take to repair the warp drive, and you get an estimate that's outrageously long, and then it becomes a negotiation where the captain sets the deadline instead of the engineer actually estimating what's possible. And I think that's where, you know, a lot of the business success of Copilot is coming from. All the people writing software are frustrated with how long it takes,
Starting point is 00:32:10 not because they don't think the engineers are good, but just because of the complexity of building software. How much do you think this pricing changes, and I know it's just speculation at this point, when you're actually replacing people? And I know in a lot of industries, it could be legal, it could be accounting, it could be coding, people say, well, eventually this will shift to value-based pricing.
Starting point is 00:32:28 Because eventually, instead of just paying $20 a month to make a person more productive, you're actually replacing a person who costs $50,000 or $100,000 or $200,000 a year, or whatever it is depending on what their role is. Also, just in different disciplines. So I'm just sort of curious how you think about, is this eventually a rent-a-programmer
Starting point is 00:32:45 and it's priced like a programmer? Does it all get commoditized, and eventually something that would normally cost $100,000, $200,000, $300,000 a year costs $3,000 a year? Like, how do you think about where this market goes? I think it's going to be compute-based, or some unit that's, you know, a derivative of compute as a metric. So it's going to be cheap.
Starting point is 00:33:04 It's going to be cheap in the same way that your dishwasher in your kitchen is not priced as a derivative of what a person would cost you to do your dishes every single day. I think the buyer persona is not going to be willing to pay for a machine, you know, whether that's a dishwasher or an agent, a price that's the equivalent of a human developer. And I think that's actually the correct mindset, because I don't believe that the agent is actually replacing the developer. The creative part is still coming from the software developer, the systems thinking. Predicting the future always has the fun part that I'm coming back on the podcast in a year or two and you're telling me how wrong I was
Starting point is 00:33:44 about my predictions. But I think there's a lot of decisions that are made in software development that a human has to make, which database, you know, which cloud; a lot of that is a function of the business and how it operates. You know, which cloud you're using is not necessarily a question of how much the cloud costs. It's a strategic decision of, you know, the CTO or the engineering leadership team. And more and more we see, you know, companies using more than one cloud because they don't want to have a dependency on just one single supplier, in the same way that, you know, any random car manufacturer has multiple suppliers for airbags, because they don't want to be stuck with, you know, their factory line when airbags are not
Starting point is 00:34:22 deliverable from that one supplier. And so I think, you know, the agents' price points will certainly go up as these agents become more powerful. You know, we see that with OpenAI; the highest tier now costs $200 for Deep Research and the o1 Pro model. And clearly, people see the value in that. And I think, you know, two years ago, if we had predicted that, we wouldn't have believed it, that you're willing to pay $200 a month for a chat agent, because the flip side of that often is in software that people feel like a $5 subscription for a mobile app is a lot of money. And you can just see that when you look into the reviews of apps that move from a one-time
Starting point is 00:35:02 payment to a subscription model, how many people don't like that model, because they feel like software is something that you buy once, like a CD, and then you own it. Definitely we're going to see price increases that will be based on the value that you're getting out of it. Because, you know, the other side of that is that human developers are expensive because there's limited supply. Agents will have infinite supply that will only be limited by the amount of compute capacity, the GPUs available in data centers. Speaking of that unlock of supply, like, we've been talking about what the pricing of the code generation is. I think there's also a question of just, like, what happens to the value of software at all.
Starting point is 00:35:44 Like, everybody's been talking about Jevons paradox for a while. I don't want to ask about that. But maybe something more specific. You're from East Germany. You remember the Trabant car? I do. I had one. Oh, well, my parents had one.
Starting point is 00:35:55 Oh, okay. Right. So you can tell me what it was actually like. But good for you guys, because it was this, it was an okay car, but it was the default car that ended up having this, like, 10-year waiting list because of the supply constraint with the rest of the world. And then as soon as the wall came down, you know, the demand completely collapsed. Yeah.
Starting point is 00:36:12 Right, because you had access to the world of cars, and pricing at least did. I guess one question I'd have for you is, I'm generally such an optimist about, like, the demand for software being very elastic. But I think of that as volume and quality and variation. Are there types of software that you think collapse in value when AI takes away, like, some of the scarcity of engineering? You know, the Trabant, the waiting list was actually, I think, 17 years in the late 80s. Okay, 17, not 10. Yeah. That, by the way, still exists.
Starting point is 00:36:44 Today, it exists in supercars, right? Like, often you can buy a supercar, like, you know, the top-end Porsche 911, the GT3 RS or whatever, and then the resale price is higher than the new price because you can't just get one when you go to a dealer. Because at the dealer, you have to buy like 100 Porsches first before you get a slot for that exclusive top-of-the-line Porsche, or Ferrari, it's the same thing. And so the Trabant, actually, the one that my dad owned, he sold, I think, in '84, '85 to a neighbor at a higher price than we bought it for, because you could shortcut the 17-year wait to get a car. And often parents had a subscription, quote-unquote subscription, like they signed up their kids already for a car when the kids were still young, so you could actually get one by the time you reached, you know, adulthood and could get a driver's license. And so, you know, coming to your software question, right?
Starting point is 00:37:41 Like we're going to see it going both ways, right? Like, if you think about Copilot, Copilot, you know, costs businesses $20 per user per month. That's actually almost exactly the same price as you pay for GitHub Enterprise, which is $21 per user per month. And so storing all your repositories, managing all your issues, your whole software development life cycle was $21 per user per month, and many used to perceive that as a lot of money for DevOps. And then we came with Copilot auto-completion, and that was $20 a month, right? And so all of a sudden, that sub-feature of the software development lifecycle, auto-completion, cost $20.
Starting point is 00:38:21 And then it goes back to Elad's question, right? Like, if there's the value where you get the ROI and you get 25% productivity increases, yeah, you know, you're willing to pay more for something. You know, probably five years ago, if I had told you auto-completion is going to be that standalone feature, driven by AI, that costs more than the average selling price for all of GitHub, you would have said, well, that sounds unlikely, I think we're going to see deflation of software prices. And so I think it's a mix of both. Some things we won't pay for anymore. You know, nobody pays for the operating system anymore. And then at the same time, you pay way more
Starting point is 00:38:58 than ever for your Netflix subscription and for your Office subscription and all those kinds of things. I think both of these things will be true at the same time. And it's all about how much value you get for your business paying for that solution, whether you're doing it yourself or using something that you manage yourself or install on your own server. GitHub is foundational infrastructure for open source, so I'm sure you have, like, general opinions about what's happening in the open source ecosystem. Today, you can use Claude and OpenAI and Gemini in Copilot, but not necessarily open source models right now. Correct. So in Copilot, we have Claude, Gemini, and then OpenAI; OpenAI has different models that I was just going through in my head.
Starting point is 00:39:36 Wait, there's more than three models, but it's GPT-4o, o1, and the o3-mini model. In GitHub Models, which is our model catalog, we have open source or open weights models like Llama, as an example, and then all kinds of other models like Mistral, Cohere, Microsoft's Phi-4 model. And while the model catalog is, you know, a separate feature within GitHub, you can add models in Copilot, because Copilot has extensions, and you can actually reach from Copilot into the model catalog. And so if you want to just quickly run inference against Phi-4, you can do that by using the @models extension in Copilot.
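For readers who want a feel for what "quickly run inference" against a catalog model can look like, here is a rough sketch that assumes the GitHub Models catalog's OpenAI-style chat-completions endpoint and a GitHub personal access token. The endpoint URL and the model identifier shown are assumptions, so check the current GitHub Models documentation before relying on them.

```python
# Rough sketch of one-off inference against a catalog model (e.g., Phi-4).
# The endpoint URL and model id are assumptions; verify them in the GitHub
# Models docs. Auth uses a GitHub personal access token from the environment.
import os
import requests

ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"  # assumed
payload = {
    "model": "Phi-4",  # assumed catalog identifier
    "messages": [
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python one-liner that reverses a string."},
    ],
}
resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},  # a GitHub PAT
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```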
Starting point is 00:40:14 So that way, you have more models than the ones that are packaged into Copilot. I know that. What do you think is the relevance of open source versus the proprietary model APIs for developers in the future? The biggest thing, I think, is that open source is going to drive innovation. And we saw that, you know, with DeepSeek earlier this year, or actually, you know, a couple of weeks ago; it's not that long ago even. Long year, yeah.
Starting point is 00:40:38 It feels like already half a year has passed instead of just a month and a half. But I think open source is going to drive innovation. You know, we saw that with image models like Stable Diffusion. I know there's the Flux model from a startup actually not too far from my home base in Germany, in the Black Forest, in Freiburg. Black Forest Labs is actually the company behind Flux. And so we're going to see innovation, I think, you know, on open source models that drives the other vendors, and this back and forth between the open source ecosystem
Starting point is 00:41:07 and the proprietary, closed-source companies will, I think, accelerate the whole space. You know, DeepSeek is the most prominent example right now, where you can, you know, look into this: the paper is open, the models are open, some of them are, like, you know, fully open source under the MIT license, others are, like, open weights, and so you can look at the weights, and the code to run it is open source, but the weights themselves are under some proprietary license and governed by Chinese law and whatnot. And I think that is going to drive innovation
Starting point is 00:41:40 and it's going to open up that space and it democratizes access because if you just want to play with a model, you don't have to run inference against the commercial API. You can just, you know, try it out yourself on your local machine and play with this. And if you think about kids and students and research, that opens up a huge space, and that's ultimately what has always been part of our DNA at GitHub.
Starting point is 00:42:05 Was that a satisfying answer, Sarah? Yeah, yeah, I think the most satisfying answer is, like, somebody wins, right? But I think that's a very hard thing to predict right now. What has won? iPhone or Android, Windows or Linux or macOS, for that matter? I think we like to think about these binary battles in the tech industry, and the reality is that's not actually how it works, and certainly not in the developer space, right? Like, React hasn't won.
Starting point is 00:42:37 And there's always going to be the next thing, you know. Before React, there was jQuery or whatever, you know, library you preferred. I think there's going to be a next programming language after Python and TypeScript and Rust. And Rust in itself, you know, wasn't really a thing five years ago. And so there's going to be more languages that are probably closer to human language, and to be more specific
Starting point is 00:43:00 about the natural language layer and the programming language layer that converts down to the CPU or GPU. So I think there's no winning. There's always just the... You're playing the infinite game. It's like Minecraft. Software is like Minecraft. And there is no winning in Minecraft.
Starting point is 00:43:16 You can win battles, but they're isolated to a certain sub-challenge or whatever quest. But ultimately, we're building a bigger and bigger world of software, and there's always going to be the next big thing. That's a funny analogy. If I think about any individual developer, like, there's something people have been saying to me. Developers of a particular ilk, right, really strong technical people who are more experienced, not all of them, but, like, more experienced, grizzled systems developers, often people very attached to Rust. And they'll say, basically, like, they're worried about the next generation of developers building the taste and understanding of architectural choices and the tradeoffs and corner cases of how a particular implementation can fail given some shape of data, given their experiences of the actual implementation, right? And so they're worried. You know, obviously the right thing to do for
Starting point is 00:44:07 anybody who wants to win that next, you know, level of Minecraft in 2025 is, like, use AI aggressively, learn to use it. But, like, does that concern from this segment, and I'm sure you've heard it, does that concern, like, resonate with you at all? Like, can you foster the requisite depth of understanding of engineering at an abstract level when we're not writing the code, or is it, like, a silly concern? I wouldn't call it silly, because obviously, you know, there's some truth to that, right? It's easy, you know, to cheat at a programming exercise or Advent of Code and those kinds of things. As these AI models get better, these competitions of who's the best hacker or coder are going to have to move to a whole different level, where you assume that the developer
Starting point is 00:44:57 If you think, you know, the next generation of developers, you know, maybe not 2025, but 2035, like, look, you know, and you mentioned, you know, me growing up in East Germany and then the wall fell and they bought a Commodore 64, but they had no internet, and so I bought books and magazines, and that was it, right? Like, there was no forum I could go to and ask questions. I could go to, I went to Computer Club every Wednesday or so, until, Nobody there had anything to say anymore that I didn't know already, right? If you take that and compared to today, the kids of today and those that want to learn coding have an infinite amount of knowledge available to them.
Starting point is 00:45:36 And you know what? Also an infinite amount of patience because, you know, copilot doesn't run out of patients. Parents do. I am one. And so it's incredibly democratizing to have AI available if you want to learn coding. Your parents don't have to have any technical background. All you really need is an internet connection on your mobile phone and one of these co-pilots or chat GPDs or whatever you prefer.
Starting point is 00:46:02 And you can start asking coding questions and you can ask about Boolean logic and about systems thinking, and you can go infinitely deep on any of those questions and traverse to other topics as you like, right? And so I think, you know, we are going to see, you know, a new generation of humans that grow up with the technology, and for them it's just natural to leverage their personal assistant, their personal set of agents, what I recently called the orchestra of agents, where you're the conductor of that orchestra of agents. And they know how to do that, and so they can achieve in the same amount of time so much more than we could, you know, in the last 30 years, and I think
Starting point is 00:46:42 that's incredibly exciting, because, like, again, like, find me a developer that doesn't have this big idea of that computer game or software system or feature that they always wanted to build and doesn't have the time for. Like, my engineers talk much more about being overcommitted and burned out and not having enough time for all the things I'm asking for, and the customer is asking for, and the security team is asking for. And so I think that's just where we're heading, and how this is going to be super exciting. Both actually in open source as well, right; open source sustainability is another big topic
Starting point is 00:47:16 that we could probably spend another hour on, and in any kind of software that people want to build. I definitely agree with that, you know, excitement and optimism. I think about my three kids and, like, what they would be able to learn at what pace with, you know,
Starting point is 00:47:32 the AI resources that people will have. And I'm incredibly jealous. I'm like, I could be much better as an engineer, so much faster with, as you said, the infinite patience and understanding of today's models. By the way, I was very lucky, my parents are both engineers, right?
Starting point is 00:47:48 But, you know, it's a very human dynamic where I'd ask a question of my dad and he'd be like, it's logic, Sarah. I'm like, oh, no. Can I ask you, you know, maybe a more personal question to close? Like, East Berlin, you know, you have this unique experience of this really rapid technological change after reunification. Do you think that informs at all how you think about, like, the speed of the current AI transition and how, like, users and human beings will react to it?
Starting point is 00:48:16 I always wanted to believe that a lot of my life has been, you know, defined by that one moment of change in 1989. And, you know, I remember, you know, the night when the wall fell, or when it was announced that the wall would be opened. And it was a Thursday night, and then Friday was normal school. Saturday was still school as well, a half day in school. And I think I was one of four kids that showed up in my class, and then they sent us home. And we actually crossed over to West Berlin. And I think the thing that is important for that generation of kids that lived through that change is that they can no longer return to that childhood. You know, home is gone.
Starting point is 00:48:54 Like, you know, there isn't, like, that store on the corner that's the same as it was, like, 40 years ago. And the schools are gone. The system is gone. The traditions have all dissolved into, you know, that new world. And so it's a bit like when you're moving from one country to another, which I then, you know, did 10 years ago as well, moving when Microsoft bought my company. Once you have done that step in your life, you gain a whole new perspective on things. And I think that's how, from the reunification in 1990 and then, you know, through the steps of my life, including, you know, becoming the GitHub CEO through, you know, random decisions, or at the time they felt random, this is how I got here.
Starting point is 00:49:30 they felt random, this is how I got here. And this is how I look forward and I'm optimistic about the future while recognizing my past and taking some of those experiences when I talk with you guys. and reflect on what it was like in the 90s to program on a Commodore 64, before and after the Internet, right, before and after open source, before and after the cloud, before and after mobile. And now we have before and after AI. And there's no looking back.
Starting point is 00:49:58 The future will be that we have AI for almost everything we do in our lives if we want to. You know, you can still always throw your cell phone into the corner and enjoy a day without the Internet. This has been great, Thomas. Thanks so much for having the conversation. Thank you so much for having me. It was good to connect, sir. I appreciate the time. everything else.
Starting point is 00:50:15 Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
