Big Technology Podcast - The One-Person, Billion-Dollar Startup? — With Thomas Dohmke
Episode Date: July 24, 2024. Thomas Dohmke is the CEO of GitHub. He joins Big Technology Podcast to discuss the state of AI-assisted coding, and whether the rest of the economy can see benefits similar to software engineers using coding 'copilots.' We also discuss how AI assistance can help individual developers build whatever's on their mind, and whether we'll see $1 billion startups built by just one person. Tune in for the second half, where we discuss the next set of AI models, how engineering jobs change when AI produces most of the code, and whether AI will eventually improve itself. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
The CEO of GitHub takes us to the cutting edge of AI, where the AI is actually writing code for engineers.
And he tells us how this might change the economy.
And importantly, how its success might translate to the rest of the AI field.
All that and more is coming up right after this.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
We're joined today by the CEO of GitHub, Thomas Dohmke, who is here to talk to us about the field that is probably being
influenced and impacted by generative AI the most, and we're probably going to learn a lot about
what this era in the coding world is going to portend for the rest of the technology industry
and industry at large as generative AI gets smarter and spreads across more disciplines.
We're also going to talk about how it's going to change our economy by making it easier
to build tech products. I'm so excited to have him here. Thomas, welcome.
Hey, thank you so much for having me.
Thanks for being here. All right, first of all, just a bit of news that we got this week.
Elon Musk is boasting that xAI now has the most powerful AI training cluster in the world.
He says they've got 100,000 liquid-cooled H100s, which are Nvidia chips, on a single
RDMA fabric.
And basically he says that xAI is going to be on par with the rest of the AI field or better
by the end of this year.
A lot of people that I speak with in the AI industry sort of have this perspective of: we know Elon's doing something, we don't really know what he's doing.
First of all, I'm curious what you think about this AI cluster. And second, do you think
the rest of the tech world is underestimating Musk when it comes to his ability to build
AI? Well, sounds super cool, right? So many super cool chips. I don't think the industry is
underestimating him at all. I think, you know, he's looking at what OpenAI
and what other model companies are doing. They're training larger and larger models. I don't
think we have reached the scale limits of creating bigger models with more parameters.
And I think that's where Elon is looking: using all the data that he has through X,
you know, through Tesla and other sources, and then building even better models.
And are you of the belief that sort of the more compute and the more data that we throw at
these models the better they're going to get?
You talked about the scaling laws.
You believe they're going to apply to a certain extent?
Where do you feel this nets out?
I think so.
I think we are not at that scale limit yet.
I think we can build smarter models with more knowledge,
with more reasoning capabilities.
Multi-step, or chain of thought, I think, is what OpenAI calls it.
And there's a lot of problems that you can't solve with Copilot or ChatGPT today.
Even in just the coding world where things are fairly limited in output.
And there's lots of more creative work to be done.
So I think larger models will help us to build cooler stuff.
And so you mentioned OpenAI.
You're the CEO of GitHub, GitHub is a subsidiary of Microsoft,
and Microsoft is a partner of OpenAI.
So you're using the OpenAI technology.
And what you've done with it is quite fascinating.
Effectively, what you've done is built this co-pilot application,
which, the way I look at it, is an auto-complete for engineers.
The same way that when we're in Gmail and Google auto-completes our sentences,
you're auto-completing what engineers are writing.
And from your internal studies, you've found that using GitHub Copilot increases the productivity
of an engineer by 55%. Or, the way that I've heard you say it is
that basically engineers can complete their projects 55% faster than other engineers that
aren't using this AI-assisted copilot for code.
Is that right?
Yeah, so maybe let me time travel you back about four years.
It was June 2020, and we got access to GPT-3, the previous major model release that
OpenAI did, at a time when almost nobody outside the tech world was talking about
AI.
I don't think the term generative AI was a big thing back then.
And we played with that model.
It was 2020, so we were all on, you know, a call.
Everybody was in lockdown, and we saw that the model was able to generate code.
That's the thing that engineers write.
And we were surprised how good it actually was.
In fact, you know, we fed about 200 or so coding exercises, you know, things that engineers do when they apply for a job, into the model.
And even, you know, four years ago, it was able to solve 92% of those, right?
And that was the aha moment, you know, the light bulb went on.
And we built the first co-pilot.
And all that really does is it helps developers in their editor,
you know, the app where they're writing the code to think ahead,
to predict what's the next word, the next line,
maybe multiple lines, if it's a more complex algorithm.
And if you think about, you know, other than the AI capabilities themselves,
and what really is crucial here is that you have an assistant on your side
that prevents you from losing the flow.
You no longer have to context switch between the editor and your browser.
And, you know, engineers' browsers look the same as everyone else's browsers:
lots of tabs open, you know, the thing that I wanted to buy, and the vacation I'm planning, and the world news.
And there's always, you know, something else going on on Hacker News and whatnot.
And so that context switching, you know, between your editor and your browser, where you look things up,
where you look for algorithms or documentation and things like that, that is made easier with Copilot.
You can just stay where you are.
You get somebody, you know, the AI, thinking ahead of you.
And if you don't like it, you can always, you know, just keep typing or edit what you got out of co-pilot.
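To make that concrete, here is a hypothetical illustration (ours, not GitHub's): the developer has typed only the comment and the function signature, and the body below is the kind of suggestion a Copilot-style assistant might offer, accepted with the Tab key or typed over.

```python
# A hypothetical illustration of inline completion, not GitHub's actual output.
# The developer typed the comment and the def line; the body is the suggestion.

# return the weekday name, e.g. "Monday", for an ISO date like "2024-07-24"
def day_of_week(date_string: str) -> str:
    from datetime import datetime
    return datetime.fromisoformat(date_string).strftime("%A")

print(day_of_week("2024-07-24"))  # -> "Wednesday"
```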
Right.
So you're basically in the editor.
You're typing.
You're coding.
And the AI is suggesting code, similar to the way that if you're writing a sentence, you might get a suggestion in Gmail.
So, one of the things: I've spoken with CEOs who are like, we have an AI strategy,
we need to implement the AI strategy,
and they always talk about how code is the first place they're trying to do it.
But then I also ask them, like, are you seeing the impact?
And up until recently, I was sort of being met by a shrug, sort of saying, yeah,
our engineers are using it, but I don't see our engineering team increasing that much.
Or some are just rejecting the suggestions.
Some just prefer to do it the old way.
So I'm curious what you're seeing on your end, on the aggregate: is there uptake of Copilot,
and are you really seeing the change in organizations here?
Absolutely.
And you already mentioned the 55%, you know,
that came out of a case study where we asked 50 developers
with and without copilot to solve a coding problem.
And the group with co-pilot was 55% faster
and about 10% more successful than the group without co-pilot.
Now, obviously, when you go into companies,
they don't, you know, have 100 engineers do exactly the same thing
so that you can compare both groups against each other.
Most engineering managers and leaders have better ways of investing, you know, all their manpower.
And so we look at real customers.
We look at how often do developers accept the code that they're getting from co-pilot?
And, you know, how often do they press the tab key and get the job done faster than without it?
And the number we see there is 35% on average.
Some are a little bit lower, some are a little bit higher, depending on what programming language they use.
But that shows you that, you know, a third of the time, when the developer sees a suggestion, they also accept the suggestion. I think that already gives an idea of how much code is written with Copilot.
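As a rough sketch of the metric he is describing (our illustration, not GitHub's telemetry schema), the acceptance rate is simply accepted suggestions over suggestions shown, optionally sliced by language:

```python
# A minimal sketch of the acceptance-rate metric; the event log and field
# names are illustrative, not GitHub's actual telemetry.
events = [
    {"language": "python", "accepted": True},
    {"language": "python", "accepted": False},
    {"language": "python", "accepted": False},
    {"language": "go", "accepted": True},
]

def acceptance_rate(events, language=None):
    # keep only suggestions shown for the requested language (or all of them)
    shown = [e for e in events if language is None or e["language"] == language]
    return sum(e["accepted"] for e in shown) / len(shown) if shown else 0.0

print(f"{acceptance_rate(events):.0%}")  # 50% on this toy log; ~35% in GitHub's data
```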
We have Copilot deployed now in more than 50,000 organizations with more than 1.8 million paid seats. And it's companies like, you know, Infosys and Mercedes-Benz, Ernst & Young, PwC, but also, you know, the cloud natives: Mercado Libre, Etsy, HelloFresh, and others. If you look at those companies, they're all writing millions of lines of
code that is coming from the AI and no longer from the developer. The developer is still there
and the developer is controlling the copilot. He or she is the pilot that is, you know, flying
the plane. But the co-pilot is providing lots of code that the developers can just accept
and as such get the job done faster.
And so are these organizations building more? Have they told you? Because that's the one thing that I wonder: you have, let's
say, one third of this code accepted by engineers, but is that leading, practically,
to more stuff being built? Or, on the other side of it, is it leading companies to not have
to hire as many engineers, because the group that they currently have can get more done? What can
you tell us about that?
So let me give you an example. Accenture, for example, saw an 84% increase
in successful builds with the help of Copilot,
and they saw 88% retention of the suggested code
and most of their developers told us,
loud and clear, in surveys,
that Copilot allows them to stay in the flow,
that it allows them to spend less effort on repetitive tasks.
And Ernst & Young had 150 developers
commit more than 1.2 million lines of code
with the help of Copilot. So we see
these statistics all over the place
where companies are measuring over a period of time
an increase in productivity and increase in developer happiness
and ultimately an increase in output, right?
The build that gets deployed to a server:
Copilot is helping developers deploy more of those.
And are they seeing like a specific ROI there?
Are they like, all right, so it's one thing if you put more code in the project,
but we all know that that doesn't necessarily mean
like it's going to be a better product or that it's sort of getting a better return.
So what can you tell us about that?
So we, you know, at GitHub and at Microsoft, we obviously also use Copilot ourselves.
And we see, for one, from developers, from developer surveys, we do, you know,
developer experience surveys every month or so.
And we see that developers are clearly more satisfied with the use of AI.
They feel happy in their job.
And, you know, I have a lot of friends in the industry who are like, I would pay, you know,
$20 just for making my developers happy,
and the rest doesn't matter, it will come by itself.
But if you look at the overall metrics, engineering metrics,
we see more throughput.
We see basically more ideas,
you know, more work items in a planning system
taken and shipped, you know, to the customer.
And as such, you know, the throughput of software developers
is increasing through co-pilot.
But the challenge, you know, of these questions always is:
how do you compare teams against each other,
given that the thing that developers are building
is a really creative task.
You know, most feature requests,
most bug reports, most security vulnerabilities,
all these things are not easily comparable
to the next task and the next task.
That's why you see software projects
often, not always, often being behind on schedule
because it's really hard to estimate
how long work takes.
And then it's also harder to estimate
how much you actually saved.
If you didn't know how long it would take in the first place, it's also harder to figure out how much you saved.
But I'd say from our own data at GitHub and from customer evidence, we clearly see, you know, in that range that I mentioned 35% of productivity gains.
Yeah, but I also would say you could probably more easily measure it than you're letting on, because you could say, within an organization, how many of our projects were behind schedule before Copilot and how many of them are behind schedule with Copilot,
and then the delta is your improvement.
Well, I think that's true in the sense that, you know,
if you would do the same thing all over again,
you would be able to measure this,
but as the complexity of your software system grows,
and as your developers, you know, constantly have to tackle new challenges,
it is somewhat challenging to compare, you know,
the feature you ship in two months now
with the one you shipped two months ago.
Right.
Or the other way I would do it is maybe say,
you're all building the same thing,
you're on the same project,
building on the same timeline:
what time do you get out of work?
If you're getting out of work at five
and now you're getting out at three,
then I would say,
thumbs up, this thing is working.
But if you're getting out at five still,
then there might be a question there.
Is that a stupid test?
I think, you know,
I've been working in software
for over 20 years, and I think
the best indication you have
is just ask the developers
if they feel that
a tool that was put into their workflow is actually helping them.
And let's face it, most of us hate when somebody from IT comes and says,
oh, you know, here's this new tool and you have to use it.
It's prescribed by the company.
You know, don't touch my system.
I know what I'm doing is kind of a standard response.
And, you know, what I've seen with Copilot, ever since we first shipped the preview three years ago,
is that developers are excited about it.
Those that were skeptical at first, you know, after a few days of using it, have seen the
magic. You often see on social media that people are saying, I didn't believe
it until I used it, and then all of a sudden it understood what I'm trying to do. And part of
that is, you know, it's not only predicting the next word; it's actually taking everything in the
file above the cursor, below the cursor, you know, adjacent tabs. It takes what you have already
into account. And as such, it knows, you know, how you're writing code and what you're trying to
achieve. And so developers, after using this, are happy about, you know, having this AI available
to them. It doesn't even feel like AI, right? It's just a tool that is in your editor while
you're doing your work.
Okay, I'm buying your argument that this is something that is
helping engineers within companies. And then another thing: let's continue
saying that this is something that is helping and increasing efficiency by the numbers that you're
putting forth. You also have this other idea that it's not just within companies, but also that
it can empower individuals to build basically anything their mind can think of because the coding
becomes that much more efficient to do. You did a TED Talk recently and you gave this example
of wanting to build an app that just sort of logs every flight that you've taken. And no one would
ever build that type of app because like the amount of time it would take and the ROI is questionable.
If you want to build it for yourself, you said you were able to do it in the time that it would take you to finish a glass of wine.
Now, even if you're an extremely slow drinker, that's a pretty fast amount of time to build that type of application.
So talk a little bit about what you think this means on an individual level.
We've heard even in the tech world about, like, is there going to be one person that builds a billion dollar company, right?
And so I'm curious what you think about this and if this enables that.
The first thing that comes to mind when you think about individuals trying to code is that they often squeeze that into their day. You know, they have another thing that they're working on, whether it's, you know, me as CEO of GitHub being on podcasts or having meetings with my team or planning the next quarter, or it's developers that do hobby projects in between all their work projects and their personal lives. And so you're constantly
in this off-and-on switch, where you have to context switch from what you were working on before to the new thing.
And with the help of co-pilot and specifically the chat functionality, you can easily get back into the flow, into the thing you were working on.
You can ask questions.
You can even ask questions about what you built yesterday: describe that code from yesterday to me again.
And what I said at TED is you can do that in the language that you grew up with.
For me, it was German.
Obviously, many people speak English, but also many people on this planet do not speak English.
And when they want to build something, what they don't want to do first is to learn a language that they are not speaking every single day.
And so it democratizes that access, even more so if you think about a six- or seven-year-old that wants to build something cool, you know, and is fascinated by computers, as many kids are.
Maybe they have already seen some computer games, and now they want to go and build a little game.
I showed at a different conference a snake game,
you know, the game that was on the Nokia phones back in the day,
where you control the snake, and it eats an apple, and the tail gets longer and longer,
and you have to prevent it from eating its own tail.
You can do that by just asking a couple of questions in the chat functionality,
and it gives you all the code, and you can copy and paste that, you know, into the editor,
and then you can keep asking questions, iterating with the AI,
and you don't have to, you know, figure it all out by yourself.
You don't have to read a book first,
and you don't have to, you know, find the right web page where this is all described. And even then, if you find, you know, that web page with the code examples, you still have to make that work, while with Copilot you have help every step of the way. So I think there's a future coming soon where you can create any software you can imagine with just a few prompts written in human language. And as such, you know, you're building, you know, your own software ecosystem in the same way that, you know, we're designing our living rooms
or building Lego sets and whatnot, right?
We're expressing our creativity in so many ways as humans.
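For a sense of what that chat session hands back, here is a minimal, logic-only sketch of the snake rules he describes; rendering and key handling are left out, and this is our illustration, not the code generated on stage.

```python
# A logic-only sketch of the snake rules described above (illustrative only).
import random

WIDTH, HEIGHT = 10, 10

def step(snake, direction, apple):
    """Advance the game one tick. snake is a list of (x, y) cells, head first.
    Returns (new_snake, new_apple, still_alive)."""
    dx, dy = direction
    head_x, head_y = snake[0]
    new_head = ((head_x + dx) % WIDTH, (head_y + dy) % HEIGHT)  # wrap at walls
    if new_head in snake:                 # the snake ate its own tail: game over
        return snake, apple, False
    new_snake = [new_head] + snake
    if new_head == apple:                 # ate the apple: the tail grows...
        free = [(x, y) for x in range(WIDTH) for y in range(HEIGHT)
                if (x, y) not in new_snake]
        apple = random.choice(free) if free else None  # ...and a new apple spawns
    else:
        new_snake.pop()                   # normal move: the tail end drops off
    return new_snake, apple, True

# one tick: a 2-cell snake moving right eats the apple at (3, 1) and grows
print(step([(2, 1), (1, 1)], (1, 0), (3, 1)))
```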
Yeah, I think the line you said is: the floodgates of nerditude have swung wide open.
I like that.
Yeah, because, you know, like there's so many other things, right?
Like I mentioned Lego already.
Obviously, if you want to play a musical instrument, all you do is buy the instrument.
And these days, probably watch a couple of YouTube videos or find a webpage, and you can start playing.
Nobody is stopping you.
It might not sound great.
but as long as you do that in your home or on your own computer, you have all the creativity
that you have as a human available to you, and now you also have the tool that helps you
through that process.
Right.
So, the example of the $1 billion company created by one person, do you see that as
something that's feasible?
I think so.
I think that, you know, I've seen a bunch of examples of small companies, Instagram comes to mind, you know, that started really small, and by the time they got
acquired, they were still very small. WhatsApp is a similar example. And so I think, you know,
that can exist. The question with a single-person company is how they're also managing support
and accounting and all the other things that are outside of creativity. And maybe a copilot also helps
them with that, you know, answering support questions. But I think, you know, it's more fun if you
have a smaller team of people available that handles those things.
But less lucrative.
Yes.
So let me read you this example that I saw on Reddit of a coder that was talking a little bit
about how they've worked with generative AI to build.
So they say, it's mind-blowing how quick I can move now.
They're using Sonnet 3.5, which is an anthropic model.
I'm pretty sure I could implement copies of the technical parts of the most popular apps
in the app store 10 times as fast as I could before large
language models. I still need to make architectural and infrastructure decisions, but stuff like
programming the functionality is literally 10 times faster right now. And this is the process that
they use. The first thing they do is they think hard about the feature and probably discuss it
with Claude. The second thing they do is write a basic spec for the feature. It's just a few
sentences and bullet points and also iterate with Claude on the spec. And then they are sure to
provide Claude with all the relevant context and ask for the implementation, the code.
So basically what they're doing here is brainstorming an app with Claude,
specking out an app with Claude, and then having Claude write it.
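As a hedged sketch, that three-step loop might look something like this with Anthropic's Python SDK; the prompts are illustrative stand-ins for whatever the Redditor iterated on, and this assumes the anthropic package and an API key are set up.

```python
# A sketch of the brainstorm -> spec -> implement workflow described above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# 1) brainstorm the feature, 2) iterate on a short spec, 3) ask for the code
ideas = ask("I want a flight-log feature for my app. What should it cover?")
spec = ask(f"Turn these ideas into a short bullet-point spec:\n{ideas}")
code = ask(f"Here is the spec and the relevant context:\n{spec}\n"
           "Implement this feature in Python.")
print(code)
```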
I mean, that is remarkable.
Is this something that we're going to see be more common?
I think we're going to see it at a smaller scale.
For small projects, you can probably get there even without a lot of computer science,
computer engineering knowledge.
For larger projects, I think the step missing is the architect, you know, the software
engineering expert that knows which database to pick, you know, which cloud provider, how to make
the app, you know, scale from 10 users to 10 million users.
And the model can kind of help you with that by giving you options, right?
We have all seen that when you ask ChatGPT a question, an open-ended question,
it gives you options and explains to you kind of like how to get there,
but to then navigate this, you know, tree of information,
you still have to have, you know, subject matter expertise.
I don't think that goes away, but I think, you know,
if you have a well-defined task that you can describe, you know,
to a certain level, to a model or to a whole system like Copilot,
you will have that agent, if you want to call it that,
do the job for you to 90% of what you expect.
You know, the flip side of that is, if you think about when you work with other people, whether it's on software or on other projects:
how long can you have a person go by themselves, when you give them a task,
until they're going so far off track from what you actually wanted to achieve,
or from what you described to them?
Right? Like, more often than not,
we need the feedback loop as humans. We can't work in isolation
for too long until we're either completely off track or we come back,
you know, with a work result that isn't really what the person or manager or, you know,
our customer expected us to do.
And I think this is where, you know, we have the boundaries of these models.
If the human can't do that, because ultimately, you know, the customer or the manager
can't describe it to the degree that you can actually fulfill all the requirements,
then a model can't do that by itself either.
That's why we believe the human needs to be in the center.
The human needs to be involved at every step of the way
to make sure that we're not heading in the wrong direction.
But this is exactly what the person is describing,
that they're not only asking Claude to write the code,
but they're dialoguing with the AI bot
about the different spec and the decisions
and how to set up the components and things like this,
and then it builds only at the last step.
You know, I've called this, I don't know where it was, the second brain.
It's kind of like we have an external memory chip outside of our brain
that gives us all the information that we can't store ourselves.
Even the things that we learn
in university and high school,
and even before that,
we can't really store;
we often forget these things.
And so the AI is helpful
to retrieve this information again.
You just need to know how to ask the right question, and then it works with you.
That's why we ultimately called it Copilot.
It helps you, you know, to have fun with the things that you want to work on.
And it takes over, you know, the boilerplate, as we call it in coding, you know,
the stuff that surrounds all the creative part of the process.
Right.
Now, with Gmail, so Gmail allows me to write emails within the Gmail application
and will suggest some text for me as I write, sometimes I accept it, sometimes I don't.
But also, like, I could just go to Claude
and ask Claude to write the email.
So I'm thinking, in your circumstances,
like, is there a reason why people should be using,
like, the co-pilot within GitHub as opposed to, like,
having this conversation with, let's say, an Anthropic bot
and just having that write the code
and then dumping it into the code editor?
By the way, you can also use the AI to summarize the email.
So one side writes the email with AI,
and the other side summarizes the email with AI,
at which point you can ask the question: why not just send the prompt to the other person
and save the time on all the friendliness and the salutations and whatnot that we put into
emails, because it's...
Or just become proficient at writing concisely. But I think, as a journalist, you know,
I know that that part of the world is more difficult to proselytize than
others. Sorry, go ahead.
Yeah, you know, and I think to some degree that will happen.
And so, to some degree, we have so much information around us now that the summary is good enough, if you only want to read the headline and the summary and not dive into a 10,000-word article, because it ultimately means you have more time for other things.
We are all dealing with limited attention, limited lifetime ultimately.
And so if I can shortcut some of these things, that means I have more time for other things.
Coming back to your question:
why not use the generic chatbot?
The power of Copilot is that it lives
in the work environment of the developer.
So, yeah, you can copy and paste everything
that you see in front of you into a generic chatbot
and have it give you an answer,
but it's much more powerful to have the chatbot sit within your environment
where it knows, you know, what files are open,
it knows what you wrote, you know, before that.
It can look at adjacent tabs.
It can even look, in the developer world, at the debug output, as we call it, and the console and error messages and those kinds of things.
And so it has much more context available that helps it to answer the question, you know, within the specific context of the project you're working on.
You know, a very simple example is that it knows, you know, whether you like your variable names in camel case or capitalized,
or whether you write in German or English, by just looking at the context of your file.
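A hedged sketch of that context-assembly idea, ours rather than Copilot's actual prompt format: the assistant sees far more than the line under the cursor.

```python
# A sketch (not Copilot's real prompt format) of assembling editor context:
# code around the cursor, snippets from adjacent tabs, and console output.
def build_prompt(file_text, cursor, adjacent_tabs, console_output=""):
    parts = []
    for name, snippet in adjacent_tabs:            # neighboring open files
        parts.append(f"# from {name}:\n{snippet}")
    if console_output:                             # recent errors or debug output
        parts.append(f"# console output:\n{console_output}")
    parts.append(file_text[:cursor])               # code above the cursor
    parts.append("<CURSOR>")                       # where the completion goes
    parts.append(file_text[cursor:])               # code below the cursor
    return "\n\n".join(parts)

print(build_prompt("def day_of_week(date):\n    ", 23,
                   [("utils.py", "import datetime")], "NameError: ..."))
```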
One question for you.
So basically you can write the prompt of what you want and the code will be developed on the back end.
Is there going to come a point where we're not going to need code at all to build?
Because, like, if you can just prompt, then why do we really need to be in the code at all?
You might say we're already at that point, to some degree, where, you know, when you go and ask ChatGPT a question,
you get an answer. And if you ask a question to plot a chart, for example, or do a mathematical
calculation, it actually generates a Python script that then, you know, plots that data into
a chart, and it shows you the chart. It still shows you that intermediate step, where you see
the Python script, but they could as well hide that, and you'd just see the chart output, right?
Like, in many ways, ChatGPT is giving you an answer without you ever having to worry how that was generated.
And so, yes, we are going to see computer systems where large language models are just one building block in addition to code.
Or maybe it's multiple language models and image models and, you know, time series models and whatnot, plus code combined to generate all the output that the developer or the user expects.
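The intermediate script he mentions might look something like this (the data is illustrative, and matplotlib is assumed); the user may only ever see the resulting chart.

```python
# The kind of throwaway script a chatbot generates behind the scenes to
# answer "plot this for me", with the script itself often hidden.
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023, 2024]   # illustrative data, not real figures
values = [1.0, 1.8, 3.1, 5.2, 8.9]

plt.plot(years, values, marker="o")
plt.xlabel("Year")
plt.ylabel("Value")
plt.title("Generated chart")
plt.show()
```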
Do we still need engineers then to code?
Yeah, because, well, first of all, you know, there's millions of lines of code out there that still have to be maintained.
You know, one of the examples I like to give is that most banks are still running COBOL code.
That's a programming language invented in the late '50s, when Eisenhower was the president.
Right.
But this is maintenance.
I'm talking about like to build new things.
Like, is there going to come a time where we're just going to have prompters instead of coders?
Well, most developers work on an existing code base.
So I would push back a little bit on maintenance.
We're building on top of an existing world.
I think developers have always moved up the abstraction ladder. You know,
we used to build it all ourselves.
Then came the Internet.
We started sharing software components, so-called open source.
You know, nowadays, most applications are sitting on a stack of a thousand components already,
and you're building the 10% layer on top of that.
Now, that 10% layer, you know, might, you know, at some point be written 80%
by AI, or replaced by AI, but that means you have more time for the remaining 20% on top of
that. The pile is always getting bigger, and the developers are still going to have enough
work carved out for them. In fact, I'd say, you know, AI has created more work for developers,
because now somebody has to build all these AI systems, and we're not at all at a point where
we can just, you know, have an AI engineer, quote unquote, do the job of a human. That
doesn't exist. And even if it exists, it works great in a demo, but it doesn't actually do
any real work.
Okay, so you're in a very interesting spot, because you're running GitHub, and
GitHub is part of Microsoft. And GitHub with co-pilot might be the perfect company to implement
generative AI, because it's one of those things where, like, there's usually a right answer
to the question. There's libraries and libraries of code stored on your platform.
Effectively, that makes it not easy, but more straightforward, to train on. And when people are
writing, the large language model can rely on that history to predict or to suggest what the
next bit of code should be. So it's almost like the most perfectly suited
discipline to use large language models for is code. And in fact, if you looked at the discussion
of generative AI, recently, there's been a lot of discussion of how it hasn't really proved
its economic value outside of coding. Now, that's the bear case. But anyway, I'm throwing it out
there for a point of discussion. The question that I have, and lots of people have, is:
what you're seeing in your field, where it improves the employees' effectiveness by 55%,
makes them happy, or allows them to do more and build more, is that generalizable to other fields?
And if so, if you think that it's the case, then why? Because that's the bet that Microsoft is making,
right? It's not just coding, it's everything.
You know, I think we have forgotten how many things AI already does for us, or, you know, we are not realizing it. You know, the image recognition
in my car, in the...
No, no, no, but that is... we're talking about generative AI in particular. So
we can go on for days about how AI has been used for feed ranking and computer vision, fine,
but the big moment right now is all about generative, and generative
is something that GitHub has ridden, with Copilot, to this amazing moment.
But that's the question.
Is that type of technology, in particular, transferable elsewhere?
I think one scenario that comes to mind that we are already using at GitHub is support.
And so, you know, if you look at our support system today, you actually find the GitHub support
copilot that tries to help you before you, you know, submit your ticket to a human.
And it generates answers.
So it's generative AI that uses the same large language models, to stay within the scope of your question.
And we see that the number of tickets that get solved that way is above 50%.
So, you know, 50% of the questions that go through the support copilot get solved by the support copilot
and do not get submitted to a human.
And so as such, it makes, you know, supporting our developers, our customers, more efficient for us as a company.
So I say, you know, that's definitely another scenario where we see the efficiency gains for us as a company.
Similarly, you know, we have an internal tool called Octobot, you know, like Octocat, our logo, that I wear on my t-shirt.
And it helps, you know, our folks internally to solve IT problems.
And our IT team is now getting more than three hours per IT supporter back through that internal tool,
just by helping
employees
to solve their own IT issues
instead of having to talk
to a human first.
And it's all along the same lines, which is: generating text, you know, that helps you solve the task that you have a problem with, whether that's in support and IT or that's in coding, so you can focus on the things that you're really getting value out of.
Okay, I want to
talk about what the next set of models might bring, but let's take a break before we do that.
So we'll be back right after this to talk about the next set of models and a bunch of other
stuff. So stay tuned. We'll be back right after this.
Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business, tech
news, and original stories to keep you in the loop on what's trending. More than two million
professionals read The Hustle's daily email for its irreverent and informative takes on business and
tech news. Now, they have a daily podcast called The Hustle Daily Show, where their
team of writers break down the biggest business headlines in 15 minutes or less and explain
why you should care about them. So search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now. And we're back here with GitHub CEO Thomas Dohmke.
We're talking about everything that AI can do and sort of how the advances might help propel
not just coding, but everything else forward. All right. So here's what I'm hearing about the
next set of models that are coming. And I'm talking about, like, the GPT-5s
and the Claude 4s and whatever it might be.
The one thing that I'm hearing is that they're going to be much, much better for coding.
And I'm curious to hear your perspective on how much further there is to go for these models
to be able to handle code.
And what you think these models getting even better at coding might portend
for what you're seeing with the developer community today.
We believe one of the things these newer models will be able to
help with is what we call agentic abilities: solving multi-step tasks. One classic example in software
today in small and big companies is that you don't have to go for too long until you have
tech debt, until you have code that is old and needs to be maintained, that needs to be updated,
that needs to be scanned for security vulnerabilities. And if you look into the backlogs of most
engineers today, on the one side, they have all the innovation, you know, all the cool stuff
they want to work on.
And on the other side, they have all the maintenance tasks.
And one of them is burning down security vulnerabilities that have stacked up over time.
And so we think, you know, the next generation of models
will be helping with burning down these security vulnerabilities.
In fact, you know, we already have a feature in market that we call autofix, which helps with
known issues and burns those down.
But right now it's only working in a single file, and once you have more powerful models, you can do that across multiple
files, basically solving the issue not just in one place but in multiple places.
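As a hedged sketch of that multi-file step, assuming a simple pattern-style fix with a stubbed-out model call (this is not GitHub's autofix implementation):

```python
# A sketch of burning down one vulnerability across many files; fix_in_file is
# a stand-in for a model call, and the vulnerability dict is illustrative.
def fix_in_file(vulnerability, source):
    """Stand-in for the model: here, just a literal pattern replacement."""
    return source.replace(vulnerability["pattern"], vulnerability["replacement"])

def burn_down(vulnerability, sources):
    """sources maps file path -> file text; returns proposed patches per file."""
    patches = {}
    for path, text in sources.items():
        patched = fix_in_file(vulnerability, text)
        if patched != text:             # only propose files that actually changed
            patches[path] = patched
    return patches                      # the pilot reviews these before merging

vuln = {"pattern": "md5(", "replacement": "sha256("}
print(burn_down(vuln, {"a.py": "h = md5(data)", "b.py": "x = 1"}))
```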
And how do you teach a model to be able to do that? Like, you know, obviously, reasoning and agentic stuff, as I've heard it talked about,
is breaking stuff down to its component parts and then learning how to work on it one by one. I mean, it seems
sort of antithetical to the way that LLMs work today, or not antithetical but very different, which is that
they just kind of take a prompt and then spit back a bunch of information.
You know, you described it yourself earlier: this multi-step approach is that in the first
step you get an answer from the model that, you know, describes the solution.
And then what you typically do when you reason with the model is you ask it, you know,
about more about the first step and then about the second step.
And so you're drilling yourself down into this tree of different steps.
And I think as we, you know, move forward, agents will be able
to do that to a certain degree themselves.
And the tricky part is then to figure out:
when do I have to come back to the human
and ask the question, where I need the pilot to make the decision
and not have the copilot basically go down the wrong path?
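A minimal sketch of that loop, assuming a planner and a per-step confidence signal (both hypothetical; this is not Copilot's implementation): the agent works through steps on its own and hands control back to the pilot when it is unsure.

```python
# A sketch of a multi-step agent that returns to the human at low confidence.
# plan() and execute() stand in for model calls; the threshold is illustrative.
def run_agent(task, plan, execute, ask_human, threshold=0.7):
    results = []
    for step in plan(task):                  # break the task into steps
        outcome, confidence = execute(step)  # try the step, get a confidence
        if confidence < threshold:           # unsure: hand back to the pilot
            outcome = ask_human(step, outcome)
        results.append(outcome)
    return results

# toy stand-ins so the sketch runs end to end
steps = lambda task: [f"{task}: step {i}" for i in range(3)]
do = lambda step: (f"did {step}", 0.9 if "step 1" not in step else 0.5)
human = lambda step, outcome: f"human fixed {step}"
print(run_agent("refactor", steps, do, human))
```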
Okay, this is sort of a controversial question in the AI world,
but I'm going to ask it to you.
Do you think we're going to get to the point where,
I mean, we're talking right now about effectively AI taking the reins
and starting to build and then coming back to the human,
Do you think we're going to get to a point where AI is going to just improve itself?
It seems like a bit of a trick question.
I'd say, you know, obviously we have seen with AlphaGo and AlphaFold that it's already happening.
AI has improved itself, right? It has learned to play Go, and then got as good as the best Go players, and in fact got better.
And what we also saw afterwards is that the best Go players figured out how to still beat the model.
And so even though there was a period of time when everybody was kind of
depressed that the game was now ruined, the best players figured out they can still
beat AlphaGo.
And so I think, you know, there's definitely going to be problems that AI will be able to
solve for us.
As I mentioned, you know, burning down security vulnerabilities as an example.
And I think most folks will be very happy about that, because it then gives them more time
to work on stuff that they actually want to work on instead of doing the same security
vulnerability fix over and over again in multiple files.
You know, is the AI going to get to that point? I don't know.
And I think, you know, if I knew the answer and how to get there, I'd probably, you know,
build that company myself.
But, you know, I'm joking a little bit.
But, like, I think, you know, we'll see over the next few years if AI can not only do what it is instructed to do by humans,
but can actually get to a place where it can create by itself, in the sense of, you know, not getting an instruction first,
and can kind of, you know, produce ideas.
Today, you know, while it may appear that a Claude or ChatGPT is generating stuff,
at the end of the day, it's just predicting the next word,
and then the next word after that.
It has no consciousness, because it cannot say no to you.
It can only predict an answer that says, I don't want to predict the answer.
But, you know, it still gives you an output.
It cannot be silent, if you will.
Right.
And look, it wasn't a trick question.
Like, the question is not, can the model sort of learn to get better as it goes, right?
Which is still following the model.
Like, the question is, like, the basic design of these AI programs, can AI learn to make them even better?
Like, can AI take a GPT-4 and turn it into GPT-5, right?
That's the real question.
You think that's going to happen?
I think that's like predicting the future.
And I don't know if I can do that.
I don't know.
I haven't seen, you know, any indication that that's possible today.
But, you know, maybe I'm on your podcast again in three years
and you're telling me, you see, Thomas, the AI is interviewing you.
Maybe the AI is going to do a podcast between the two of us.
No, but like, look, you know, if you look at the technology today,
it's super powerful. It helps developers and support agents
and IT employees to achieve their job faster,
which ultimately means, you know, they have more time for other things and I think that's
remarkable. And I'm not too worried about, you know, AI taking over these jobs and replacing
them with a fully automated employee.
Now, another thing that I find interesting is this sort of constraint on computing.
And I was looking at your Twitter, and I saw that you recently praised the fundraising of a company
called Etched, which has built chips that are purpose-built for inference, which is effectively
running these AI models. They're extremely expensive to run now, but it's much cheaper
to run on Etched's chips. How important do you think hardware innovation is for these technologies
to be able to be cost-effective and grow to the point that, sort of, the industry is betting on?
And then, on that note, what do you think about Etched?
It's incredibly exciting that we have silicon companies in Silicon Valley again. I think that's number
one: there's innovation in silicon, and it's not only Etched; there's a bunch of companies that are
going in the same direction. And I think it's just fascinating to see, after, you know, we believed that
Moore's law is over and there's no more innovation in chips, we are back, you know, to a world where
there's innovation across the whole stack. We talked a lot about
models. We talked about co-pilot and we talked about agents, which, you know, is going up
the stack, but there's also innovation going down the stack, you know, from the model to the data
center all the way down to the chip. And I think, you know, we are going to see much more on
that in the coming years. The cost to run inference will come down with these specialized chips.
The models themselves become more efficient. You know, GPT-4o mini was, you know, announced last week,
which is much faster and much more efficient,
and I think we are going to have innovation
on the top end where bigger models come out
and do more stuff and we are going to have innovation
on the efficiency side
where the functionality that a few years ago
required more GPUs and more time
is now done much faster
and I think it's incredibly important to have that
because the faster you get the response
the faster you're able to iterate
whether you do that manually
you know, while you're asking questions,
or whether you're doing that automatically in a copilot
where you need to do multiple steps to generate code
or, you know, in copilot,
we're always generating 10 responses.
You can actually see them in your editor if you open a side panel.
That's because we then want to pick the best one for the context you're working on,
and you can cycle through those, right?
So if you can get those 10 faster,
you actually probably get a higher acceptance rate from the developer,
because they saw the suggestion before they kept typing whatever they were typing.
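A tiny sketch of that "generate several, surface the best" idea; the count and the scoring function are illustrative, not Copilot's internals.

```python
# A sketch of n-best completion: sample several candidates, rank them, show
# the best inline, keep the rest for the side panel. generate/score are stubs.
import random

def best_completion(generate, score, prompt, n=10):
    candidates = [generate(prompt) for _ in range(n)]  # e.g. 10 model samples
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[0], ranked[1:]      # best shown inline, rest in the panel

# toy stand-ins: random "completions" scored by length
generate = lambda prompt: prompt + " " + "x" * random.randint(1, 5)
score = len
best, rest = best_completion(generate, score, "def foo():")
print(best, len(rest))
```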
Yeah, and you just mentioned that OpenAI has reduced the cost to use GPT-4o.
I got a question when I asked, like, what should I ask you?
And Alex Wilhelm from TechCrunch, he asked me, he's like, why is GitHub Copilot so cheap
when the perceived value is so high?
Why not add a zero?
What do you think?
We're really happy about the price point.
I think there's a balance to find, with every price point and every new product, between mass adoption
and the value you're getting out of the product.
You know, $10 for individuals and $19 for employees in the company per month is a great price for all these productivity gains.
It has allowed us to get to 1.8 million paid seats.
And we're really happy about the competitiveness of that price point.
Are we going to get to a point where most of the code that's being generated is generated by AI
and the developers are basically auditors of that code?
I believe so, yeah.
I said actually two years ago at a conference that my prediction back then was 80% of code is going to be written by AI in five years.
So I guess three years to go for that to become true.
Last year, we already said that, on average, 46% of code is written by Copilot in those files where it's enabled.
And for some languages, over 60%.
And again, I don't think that's a bad thing.
I think it's a great thing because it means the developers have more time to write the thing that actually matters.
The thing that is creative,
the thing that's new,
the thing that is differentiated.
And they don't have to write
all the boilerplate anymore.
And then, all right, last question for you.
You said that you have 100 million users on GitHub today,
and you think that you're going to get to a billion with this.
So I'm curious, like, why you think AI is going to drive so many people to start coding,
and then what does that mean for the broader economy?
I believe that today
the biggest adoption blocker
is the complexity of the
technology, the complexity of a language that is not the language that we learn and use every
single day when we communicate. Programming languages are great because they're deterministic.
You know, the same thing does the same, has the same output every time you write it.
But it's hard to learn. It's hard to learn when you're a kid. It's much harder to learn than
playing an instrument or drawing an image because you have to learn the thing first before you can
produce anything. And then you still have to develop
your craft and do it over and over again to actually get good at it.
And I think AI is going to accelerate that massively.
And, you know, one billion developers by 2030 or so is a little bit under 10% of the
population, depending on where the world's population is going.
That's actually a low number if you think about it, because we all use computers every
single day, yet we are not able to create the thing.
Most people are not able to create the thing that runs on those computers.
And I think, you know, most people are able to go to
Home Depot and buy a screwdriver and put a screw in a wall.
And I think it's just going to be a fundamental skill of humans to be able to control the
computer and create something on them.
Whether then they use that and become a professional software developer that makes money
by doing so, that's a very different question, in the same way that not everybody that
has some home improvement skills is becoming a professional contractor,
a professional musician, or a professional artist.
Right, like those things are decoupled. And I think for our economy it means that we have a much higher
literacy in computer engineering and computer science and software, ultimately. And that means we
will be able to solve more problems, because ultimately we strongly believe at GitHub that most
human progress is going to be achieved with the help of software, and without software
developers, we're not going to, you know, climb the evolution ladder.
So people used to say learn to code to people who lost their jobs, first as a helpful suggestion,
and then it was kind of an insult, and then they started to wonder whether maybe they shouldn't be learning
to code because that's going to be taken over by AI. But your stance on this is: no, we're still
going to need the coders.
And we still have code. Look, AI and, you know, Copilot are not going
to replace the code. The code is just lower in the abstraction level, in the same way that, you know, the
chip in your computer still has an instruction set. You know, we used to do punch cards, and then
we had assembly language; now I'm going into very technical stuff. But, you know, the chip at the
end of the day is still, you know, lots of little switches, switching between zeros and ones. That doesn't go
away; it just moves into a layer where it doesn't bother us as much, and it doesn't, you know,
keep us from building the things we want to build. And I think that's the true power of AI.
Very cool. Well, I think you should release this app, this flight tracker app that
you worked on. I would definitely use it.
It looks, you know, it looks horrible. It solves one purpose.
You know, I flew yesterday, and I know whether I've been on that specific plane.
Every plane has a tail number, you know, like a license plate. And so you can kind of
track: oh, I've been on this flight, on that exact plane, before. It looks horrible. It's kind of like
asking me to go on stage with Taylor Swift and sing with her.
I wouldn't do that either, even though I sing in the shower, right?
And I think that kind of describes the intention here well: one thing is
the freedom of being creative, and the other one is being so good that you can become a
professional.
Right.
Well, Thomas, look, you're right.
The floodgates of nerditude have swung wide open, and I'm totally into it.
Thanks so much for joining.
Great to see you.
Thank you so much.
It was super fun.
Awesome. All right, everybody, thank you so much for listening. We'll be back on Friday breaking down the news with Ranjan Roy, and we'll see you next time on Big Technology Podcast.